Binding Quiddities

Excerpt from The Combination Problem for Panpsychism (2013) by David Chalmers:

[Some] versions of identity panpsychism are holistic in that they invoke fundamental physical entities that are not atomic or localized. One such view combines identity panpsychism with the monistic view that the universe itself is the most fundamental physical entity. The result is identity cosmopsychism, on which the whole universe is conscious and on which we are identical to it. (Some idealist views in both Eastern and Western traditions appear to say something like this.) Obvious worries for this view are that it seems to entail that there is only one conscious subject, and that each of us is identical to each other and has the same experiences. There is also a structural mismatch worry: it is hard to see how the universe's experiences (especially given a Russellian view on which these correspond to the universe's physical properties) should have anything like the localized idiosyncratic structure of my experiences. Perhaps there are sophisticated versions of this view on which a single universal consciousness is differentiated into multiple strands of midlevel macroconsciousness, where much of the universal consciousness is somehow hidden from each of us. Still, this seems to move us away from identity cosmopsychism toward an autonomous cosmopsychist view in which each of us is a distinct constituent of a universal consciousness. As before, the resulting decomposition problem seems just as hard as the combination problem.

Perhaps the most important version of identity panpsychism is quantum holism. This view starts from the insight that on the most common understandings of quantum mechanics, the fundamental entities need not be localized entities such as particles. Multiple particles can get entangled with each other, and when this happens it is the whole entangled system that is treated as fundamental and that has fundamental quantum-mechanical properties (such as wave functions) ascribed to it. A panpsychist might speculate that such an entangled system, perhaps at the level of the brain or one of its subsystems, has microphenomenal properties. On the quantum holism version of identity panpsychism, macrosubjects such as ourselves are identical to these fundamental holistic entities, and our macrophenomenal properties are identical to its microphenomenal properties.

This view has more attractions than the earlier views, but there are also worries. Some worries are empirical: it does not seem that there is the sort of stable brain-level entanglement that would be needed for this view to work. Some related worries are theoretical: on some interpretations of quantum mechanics the locus of entanglement is the whole universe (leading us back to cosmopsychism), on others there is no entanglement at all, and on still others there are regular collapses that tend to destroy this sort of entanglement. But perhaps the biggest worry is once again a structural mismatch worry. The structure of the quantum state of brain-level systems is quite different from the structure of our experience. Given a Russellian view on which microphenomenal properties correspond directly to the fundamental microphysical properties of these entangled systems, it is hard to see how they could have the familiar structure of our macroexperience.
The identity panpsychist (of all three sorts) might try to remove some of these worries by rejecting Russellian panpsychism, so that microphenomenal properties are less closely tied to microphysical structure. The cost of this move is that it becomes much less clear how these phenomenal properties can play a causal role. On the face of it they will be either epiphenomenal, or they will make a difference to physics. The latter view will in effect require a radically revised physics with something akin to our macrophenomenal structure present at the basic level. Then phenomenal properties will in effect be playing the role of quiddities within this revised physics, and the resulting view will be a sort of revisionary Russellian identity panpsychism.

Glossary of Qualia Research Institute Terms

This is a glossary of key terms and concept handles that are part of the memetic ecosystem of the Qualia Research Institute. Reading this glossary is itself a great way to become acquainted with this emerging memeplex. If you do not know what a memeplex is… you can find its definition in this glossary.

Consciousness (standard psychology, neuroscience, and philosophy term): There are over a dozen common uses for the word consciousness, and all of them are interesting. Common senses include: self-awareness, linguistic cognition, and the ability to navigate one's environment. That said, the sense of the word in the context of QRI is more often than not: the very fact of experience, the fact that experience exists and that there is something it feels like to be. Talking loosely and evocatively, rather than formally and precisely, consciousness refers to "what experience is made of". Of course, formalizing that statement requires a lot of unpacking about the nature of matter, time, selfhood, and so on. But this is a start.

Qualia (standard psychology, neuroscience, and philosophy term): This word refers to the range of ways in which experience presents itself. Experiences can be richly colored or bare and monochromatic, they can be spatial and kinesthetic or devoid of geometry and direction, they can be flavorfully blended or felt as coming from mutually unintelligible dimensions, and so on. Classic examples of qualia include the redness of red, the tartness of lime, and the glow of bodily warmth. However, qualia extend into categories far beyond the classic examples, beyond the wildest of our common-sense conceptions. There are modes of experience as different from everything we have ever experienced as vision qualia are from sound qualia.

Valence / Hedonic Tone (standard psychology, neuroscience, and philosophy term): How good or bad an experience feels; each experience expresses a balance between positive, neutral, and negative notes. This is the aspect of experience that accounts for its pleasant and unpleasant qualities. The term is evocative of pleasant sensations, such as warming up one's cold body with a blanket and a cup of hot chocolate. That said, hedonic tone refers to a much broader class of sensations than just the feeling of warmth. For example, the music-appreciation enhancement produced by drugs can be described as "enhanced hedonic tone in sound qualia". Hedonic tone can appear in any sensory modality (touch, smell, sight, etc.), and more generally in every facet of experience (such as cognitive and proprioceptive elements, themselves capable of coming with their own flavor of euphoria/dysphoria).
Experiences with both negative and positive notes are called "mixed"; these are in fact the most common kind.

Helpful Philosophy

Ontology (standard high-level philosophy term; ref: 1): At the most basic level, an ontology is an account of what is real and what is good.

Epistemology (standard high-level philosophy term; ref: 1): The set of strategies, heuristics, and methods for knowing. In the context of consciousness research, what constitutes a good epistemology is a highly contentious subject. Some scientists argue that we should only take into account objectively-measurable third-person data in order to build models and postulate theories about consciousness (cf. heterophenomenology). At the other extreme, some argue that the only information that counts is first-person experience and what it reveals to us (cf. new mysterianism). Somewhere in the middle, QRI fully embraces objective third-person data, and along with it recognizes the importance of skepticism and epistemic rigor when it comes to deciding which first-person accounts should be taken seriously. Its epistemology does accept the information gained from alien state-spaces of consciousness, as long as it meets certain criteria. For example, we are very careful to distinguish between information about the intentional content of an experience (what it was about) and information about its phenomenal character (how it felt). As a general heuristic, QRI tends to place more value on trip reports that emphasize the phenomenal character of the experience (e.g. "30Hz flashes with slow-decay harmonic reverb audio hallucinations") than on those that emphasize intentional content (e.g. "the DMT alien said I should learn to play the guitar"). Ultimately, first-person and third-person data are complementary views of the same substrate of consciousness (cf. dual-aspect monism), and so both are equally necessary for a complete scientific account of consciousness.

Functionalism (standard high-level philosophy term; ref: 1, 2): In Philosophy of Mind, functionalism is the view that consciousness is produced by (and in some cases identical with) not only the input-output mapping of an information-processing system, but also the internal relationships that make that information-processing possible. In light of Marr's Levels of Analysis (see below), we could say that functionalism identifies the content of conscious experience with the algorithmic level of analysis. Hence this philosophy is usually presented in conjunction with the concept of "substrate neutrality", which posits that the specific material makeup of the brain is not necessary for consciousness to arise from it. If we implemented the same information-processing functions that are encoded in the neural networks of a brain using rocks, buckets of water, or a large crowd instantiating a large computer, we would also generate the same experiences the brain generates on its own. Importantly, functionalism tends to deny any essential role of the substrate in the generation of consciousness, and will typically also deny any significant interaction between levels of analysis (see below).

Eliminativism (standard high-level philosophy term; ref: 1, 2, 3): In Philosophy of Mind, eliminativism refers to a cluster of ideas concerning whether the word "consciousness" is clear enough to be useful for making sense of how brains work. One key idea in eliminativist views is that most of the language that we use to talk about experiences (from specific emotions to qualia) is built on top of folk psychology rather than physical reality.
In a way, terms such as "experience" and "feelings" are an interface for the brain to model itself and others in a massively simplified but adaptive way. There is no reason why our evolved intuitions about how the brain works should even approximate how it really works. In many cases, eliminativists advocate starting from scratch and abandoning our intuitions about experience, sticking to hard physical and computational analysis of the brain as empirically measured. This view suggests that once we truly understand scientifically how brains work, the language we will use to talk about them will look nothing like the way we currently speak about our experiences, and that this change will be so dramatic that we would effectively start thinking as if "consciousness never existed to begin with".

Presentism (standard high-level philosophy term; ref: 1): The view that only the present is real, the past and the future being illusory inferences and projections made in the present. Presentism often posits that change is a fundamental aspect of the present, and that the feeling of the passage of time is based on the ever-changing nature of reality itself.

Eternalism (standard high-level philosophy term; ref: 1): The view that every here-and-now in reality is equally real. Rather than thinking of the universe as a "now" sandwiched between a "past" and a "future", eternalism posits that it is more accurate to describe pairs of moments as simply having a "before" and "after" relationship, with neither of them being in the future or the past. Some of the strongest arguments for eternalism come from Special and General Relativity (see: Rietdijk–Putnam argument), where space-time forms a continuous 4-dimensional geometric shape that stands together as a whole, and where any notion of a "present" is only locally valid. In some sense, eternalism says that all of reality exists in an "eternal now" (including your present, past, and future selves).

Personal Identity (standard high-level philosophy term; ref: 1): The relevant sense of this term for our purposes refers to the set of questions about what constitutes the natural unit for subjects of experience. Questions such as: "will the consciousness who wakes up in my current body tomorrow morning be me?", "if we make an atom-by-atom identical copy of me right now, will I start existing in it as well?", "if you conduct a Wada Test, is the consciousness generated by my right hemisphere alone also me?", and so on.

Closed Individualism (coined by Daniel Kolak; ref: 1): In its most basic form, this is the common-sense personal identity view that you start existing when you are born and stop existing when you die. According to this view, each person is a different subject of experience with an independent existence. One can believe in a soul ontology and be a Closed Individualist at the same time, with the correction that you exist as long as your soul exists, which could be the case even before birth or after death.

Empty Individualism (coined by Daniel Kolak; ref: 1, 2, 3): This personal identity view states that each "moment of experience" is its own separate subject. While it may seem that we exist as persons whose existence spans decades, Empty Individualism does not associate a single subject with each person.
Rather, each moment a new "self" is born and dies, existing for as long as the conscious event takes place (anywhere between a femtosecond and a few hundred milliseconds, depending on which scientific theory of consciousness one believes in).

Open Individualism (coined by Daniel Kolak; ref: 1, 2, 3, 4): This is the personal identity view that we are all one single consciousness. The apparent partitions and separations within this universal consciousness, on this view, are the result of partial information access from one moment of experience to the next. Regardless, the subject who gets to experience every moment is the same. Each sentient being is fundamentally part of the same universal subject of experience.

Goldilocks Zone of Oneness (QRI term; 1, 2, 3): Having realized that there are both positive and negative psychological aspects to each of the three views of personal identity discussed (Closed, Empty, and Open Individualism), the Goldilocks Zone of Oneness emerges as a conceptual resolution. Open Individualism comes with a solution to the fear of death, but it can also give rise to a sort of cosmic solipsism. Closed Individualism allows you to feel fundamentally special, but also disconnected from the universe and fundamentally misunderstood by others. Empty Individualism is philosophically satisfying, but it may come with a sense of lack of agency and the fear of being a time-slice that is stuck in a negative place. The Goldilocks Zone of Oneness posits that there is a way to transcend classical logic in personal identity, and that the truth incorporates elements of all three views at once. In the Goldilocks Zone of Oneness one is simultaneously part of a whole but also not the entirety of it. One can relate to others by having a shared nature, while also being able to love them on their own terms by recognizing their unique identity. This view has yet to be formalized, but in the meantime it may prove to be pragmatically useful for community-building.

The Problem of Other Minds (standard high-level philosophy term; ref: 1, 2): This is the philosophical conundrum of whether other people (and sentient beings in general) are conscious. While your own consciousness is self-evident, the consciousness of others is inferred. Possible solutions involve technologies such as the Generalized Wada Test (see below), phenomenal puzzles, and thalamic bridges, which you could use to test the consciousness of another being by having it solve a problem that can only be solved by making comparisons between qualia values.

Solipsism (standard high-level philosophy term; ref: 1, 2, 3): In its classic formulation, solipsism refers to a state of existence in which the only person who is conscious is "oneself", a self which resides in the body of an individual human over time. A more general version of solipsism involves crossing it with the personal identity views above. Through this lens, the classic person-centric formulation of solipsism refers exclusively to a Closed Individualist universe. Alternatively, Open Individualism also has a solipsistic interpretation; it is thus compatible with (and at least in one sense entails) solipsism: the entire multiverse of experiences are all experiences of a single solipsistic cosmic consciousness. With an Empty Individualist universe, too, we can have a solipsistic interpretation of reality.
In one version, you use epiphenomenalism to claim that this moment of experience is the only one that is conscious, even though the whole universe still exists and it had an evolutionary path that led it to the configuration in which you stand right now. In another version, one's experience is the result of the fact that in the cosmic void everything can happen. This is not because it is likely, but because there is a boundless amount of time for it to happen: no matter how thin its probability, it will still take place at some point (see: Boltzmann brain). That said, one's present experience (with its highly specific information content) being the only one that exists seems very improbable a priori. It is like imagining that, despite the fact that "the void can give rise to anything", the only thing that actually gets materialized is an elephant. Why would it only produce an elephant, of all things? Likewise, solipsistic Empty Individualism has this problem: why would this experience be the only one? To cap it off, we can also reason about solipsism in its relation to hybrid views of personal identity. In their case solipsism either fails, or its formulation needs to be complicated significantly. This is partly why the concept of the Goldilocks Zone of Oneness (see above) might be worth exploring, as it may be a way out of ultimate solipsism. In a much more proximal domain, it may be possible to use Phenomenal Puzzles, Wada tests, and ultimately mindmelding to test the classical (Closed Individualist) formulation of solipsism.

Suffering Focused Ethics (recent philosophy term from rationalist-adjacent communities; ref: 1, 2): The view that our overriding obligation is to focus on suffering, and in particular to take seriously the prevention of extreme suffering. This is not unreasonable if we take into account the logarithmic scales of pain and pleasure, which suggest that the majority of suffering is concentrated in a small percentage of experiences of intense suffering. Hence why caring about the extreme cases matters so much.

Antinatalism (standard high-level philosophy term; ref: 1, 2): This is the view that being born entails a net negative. Classic formulations of this view tend to implicitly assume Closed Individualism, where there is someone who may or may not be born and it is meaningful to consider this a yes-or-no question with ontological bearings. Under Open Individualism the question becomes whether there should be any conscious being at all, for neither preventing someone's birth nor an individual suicide entails the real birth or death of a consciousness. They would merely add to or subtract from the long library corridors of experiences had by universal consciousness. And in Empty Individualism, antinatalism might be seen through the light of "preventing specific experiences with certain qualities". For example, having an experience of extreme suffering is not harming a person (though it may have further psychological repercussions), but rather harming that very experience in an intrinsic way. This view would underscore the importance of preventing the existence of experiences of intense suffering rather than preventing the existence of people as such. A final note on antinatalism is that even in its original formulation we encounter the problem that selection pressures make any trait that reduces inclusive fitness disappear in the long run. The traits that predispose people to such views would simply be selected out.
A more fruitful way of improving the world is to encourage the elimination of suffering in ways that do not reduce inclusive fitness, such as the prevention of genetic spelling errors and diseases that carry a high burden of suffering.

Tyranny of the Intentional Object (coined by David Pearce; ref: 1, 2): The way our reward architecture is constructed makes it difficult for us to have a clear sense of what it is that we enjoy about life. Our brains reinforce the pursuit of specific objects, situations, and headspaces, which gives the impression that these are intrinsically valuable. But this is an illusion. In reality such conditions trigger positive valence changes in our experience, and it is those changes that we are really after (as evidenced by the way in which our reward architecture is modified in the presence of euphoric and dysphoric drugs and external stimuli such as music). We call this illusion the tyranny of the intentional object because in philosophy "intentionality" refers to "what the experience is about". Our world-simulations chain us to the feeling that external objects, circumstances, and headspaces are the very source of value. What is more, dissociating from such sources of positive valence itself triggers negative valence, so critical insight into the way our reward architecture really works is negatively reinforced by that very architecture.

Formalism Terms

Formalism (standard high-level philosophy term; ref: 1, 2): Formalism is a philosophical and methodological approach for analyzing systems which postulates the existence of mathematical objects such that their mathematical features are isomorphic to the properties of the system. An example of a successful formalism is the use of Maxwell's equations to describe electromagnetic phenomena.

Qualia Formalism (QRI term; 1, 2, 3): Qualia Formalism is the claim that for any given physical system that is conscious, there will be a corresponding mathematical object associated with it such that the mathematical features of that object are isomorphic to the phenomenology of the experience generated by the system.

Marr's Levels of Analysis (standard cognitive science term; ref: 1, 2): This powerful analytic framework was developed by cognitive scientist David Marr to talk more precisely about vision, but it is more broadly applicable to information-processing systems in general. It is a way to break down what a system does in a conceptually clear fashion that lends itself to clean analysis.

Computational Level (standard cognitive science term; ref: 1, 2): The first of Marr's three Levels of Analysis, the Computational Level of abstraction describes what the system does from a third-person point of view. That is, the input-output mapping, the runtime complexity of the problems it can solve, and the ways in which it fails are all facts about a system at the computational level of abstraction. In a simple example case, we can describe an abacus at the computational level by saying that it can do sums, subtractions, multiplications, divisions, and other arithmetic operations.

Algorithmic Level (standard cognitive science term; ref: 1, 2): The second of Marr's three Levels of Analysis, the Algorithmic Level of abstraction describes the internal representations, operations, and the interactions between them that are used to transform the input into the output. In aggregate, representations, operations, and their interactions constitute the algorithms of the system.
As a general rule, we find that there are many possible algorithms that give rise to the same computational-level properties. Following the simple example case of an abacus, the algorithmic-level account would describe how passing beads from one side to another, and using each row to represent a different order of magnitude, are used to instantiate algorithms for performing arithmetic operations.

Implementation Level (standard cognitive science term; ref: 1, 2): The third of Marr's three Levels of Analysis, the Implementation Level of abstraction describes the way in which the system's algorithms are physically instantiated. Following the case of the abacus, an implementation-level account would detail how the various materials of the abacus are put together in order to allow the smooth passing of beads between the sides of each row, and how the beads are prevented from sliding by accident (and thereby "forgetting" the state).

Interaction Between Levels (obscure cognitive science concept handle; ref: 1, 2): Some information-processing systems can be fully understood by describing each of Marr's Levels of Analysis separately. For example, it does not matter whether an abacus is made of metal or wood, or even whether it is digitally simulated, in order to explain its algorithmic and computational-level properties. But while this is true for an abacus, it is not the case for analog systems that leverage the unique physical properties of their components as computational shortcuts. In particular, quantum computing intrinsically requires an understanding of the implementation-level properties of the system in order to explain the algorithms used. Hence, for quantum computing, there are strong interactions between levels of analysis. Likewise, we believe this is likely to be the case for the algorithms our brains perform by leveraging the unique properties of qualia.

Natural Kind (standard high-level philosophy term; ref: 1, 2): Natural kinds are things whose objective existence makes it possible to discover durable facts about them. They are the elements of a "true ontology" for the universe, that which "carves reality at its joints". This is in contrast to "reifications", which are aggregates of elements with no unitary independent existence.

State-Space (standard term in physics and mathematics; ref: 1, 2): A state-space of a system is a geometric map where each point corresponds to a particular state of the system. Usually the space has a Euclidean geometry with a number of dimensions equal to the number of variables in the system, so that the value of each variable is encoded in the value of a corresponding dimension. This is not always the case, however. In the general case, not all points in the state-space are physically realizable. Additionally, some system configurations do not admit a natural decomposition into a constant set of variables. This may give rise to irregularities in the state-space, such as non-Euclidean regions or a variable number of dimensions.

State-Space of Consciousness (coined by David Pearce; 1, 2, 3): This is a hypothetical map that contains the set of all possible experiences, organized in such a way that the similarities between experiences are encoded in the geometry of the state-space. For example, the experience you are having right now would correspond to a single point in the state-space of consciousness, with the neighboring experiences being Just Noticeably Different from your experience right now
(e.g., simplistically, we could say they would be different from your current experience "by a single pixel").

Qualia Value (QRI term; ref: 1): Starting with examples: the scent of cinnamon, a spark of sourness, and a specific color hue are all qualia values. Any particular quality of experience that cannot be decomposed further into overlapping components is a qualia value.

Qualia Variety (QRI term; ref: 1): A qualia variety is a set of qualia values that belong to the same category (for example, tentatively, phenomenal colors are all part of the same qualia variety, which is different from the qualia variety of phenomenal sounds). A possible operationalization for qualia varieties involves the construction of equivalence classes based on the ability to transform a given qualia value into another via a series of Just-Noticeable Differences. For example, in the case of color, we can transform a given qualia value, like a specific shade of blue, into another qualia value, like a shade of green, by traversing a straight line from one to the other in the CIELAB color space (see the sketch below). Tentatively, it is not possible to do the same between a shade of blue and a particular phenomenal sound. That said, the large number of unknowns (and unknown unknowns!) about the state-space of consciousness does not allow us to rule out the existence of qualia values that can bridge the gap between color and sound qualia. If that turned out to be the case, we would need to rethink our approach to defining qualia varieties.
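To make the color example concrete, here is a minimal Python sketch of the straight-line traversal in CIELAB. It assumes the standard sRGB-to-Lab conversion (D65 white point) and the commonly quoted figure of roughly 2.3 ΔE*ab units per Just-Noticeable Difference; the two endpoint colors are illustrative choices, not canonical "blue" and "green" qualia values.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    linear = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124, 0.3576, 0.1805],   # linear sRGB -> XYZ
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = (m @ linear) / np.array([0.95047, 1.0, 1.08883])  # D65 white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

blue = srgb_to_lab([0.1, 0.2, 0.8])    # an illustrative shade of blue
green = srgb_to_lab([0.1, 0.7, 0.2])   # an illustrative shade of green

# Length of the straight line between them, measured in JND-sized steps
# (Delta-E*ab of ~2.3 is a commonly quoted Just-Noticeable Difference).
delta_e = np.linalg.norm(green - blue)
print(f"Delta-E*ab = {delta_e:.1f}, or roughly {delta_e / 2.3:.0f} JND steps")
```

Each intermediate point on that line is itself a realizable color, which is what makes the equivalence-class construction work within this variety.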
Region of the State-Space of Consciousness (QRI term; ref: 1, 2): A set of possible experiences that are similar to each other in some way. Given an experience, the "experiences nearby in the state-space of consciousness" are those that share its qualities to a large degree but with variations. The term can be used to point at experiences with a given property (such as "high-valence" or "phenomenal color").

The Binding Problem (standard psychology, neuroscience, and philosophy term; ref: 1, 2): The binding problem (also called the combination problem) arises from asking the question: how is it possible that the activity of a hundred billion spatially distributed neurons can simultaneously contribute to a unitary moment of experience? It should be noted that in the classical formulation of the problem we start with an "atomistic" ontology, where the universe is made of space, particles, and forces, and the question then becomes how spatially distributed discrete particles can "collaborate" to form a unified experience. But if one starts with a "globalistic" ontology, where the universe is made of a universal wavefunction, then the question that arises is how something that is fundamentally unitary (the whole universe) can give rise to "separate parts" such as individual experiences; this is often called "the boundary problem". Thus, the "binding problem" and the "boundary problem" are really the same problem, approached from different ontologies (atomistic vs. globalistic).

Phenomenal Binding (standard high-level philosophy term; ref: 1, 2): This term refers to the hypothetical mechanism of action that enables information that is spatially distributed across a brain (and, more generally, a conscious system) to simultaneously contribute to a unitary discrete moment of experience.

Local Binding (lesser-known cognitive science term; ref: 1): Local binding refers to the way in which the features of our experience are interrelated. Imagine you are looking at a sheet of paper with a drawing of a blue square and a yellow triangle. If your visual system works well, you do not question which shape is colored blue; the colors and the shapes come unified within one's experience. In this case, we would say that color qualia and shape qualia are locally bound. Disorders of perception show that this is not always the case: people with simultagnosia find it hard to perceive more than one phenomenal object at a time, and thus confuse the associations between the colors and shapes they are not directly attending to; people with schizophrenia have local binding problems in the construction of their sense of self; and people with motion blindness have a failure of local binding between sensory stimuli separated by physical time.

Global Binding (lesser-known cognitive science term; ref: 1, 2): Global binding refers to the fact that the entirety of the contents of each experience is simultaneously apprehended by a unitary experiential self. As in the example for local binding: while the blue and the square (and the yellow and the triangle) are locally bound into separate phenomenal objects, both the blue square and the yellow triangle are globally bound into the same experience.

The Mathematics of Valence

Valence Realism (QRI term; ref: 1): This is the claim that valence is a crisp phenomenon of conscious states upon which we can apply a measure. Also defined as: "Valence (subjective pleasantness) is a well-defined and ordered property of conscious systems."

Valence Structuralism (QRI term; ref: 1): The claim that valence could have a simple encoding in the mathematical representation of a system's qualia.

Symmetry Theory of Valence (QRI term; 1, 2, 3): Given a mathematical object isomorphic to the qualia of a system, the mathematical property which corresponds to how pleasant it is to be that system is that object's symmetry.

Valence Gradients (QRI term; ref: 1, 2): It is postulated that one of the important inputs to our decision-making is the "valence gradient". An example helps here. Imagine coming back from dancing in the rain and feeling pretty cold. In order to warm yourself up you get into the shower and turn on the hot water. Ouch! Too hot, so you dial down the temperature. Brrr! Now it's too cold, so you dial the temperature up just a little. Ah, just perfect! During this process you evaluated, at each point, in what way you could modify your experience in order to make it feel better. At first the valence gradient pointed in the direction of higher temperature. As soon as you felt it being too hot, the valence gradient changed direction and pointed toward lower temperature. And so on, until it felt like there was nothing else you could do to improve how you feel. In the more general case, we posit that a significant input to our decision-making is the direction of change along which we believe our experience would improve. At an implementation level of analysis (see above), the very syntax of our experience might be built on a landscape of valence gradients. Noticing them is possible, but it is a task akin to the proverbial fish trying to notice water. We use valence gradients to navigate both the external and the internal world in such a basic and all-pervasive way that missing this fact altogether is easy. When we justify why we did such and such, we often forget that a big component of the decision was made based on how each of the options felt. The difficulty we face when trying to point at the specific valence gradients that influence our decision-making is one of the reasons why the tyranny of the intentional object (see above) arises: what pulls and pushes us is not explicitly represented in our conceptual scheme.
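The shower story maps naturally onto a one-dimensional hill climb. Below is a toy Python sketch, assuming a made-up single-peaked valence function of water temperature; the ideal 38 °C and the step size are illustrative, not empirical claims.

```python
# Toy model of the shower example: follow the valence gradient over
# water temperature until no adjustment improves how the water feels.

def valence(temp_c: float) -> float:
    """Made-up valence function of shower temperature, peaking at 38 C."""
    return -(temp_c - 38.0) ** 2

temp, step = 45.0, 1.0  # start too hot; adjust one degree at a time
while True:
    hotter, colder = valence(temp + step), valence(temp - step)
    if max(hotter, colder) <= valence(temp):
        break  # no adjustment feels better: the local gradient is ~zero
    temp += step if hotter > colder else -step

print(f"Settled at {temp:.1f} C")  # -> 38.0 C
```

The point of the sketch is only that the agent never needs an explicit representation of "ideal temperature": following the sign of the local valence gradient suffices, which is part of why these gradients are so easy to miss.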
CDNS Analysis (QRI term; ref: 1, 2): A scientific and philosophical hypothesis, short for Consonance-Dissonance-Noise-Signature analysis, with implications for measuring valence in conscious systems. Namely, the hypothesis is that the Symmetry Theory of Valence is expressed in the structure of neural patterns over time, implying that the valence of a brain will in part be determined by neural dissonance, consonance, and noise. This makes precise, empirically testable predictions within paradigms such as Connectome-Specific Harmonic Waves.
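CDNS itself is not specified as code in this glossary; the Python toy below only illustrates the flavor of a dissonance measure. It scores the pairwise roughness of a set of spectral partials using a Plomp–Levelt-style curve in Sethares' commonly cited parameterization. The constants and the two example spectra are illustrative assumptions, not QRI's actual analysis pipeline.

```python
import itertools
import numpy as np

def pair_roughness(f1, a1, f2, a2):
    """Plomp-Levelt-style roughness of two partials (Sethares' fit):
    near zero at unison and at wide separations, peaking at a
    fraction of the critical band."""
    diff = abs(f1 - f2)
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)
    return a1 * a2 * (np.exp(-3.5 * s * diff) - np.exp(-5.75 * s * diff))

def total_dissonance(partials):
    """Sum roughness over all pairs of (frequency Hz, amplitude) partials."""
    return sum(pair_roughness(f1, a1, f2, a2)
               for (f1, a1), (f2, a2) in itertools.combinations(partials, 2))

harmonic_stack = [(220.0 * k, 1.0 / k) for k in range(1, 7)]  # consonant
narrow_cluster = [(220.0 + 30.0 * k, 1.0) for k in range(6)]  # rough

print(f"harmonic stack dissonance: {total_dissonance(harmonic_stack):.3f}")
print(f"narrow cluster dissonance: {total_dissonance(narrow_cluster):.3f}")
```

Swapping sound partials for brain harmonics (as in Connectome-Specific Harmonic Waves) is, roughly, the conceptual move the CDNS hypothesis makes; a full signature would presumably also need a separate estimate of noise.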
Research Paradigms

Evolutionary Qualia (QRI term): Evolutionary Qualia is a scientific discipline that will emerge as the science of consciousness improves, to the point where cellular gene-expression analysis, brain imaging, and interpretation algorithms can infer the qualia present in the experiences of animal brains in general. For instance, we may find out that certain combinations of receptor types and protein shapes inside neurons of the visual cortex are necessary and sufficient for generating color qualia. Additionally, such understanding could be complemented with an information-theoretic account of why color qualia is more effective (cost-benefit-wise) for certain kinds of information-processing than other qualia. Together, these two kinds of understanding would allow us to explain why the specific qualia that we have were recruited by natural selection for information-processing purposes. Evolutionary Qualia is the (future) discipline that explains, from an evolutionary point of view, why we have the specific qualia and patterns of local binding that we do (said differently, it will explain why "the walls of our world-simulation are painted the way they are"). So while Evolutionary Psychology may explain why we evolved to have certain emotions in terms of their behavioral effects, Evolutionary Qualia will explain why the emotions feel the way they do and how those specific feelings happen to have the right "shape" for the information-processing tasks they accomplish.

Algorithmic Reduction (QRI term; ref: 1, 2): A reduction is a model that explains a set of behaviors, often very complex and diverse, in terms of the interaction between variables. A successful reduction is one that explains the intricacies and complexities present in the set of behaviors as emergent effects of a much smaller number of variables and their interactions. A specific case is that of "atomistic reductions", which decompose a set of behaviors in terms of particles interacting with each other (e.g. deriving the ideal gas laws from statistical mechanics in physics). While many scientifically significant reductions are atomistic in nature, one should not think that every phenomenon can be successfully reduced atomistically (e.g. the double-slit experiment). Even when a set of behaviors cannot be reduced atomistically, we may be able to reduce it algorithmically; that is, to identify a set of processes, internal representations, and interactions that, when combined, give rise to the set of observed behaviors. This style of reduction is very useful in the field of phenomenology, since it can provide insights into how complex phenomena (such as psychedelic hallucinations) emerge out of a few relatively simple algorithmic building blocks. This way we avoid begging the question by not assuming an atomistic ontology in a context where it is not clear what the atoms would correspond to.

Psychedelic Cryptography (QRI term; ref: 1, 2, 3): Encoding information in videos, text, abstract paintings, etc. such that only people who are in a specific state of consciousness can decode it. A simple example is the use of alterations in after-image formation on psychedelics (enhanced persistence of vision, a.k.a. tracers) to paint a picture by presenting the content of an image one column of pixels at a time (a minimal sketch follows below). Sober individuals only see a column of pixels, while people high on psychedelics will see a long trace forming parts of an image that can be inferred by paying close attention. In general, psychedelic cryptography can be done by taking advantage of any of the algorithms one finds with algorithmic reductions of arbitrary states of consciousness. In the case of psychedelics, important effects that can be leveraged include tracers, pareidolia, drifting, and symmetrification.
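Here is a minimal Python sketch of the column-at-a-time scheme just described. The exponential "persistence" parameter stands in for tracer duration; the image, threshold, and decay values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
image = (rng.random((32, 64)) > 0.5).astype(float)  # stand-in hidden image
height, width = image.shape

# Encode: frame t shows only column t of the hidden image.
frames = np.zeros((width, height, width))
for t in range(width):
    frames[t, :, t] = image[:, t]

def percept(frames, persistence):
    """Integrate frames with exponential decay. A long, tracer-like
    persistence keeps earlier columns visible in the final percept."""
    acc = np.zeros(frames.shape[1:])
    for frame in frames:
        acc = persistence * acc + frame
    return acc

sober = percept(frames, persistence=0.2)    # fast decay: columns vanish
tracers = percept(frames, persistence=1.0)  # long tracers: image builds up

visible = lambda p: int((p.max(axis=0) > 0.05).sum())  # columns still visible
print(f"columns recoverable sober: {visible(sober)} / {width}")
print(f"columns recoverable with tracers: {visible(tracers)} / {width}")
```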
Psychedelic Turk (QRI term; ref: 1, 2, 3, 4): Mechanical Turk is a task-completion platform that matches people who need many small, relatively easy tasks done with humans willing to do them. Psychedelic Turk is akin to Mechanical Turk, but with workers who disclose the state of consciousness they are in. This would be helpful to task requesters because many tasks are more appropriate for people in specific states of consciousness. For example, it is better to test ads intended to be seen by drunk people by having people who are actually drunk evaluate them, as opposed to asking sober people to imagine how they would perceive them while drunk. Likewise, some high-stakes tasks would benefit from being completed by people who are demonstrably very alert and clear-headed. And for foundational consciousness research, Psychedelic Turk would be extremely useful, as it would allow researchers to test how people high on psychedelics and other exotic agents process information and experience emotions usually inaccessible in sober states.

Generalized Wada Test (QRI term; ref: 1, 2, 3): This is a generalization of the Wada Test in which, rather than pentobarbital being injected into just one hemisphere while the other hemisphere is kept sober, one injects substance A into one hemisphere and substance B into the other. This could be used to improve our epistemology about various states of consciousness. By keeping one hemisphere in a state with robust linguistic ability, the other hemisphere could be used to explore alien state-spaces of consciousness while allowing for real-time verbal interpretation. The caveats and complications are myriad, but the general direction this concept handle points to is worth exploring.

Self-Locating Uncertainty (originally a physics term, but we also use it to describe a phenomenal character of experience; ref: 1, 2): The uncertainty that one has about who and where one is. This is relevant in light of states of consciousness that are common on high-dose psychedelics, in mental illness, and in meditation, where the information about one's identity and one's place in the world is temporarily inaccessible. Very high- and low-valence states tend to induce a high level of self-locating uncertainty, as the information content of the experience is overwritten by very simple patterns that dominate one's attention. Learning to navigate states with self-locating uncertainty without freaking out is a prerequisite for studying alien state-spaces of consciousness.

Phenomenal Time (standard high-level philosophy term; ref: 1): The felt sense of the passage of time, in contrast to the physical passage of time. Although physical time and phenomenal time tend to be intimately correlated, as you will see in the definition of "exotic phenomenal time", this is not always the case.

Phenomenal Space (standard high-level philosophy term; ref: 1, 2): The experience of space. Usually our sense of space represents a smooth 3D Euclidean space in a projective fashion (with variable scale encoding subjective distance). In altered states of consciousness, phenomenal space can be distorted, expanded, contracted, higher-dimensional, topologically distinct, and even geometrically modified, as in the case of hyperbolic geometry on DMT (see below).

Pseudo-Time Arrow (QRI term; ref: 1): This is a formal model of phenomenal time. It utilizes a simple mathematical object: a graph. The nodes of the graph are identified with simple qualia values (such as colors, basic sounds, etc.) and the edges are identified with local binding connections. According to the pseudo-time arrow model, phenomenal time is isomorphic to the patterns of implicit causality in the graph, as derived from patterns of conditional statistical independence (see the sketch below).

Exotic Phenomenal Time (QRI term; ref: 1): It is commonly acknowledged that in some situations time can feel like it is passing faster or slower than normal (cf. tachypsychia). What is less generally known is that experiences of time can be much more exotic than that, such as feeling that time stops entirely or that one is stuck in a loop. These are called exotic phenomenal time experiences, and while not very common, they are certainly informative about what phenomenal time is. Deviations from an apparently universal pattern are usually scientifically significant.

Reversed Time (QRI term; ref: 1): This is a variant of exotic phenomenal time in which experience seems to be moving backwards in time. "Inverted tracers" are experienced, where one first experiences the faint after-images of objects before they fade in, constitute themselves, and then quickly disappear without a trace. According to the pseudo-time arrow model, this experience can be described as an inversion of the implicit arrow of causality, though how this arises dynamically is still a mystery.
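The pseudo-time arrow model above is not released as code here, but its core move, reading a time order off a causal graph, can be sketched in a few lines of Python. The graph, the "qualia" labels, and the use of longest-chain depth as the pseudo-time coordinate are all illustrative assumptions.

```python
# Toy pseudo-time arrow: nodes are simple qualia values, and directed
# edges stand in for the implicit causality derived from local binding.
# A node's pseudo-time coordinate is the length of the longest causal
# chain leading into it.

edges = {                      # which qualia each quale "causally" feeds
    "red-flash": ["tone-A"],
    "tone-A": ["warmth", "tone-B"],
    "warmth": ["tone-B"],
    "tone-B": [],
}

parents = {node: [] for node in edges}
for src, dsts in edges.items():
    for dst in dsts:
        parents[dst].append(src)

def pseudo_time(node):
    """Longest chain of implicit causality leading into a node
    (0 for uncaused 'source' qualia)."""
    return 0 if not parents[node] else 1 + max(map(pseudo_time, parents[node]))

for node in edges:
    print(f"{node}: t = {pseudo_time(node)}")

# Note: a cycle in this graph would make the recursion diverge, which
# loosely mirrors the "time loop" phenomenology described below.
```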
Moments of Eternity (common psychedelic phenomenology term; ref: 1): This exotic phenomenal time describes experiences in which all apparent temporal movement seems to stop. One's experience seems to have an unchanging quality, and there is no way to tell whether there will ever be anything other than the present experience in the whole of existence. In most cases this state is accompanied by intense emotions of simple texture and immediacy (rather than complex layered constructions of feelings). The experience seems to appear as the end-point and local maximum of annealing in psychedelic and dissociative states. That is, it often comes as metastable "flashes of large-scale synchrony" that build up over the course of seconds to minutes and decay just as quickly. Significantly, sensory deprivation conditions are ideal for the generation of this particular exotic phenomenal time.

Timelessness (QRI term; ref: 1): Timelessness is a variant of exotic phenomenal time in which causality flows in a very chaotic way at all scales. This prevents the formation of a general global direction of time. In this state, change is perceptible and happening everywhere in your experience, and yet it seems as if there is no consensus among the different parts of your experience about the direction of time. That is, there is no general direction along which the experience seems to be changing as a whole over time. The chaotic bustle of changes that makes up the texture of the experience is devoid of a story arc, and yet remains alive and turbulent. Trip reports suggest that the state that arises at the transition points between dissociative plateaus has this noisy timelessness quality (e.g. coming up on ketamine). Listening to green noise evokes the general idea, but you need to imagine that happening in every sensory modality, not just audio.

Time Loops (common psychedelic phenomenology term; ref: 1): This is perhaps the most common exotic phenomenal time experience that people have on psychedelics and dissociatives. While it can be generated spontaneously, it is relatively easy to trigger with repetitive music (e.g. a lot of EDM, trance, progressive rock, etc.), repetitive movements (e.g. walking, dancing), and repetitive thoughts (e.g. talking about the same topic for a long time), all of which are often abundant in the set and setting of psychedelic users. The effect happens when your projections about the future and the past are entirely informed by what seems like an endlessly repeating loop of experience. This often comes with intense emotions of its own (which are unusual and outside of the normal range of human experience), but it also triggers secondary emotions (which are just normal emotions amplified) such as fear and worry, or at times wonder and bliss. The pseudo-time arrow model of phenomenal time describes this experience as a graph in which the local patterns of implicit causality form a cycle at the global scale. Thus the phenomenal past and future merge at their tails, and one inhabits an experiential world that seems to be infinitely repeating.

Time Branching (QRI term; ref: 1, 2): A rare variant of exotic phenomenal time in which you feel like you are able to experience more than one outcome of the events that you witness. Your friend stands up to go to the bathroom. Midway there he wonders whether to go for a snack first, and you see "both possibilities play out at once in superposition". In an extreme version of this experience type, each event seems to lead to dozens if not hundreds of possible outcomes at once; your mind becomes like a choose-your-own-adventure book with a broccoli-like branching of narratives, and at the limit all the contents of all imaginable timelines seem to happen at once, so that you converge on a moment of eternity, thus transitioning out of this variety. We would like to note that a Qualia Computing article delved into the question of how to test whether the effect actually allows you to see alternative branches of the multiverse. The author never considered this hypothesis plausible, but the relative ease of testing it made it an interesting, if wacky, research lead. The test consisted of trying to tell apart, in real time, the difference between a classical and a quantum random number generator. The results of the experiment are all null for the time being.
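The article's exact protocol is not reproduced here, but the logic of such a test is easy to sketch in Python. Below, the participant's guesses are simulated as chance; in a real run they would come from a person in the time-branching state, and the "quantum" stream would come from a hardware QRNG rather than a simulation.

```python
import random
from math import sqrt

trials, correct = 500, 0
for _ in range(trials):
    # Each trial, secretly pick a source; both are simulated here.
    source = random.choice(["classical", "quantum"])
    # Stand-in for the participant's real-time guess about the source.
    guess = random.choice(["classical", "quantum"])
    correct += (guess == source)

accuracy = correct / trials
# z-score against the chance (p = 0.5) binomial null.
z = (accuracy - 0.5) / (0.5 / sqrt(trials))
print(f"accuracy = {accuracy:.3f}, z = {z:+.2f} (|z| < 2: consistent with chance)")
```

A null result, as reported above, corresponds to |z| staying within ordinary sampling noise across sessions.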
World-Sheet (QRI term; ref: 1, 2): We represent modal and amodal information in our experience in a projective way. In the most common cases, this information forms a 2D "sheet" that encodes the distance to the objects around you, which can be used as a depth-map to navigate your surroundings. A lot of the information we experience lies in the combination of this sheet and phenomenal time (i.e. how the sheet changes over time).

Hyperbolic Phenomenal Space (QRI term; ref: 1, 2): The local curvature of the world-sheet encodes a lot of information about the scene. There is a sense in which the "energy" of the experience is related to the curvature of the world-sheet (in addition to its phenomenal richness and brightness). So when one raises the energy of the state dramatically (e.g. by taking DMT), the world-sheet tends to instantiate configurations with very high curvature. The surface becomes generically hyperbolic, which profoundly alters the overall geometry of one's experience. A lot of the accounts of "space expansion" on psychedelics can be described in terms of alterations to the geometry of the world-sheet.

Dimensionality of Consciousness (QRI term; ref: 1, 2, 3): A generative definition for the dimensionality of a moment of experience is "the highest virtual dimension implied by the network of correlations between globally bound degrees of freedom". Admittedly, at the moment this is more of an intuition pump than a precise formalism, but a number of related phenomena suggest there is something in this general direction. For starters, differences between degrees of pain and pleasure are often described in terms of qualitative changes, with phase transitions between them. Likewise, one generally experiences a higher degree of emotional involvement in a given stimulus the more sensory channels one uses to interact with it. Pleasure that has cognitive, emotional, and physical components in a coordinated fashion is felt as much more profound and significant than pleasure that only involves one of those "channels", or even pleasure that involves all three but lacks coherence between them. Another striking example involves the states of consciousness induced by DMT, in which there are phase transitions between the levels. These phase transitions seem to involve a change in the dimensional character of the hallucinations: in addition to hyperbolic geometry, DMT geometry involves a wide range of phenomena with virtual dimensions. On lower doses the hallucinations take the shape of 2D symmetrical plane coverings. On higher doses those coverings transform into 2.5D wobbly world-sheets, and on higher doses still into 3D symmetrical tessellations and rooms with 4D features. For example, the DMT level above 3D tessellations has its "walls" covered with symmetrical patterns that are correlated with one another in such a way that they generate a "virtual" 4th dimension, itself capable of containing semantic content. We suspect that one of the reasons why MDMA is so uniquely good at healing trauma is that in order to address a high-dimensional pain you need a high-dimensional pleasure to hold space for it. MDMA seems to induce a high-dimensional variety of feelings of wellbeing, which can support and smooth a high-dimensional pain such as the pain that underlies traumatic memories.
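One crude way to make "dimension implied by a network of correlations" quantitative is the participation ratio of a covariance spectrum, a standard effective-dimensionality measure in neuroscience. The Python sketch below uses entirely synthetic data and is only meant to illustrate the intuition that many coherent channels can behave as few effective dimensions; how, or whether, this maps onto phenomenal dimensionality is an open assumption.

```python
import numpy as np

def participation_ratio(signals):
    """Effective dimensionality of a set of signals:
    (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    eig = np.linalg.eigvalsh(np.cov(signals))
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(1)
independent = rng.standard_normal((10, 2000))  # 10 uncorrelated channels
driver = rng.standard_normal((1, 2000))        # one shared driver
coherent = 0.9 * driver + 0.1 * rng.standard_normal((10, 2000))

print(f"independent channels: ~{participation_ratio(independent):.1f} effective dims")
print(f"coherent channels:    ~{participation_ratio(coherent):.1f} effective dims")
```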
Qualia Futurology

Meme (standard science/psychology term coined by Richard Dawkins; 1): A "meme" is a cultural unit of information capable of being transmitted from one mind to another. Examples of memes include jokes, hat styles, window-dressing color palettes, and superstitions.

Memeplex (lesser-known term coined by Richard Dawkins; 1, 2): A "memeplex" is a set of memes that, when simultaneously present, increase their ability to replicate (i.e. to spread from one mind to another). Memeplexes do not need to say true things in order to be good at spreading; many strategies exist to motivate humans to share memes and memeplexes, ranging from producing good feelings (e.g. jokes) and being threatening (e.g. apostasy) to being salient (e.g. famous people believe in them). A classic example of a memeplex is an ideology such as libertarianism, communism, or capitalism.

Full-Stack Memeplex (QRI term; ref: 1, 2): A "full-stack memeplex" is a memeplex that provides an answer to most common human questions. While the scope of a memeplex like "libertarianism" extends across a variety of fields, including economics and ethics, it is not a full-stack memeplex because it does not attempt to answer questions such as "why does anything exist?", "why are the constants of nature the way they are?", and "what happens after we die?". Religions and some philosophies, like existentialism, Buddhism, and the LessWrong Sequences, are full-stack memeplexes. We also consider the QRI ecosystem to contain a full-stack memeplex.

Hedonistic Imperative (coined by David Pearce; ref: 1, 2): The Hedonistic Imperative is a book-length internet manifesto written by David Pearce which outlines how suffering will be eliminated with biotechnology, and why our biological descendants are likely to be animated by gradients of information-sensitive bliss.

Abolitionism (coined by David Pearce; ref: 1): In the context of transhumanism, Abolitionism refers to the view in ethics that we should eliminate all forms of involuntary suffering in human and non-human animals alike.

Fast Euphoria (QRI term; ref: 1): One of the main dimensions along which a drug can have effects, roughly described as "high-energy and high-valence" (with high-loading terms including: energetic, charming, stimulating, sociable, erotic, etc.).

Slow Euphoria (QRI term; ref: 1): One of the main dimensions along which a drug can have effects, roughly described as "low-energy and high-valence" (with high-loading terms including: calming, relieving, blissful, loving, etc.).

Spiritual/Philosophical Euphoria (QRI term; ref: 1, 2): One of the main dimensions along which a drug can have effects, roughly described as "high-significance and high-valence" (with high-loading terms including: incredible, spiritual, mystical, life-changing, interesting, colorful, etc.).

Wireheading (standard psychology, neuroscience, and philosophy term; 1, 2): The act of modifying a mind's reward architecture and hedonic baseline so that it is always generating experiences with net positive valence (whether or not they are mixed).

Wireheading Done Right (QRI term; ref: 1, 2): Wireheading done in such a way that one can remain rational, economically productive, and ethical. In particular, it entails (1) taking into account neurological negative-feedback systems, (2) avoiding reinforcement cycles that narrow one's behavioral focus, and (3) preventing one from becoming a pure replicator (see below).
A simple proof-of-concept reward architecture for Wireheading Done Right is to cycle between different kinds of euphoria, each with immediate diminishing returns but each making the other kinds of euphoria easier to experience. This would give rise to circadian cycles with stages involving fast, slow, and spiritual/philosophical euphoria at different times. Wireheading Done Right entails never getting stuck while always being in a positive state.

Pure Replicator (QRI term; 1, 2): In the context of agents and minds, a Pure Replicator is an intelligence that is indifferent to the valence of its conscious states and those of others. A Pure Replicator invests all of its energy and resources into surviving and reproducing, even at the cost of continuous suffering to itself or others. Its main evolutionary advantage is that it does not need to spend any resources making the world a better place.

Consciousness vs. Replicators (QRI term; 1, 2): This is a reframe of the big-picture narrative of the meaning of life, in which the ultimate battle is between the act of reproducing for the sake of reproduction and the act of seeking the wellbeing of sentient beings for the sake of conscious value itself.

Maximum Effector (QRI term; 1): A Maximum Effector is an entity that uses all of its resources for the task of causing large effects, irrespective of what those effects may be. There is a sense in which most humans have a Maximum Effector side. Since causing large effects is not easy, one can reason that, for evolutionary reasons, people find such an ability to be a hard-to-fake signal of fitness. Arrogance and power may not be all that people find attractive, but they do play a role in what makes someone seem sexy to others. Hence why, unfortunately, people research how to cause large effects even when those effects are harmful to everyone. The idealized version of a Maximum Effector, however, would be exclusively interested in causing large effects rather than doing so as a way to meet an emotional need, among other motives. Although being a Maximum Effector may seem crazy and pointless, they are important to consider in any analysis of the future, because the long-tailed nature of large effects suggests that those who specifically seek to cause them are likely to have an impact on reality orders of magnitude higher than that of agents who try to simultaneously have both large and good effects.

Sasha Shulgin

All-Is-One Simulation Theory

Allen Saakyan asks in All-Is-One Simulation Theory

Personal Identity: Closed, Empty, Open

In brief:

Antinatalism and Closed Individualism

Antinatalism and Empty Individualism

Antinatalism and Open Individualism

And on a different thread:

Prandium Interruptus

The One-Electron View

Philosophy of Time

Personal Identity and Eternalism

Personal Identity X Philosophy of Time X Antinatalism

Physicalism Implies Existence Never Dies

Also, from the same essay:

An Evolutionary Environment Set Up For Success

The Binding Problem

The brain is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.

– Emily Dickinson

Is it for real?
The Psychedelic State of Input Superposition

Further down on the same thread, written by someone else:

— GatorAutomator

Three Hypotheses for PSIS: Cognitive, Spiritual, Quantum

The Cognitive Hypothesis

The Spiritual Hypothesis

The Quantum Hypothesis

Deriving PSIS with Quantum Mechanics

The Experiment

Participants were instructed to:

Predicted Psychedelic Perception

Algorithmic Reduction of Psychedelic States

Only when sexual choice favored the reportability of our subjective experiences – with the emergence of the mental clearing-house we call consciousness – did our strangely promiscuous introspection abilities emerge, such that we seem to have instant conscious access to such a range of impressions, ideas, and feelings. This may explain why philosophical writing about consciousness so often sounds like love poetry – philosophers of mind, like lovesick teenagers, dwell upon the redness of the rose, the emotional urgency of music, the soft warmth of skin, and the existential loneliness of the self. The philosophers wonder why such subjective experiences exist, given that they seem irrelevant to our survival prospects, while the lovesick teenagers know perfectly well that their romantic success depends, in part, on making a credible show of aesthetic sensitivity to their own conscious pleasures.

– The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature (pg. 365) by Geoffrey F. Miller

A Darwinian Set and Setting

According to The Mating Mind, human sexual selection favors particular fitness-indicating traits, both physical and mental. In the context of mental traits, we have verbal and introspective abilities, agreeableness, conscientiousness, openness to experience, low neuroticism, and extroversion. No matter how verbally capable and introspective a given person is, unless that is balanced with some degree of agreeableness, conscientiousness, etc., the person will not be all that attractive. But when all else is held equal, stronger verbal and introspective abilities are favored. Teenagers, arguably, know this best of all: courtship is intensely verbal.

Our minds evolved in a Darwinian environment. If people like Miller are right in thinking that language evolved as a fitness indicator, we should expect the way we think and verbalize to be biased toward being impressive to members of the opposite sex during courtship. Powerful introspective abilities, as it were, can make one's language seem deeper, more romantic, and even at an entirely different level than that of one's peers. Against this backdrop of sexual choices and judgements, it is not surprising that humans would develop ever-increasing verbal and introspective capacities.

At some point everyday life could not present sufficient opportunities for people, especially males, to show off these abilities. And as the abilities increased over time, culture was forced to invent handicaps so that people could display their top capabilities. Over time, elaborate and competitive handicaps were integrated into the culture. Even verbal and introspective abilities at the top of the scale can be compared side by side by using carefully selected handicaps: poetry is exactly that; rhyme, rhythm, and meter make it easier for the best poets to show off their excellent abilities. The handicaps adjust to the maximum level of competence in the population. The space of handicaps that are used to show off traits that are reliable indicators of fitness is very large.
From Greek symposiums to modern-day frat parties, Western civilization has embraced a niche subculture that uses chemical handicaps as a means to display verbal, social, and creative skills. If you can philosophize after drinking a gallon of wine, or stay capable of managing the playlist after 16 cheap cans of beer, you are showing off your biological robustness. Clearly, many of our ancestors were capable of impressing potential sexual mates with a mixture of booze, loud music, and stunning philosophical conversations.

One could argue that psychedelics have come to disrupt our traditional games of handicaps. "Sure, you can drink a bottle of tequila and sing in a band, but can you take three hits of acid and tell me what your experience reveals about the intrinsic nature of consciousness?" Psychedelics are, in a way, a cultural hyper-stimulus that presents the most difficult and interesting handicap currently in existence for verbal and introspective abilities. Cultures can have an allergic reaction to the states of consciousness that these agents disclose; people are afraid that psychedelic users will discover something that they themselves don't know. Notably, psychedelicists have been both demonized and deified since the 60s. Sure, these researchers became extremely open-minded, and in many ways weird. But, above all, they became extremely interesting people. And interesting people who challenge the current games of status can cause cultural allergic reactions.

Every acid head and psychedelic researcher has a pet theory of what these compounds are really doing in one's mind. Many of these folk theories about the effects of psychedelics involve ontologies that currently have little scientific support (such as souls, thought fields, spirit worlds, archetypes, alien conspiracies, and so on). Although we cannot rule out explanations of this sort, the ontologies themselves are so abstract and poorly defined that we cannot accept them as useful forms of reduction. That said, their future versions will be more interesting. It is likely that committed, rational, spiritual psychedelic users will formalize models of this sort at some point. Rather than talking about a "spirit world," they will talk about a "mind-independent extra-dimensional space that consciousness can access in altered states" and then go on to define the differential equations that govern consciousness's interactions with this space. When this happens, we will be in a much better position to assess the validity of these models, test the reality of those spaces, and perhaps even recruit the extra-dimensional inhabitants of these worlds for computational tasks.

Psychedelic experiences drastically increase people's introspection and capacity for deep aesthetic appreciation, while at the same time increasing their ability to entertain unusual ideas. Insofar as the selection pressures on our introspective abilities have been heavily biased towards courtship ability, it is not surprising that people tend to cast self-enhancing, life-affirming, and magical narratives onto their interpretations of their personal psychedelic experiences. After all, having a very interesting story to tell is highly praised during courtship. Are people's psychedelic narratives a modern-day form of the peacock's tail? While psychedelic talk does not yet form part of any mainstream game of courtship, I envision this changing in the coming decades.
Undoubtedly, the most insightful, sound, and scientifically rigorous members of the Super-Shulgin Academy will attract attention, status, resources, and… desirable mates.

What is the deep structure of psychedelic experiences?

Psychedelics seem to have a generalized effect on one's consciousness. At a minimum, we could talk of experience amplification. Without delving into specifics, psychedelics introduce spontaneous activity into our consciousness that our mind is compelled to integrate somehow. Our state of consciousness changes dynamically as our mind adjusts itself to the incoming stimulation. The result is tightly dependent on the interplay between our brain anatomy, our motivational system, and the actual changes to the micro-structure of consciousness induced by LSD.

As John Lilly noted in light of his psychedelic experiences: "in the province of the mind, what one believes to be true is true or becomes true, within certain limits to be found experientially and experimentally. These limits are further beliefs to be transcended. In the mind, there are no limits…"* While there are reasons not to take this literally, we have grounds for claiming that a large number of the limits on our experience are placed there by our deeply held beliefs and attitudes. The space of possible LSD experiences is much larger than what a single individual will typically be able to explore in practice. Many limits are imposed by his or her beliefs and background assumptions, rather than by physiology per se.

Social cognition is a profound attractor in psychedelic experiences. "What will I say about this? What would this person think about this experience?" and similar thoughts are captivating. However, they occupy valuable mental space. And the thick mental judgements that people naturally focus on come with large conceptual and emotional baggage that taints the experience. Meditators, philosophers, and scientists are more likely to set aside some time during their explorations to delve more deeply into what the energy introduced by LSD can produce in one's consciousness. After extreme training and tens (or hundreds) of trips, dedicated psychonauts will discover qualities that all of the trips share.

Most people will likely experience a variant of Lilly's realization that whatever you believe can be perceived as true during psychedelic experiences. Lilly emphasized the limitless quality of the mind, but one must wonder: if one can experience as true anything conceivable, are we not, then, limited by what we can conceive? No matter how much time one spends with an open mind waiting for new and interesting ideas to take shape, one cannot know the nature of what one has not yet even conceived of.

It may be true that we will always find fundamental limits that cannot be overcome. There are fundamental physiological constraints on the possible configurations of our consciousness, and arguably, chemical agents, while capable of expanding the space of possibilities, will not automatically give access to all possible states of consciousness. As future research is likely to show, 2C-B and LSD probably facilitate slightly different kinds of thoughts and experiences. Thus the limits of our mind are, at least to a large extent, the result of our physiology. Memes and meditation can only go so far. In addition to physiological limits, the structure of the state-space of qualia is itself a constraint on what can and cannot be experienced.
To the extent that psychedelic states enable the exploration of a larger space of possible experiences, we are more likely while on psychedelics to find states of consciousness that demonstrate fundamental limits imposed by the structure of the state-space of qualia. In normal everyday experience we can see that yellow and blue cannot be mixed (phenomenologically), while yellow and red can (and thus deliver orange). That this is a constraint of the state-space of qualia itself is not at all evident, but it is a good candidate, and many introspective individuals agree. In psychedelic states one can detect many other rules like that, except that they operate on much higher-dimensional and synesthetic spaces (e.g. "Some feelings of roughness and tinges of triangle orange can mix well, while some spiky mongrels and blue halos simply won't touch no matter how much I try." – 150 micrograms of LSD).

One of the objectives of Qualia Computing is to define the state-space of possible experiences and the interdependencies between them. While normal everyday states of consciousness are important datapoints, I predict that the bulk of the most useful information will come from studying the behavior and mechanics of consciousness in radically altered states. To this end, however, we should focus on simple explanations that can be generalized to all psychedelic experiences.

Starting Background Assumptions

For the purpose of this article I will assume that direct realism, in all of its guises, is wrong. That is, I will assume that any mind-independent object can only be experienced indirectly. What we experience is not the object (or beings) themselves, but a qualia-furnished representation entirely contained within one's mind (this is often called the simulationist account of perception). Furthermore, I will also assume that the behavior of the universe can be fully described with the Standard Model of physics (or a future version of it).

In what is to follow I will propose, as a first approximation, an algorithmic reduction of psychedelic states; I will propose a set of changes in our consciousness that (1) is as simple and assumption-free as possible, and (2) can be used to reconstruct as many psychedelic effects as possible.

Two Kinds of Reduction

The word reduction in the context of philosophy of science has a lot of historical and conceptual baggage. In the context of this article, I will use the word in the following sense: we say that a property of a given phenomenon X reduces to Y if we can fully explain X's property by referencing Y's properties. X can be a physical phenomenon, a mathematical construct, or even an experience. Y is an ontology with interaction rules, which allow the pieces of said ontology to interact with one another. We do not commit to the idea that Y itself needs to be the fundamental (or true) ontology of X. But we do want to make sure that Y is at least more fundamental than X in some appropriate sense.

So what kinds of ontology can Y be? In the context of philosophy of mind, reductions usually attempt to account not only for the behavior of consciousness but also for its underlying nature. Thus, functionalism is both a reduction program as well as a philosophical take on what the mind fundamentally is. Thankfully, we do not need to commit to any ontology in order to advance a particular style of reduction. Reductions are useful regardless: they reduce the amount of information needed to describe a phenomenon, and if accurate, they can also make useful predictions.
Finally, these reductions can provide hints for how to bridge different areas of science; by identifying isomorphisms or even further reductions, entire fields can cross-pollinate once their respective reductions are compatible (as with biology and chemistry, or chemistry and physics).

Atomistic Reduction

For most intents and purposes, science relies on a particular kind of reduction that we can call atomistic reduction. This style of reduction focuses on explaining macroscopic phenomena by modeling them as the emergent structure of many particles interacting with one another at a much finer level of resolution. Even though this style of reduction is usually fruitful (e.g. thermodynamics), in some situations it can be counter-productive to assume. An extreme case would be the quantum computer. If states of superposition help a computer find an answer, it will be hard to explain the behavior of said superposition by postulating that it actually reduces to little particles interacting via simple rules. The model could in principle be worked out, but at the cost of very high complexity. It would be much easier to start with a quantum-mechanical ontology that allows the superposition of wavefunctions! What is left, then, is to reduce the rest of the computer to quantum mechanics (which is possible, given that particle models and quantum-mechanical models usually converge in the macroscopic limit).

It is tempting to try to reduce the properties of the mind (including psychedelic states) using an atomistic reduction. Unfortunately, the phenomenal binding problem adds a complication to this reduction. Rather than discussing (right now) whether an atomistic (and thus classical) account will ultimately be capable of modeling conscious experience, we will side-step this problem by using a different style of reduction. We will focus only on the algorithmic level of analysis.

Algorithmic Reduction

Without assuming a fundamental ontology (atoms, fields, wavefunctions, etc.) we can still make a lot of progress. We can restrict ourselves to identifying what we call an algorithmic reduction: find a set of procedures, state-spaces, shapes, and overall main effects out of which you can reconstruct as much of the observed behavior as possible. In reality, every reduction is, at least in part, an algorithmic reduction. By specifying a particular ontology such as "particles", we restrict the shape of our possible reductions. By keeping the reduction at the algorithmic level, we allow arbitrary ontologies to be the final explanation (the choice then depending on actual empirical measurements). The main criteria for success still include (1) the overall complexity of the model, and (2) the explanatory power of the model. In other words, how easily and precisely does the model reconstruct the behavior of our experiences?

A Zoo of Psychedelic Effects

PsychonautWiki has a detailed and fascinating taxonomy of reported psychedelic visual effects. One could argue that all of these countless effects are completely unique. As a philosopher might put it, these effects may ultimately be qualitatively irreducible to one another. But what are the chances that a simple molecule would happen to trigger a whole zoo of unrelated effects? As a form of reduction, nothing is achieved by stating that every effect is its own unique phenomenon.
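To make the two success criteria above concrete, here is a minimal sketch of a two-part, minimum-description-length style score for comparing candidate reductions. This framing is ours, added purely for illustration; the function name and the use of compressed size as a complexity proxy are assumptions, not part of the original article.

import zlib

import numpy as np


def reduction_score(model_description: str,
                    predicted: np.ndarray,
                    observed: np.ndarray) -> float:
    """Two-part score for a candidate algorithmic reduction (lower is better).

    Criterion (1), overall model complexity, is proxied by the compressed
    size of the model's description; criterion (2), explanatory power, by
    how badly the model's reconstruction misses the observed effects.
    """
    complexity = len(zlib.compress(model_description.encode("utf-8")))
    misfit = float(np.mean((predicted - observed) ** 2))
    return complexity + misfit

Under a score like this, a reduction that posits a separate primitive for every effect pays heavily on the first term, while one that explains nothing pays on the second.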
Four Principal Operators: A Simple Algorithmic Model of Psychedelic States

In trying to account for the strange effects of psychedelics, we will aim to propose as few main effects as possible and then use these effects, and their interactions, to derive all of the remaining effects. By doing this, we will be algorithmically reducing the complex phenomena found in psychedelic states. In turn, this will allow us to increase our understanding of the source of the information-processing benefits provided by psychedelic states, and to derive new and exciting applications of such states. Additionally, by identifying a good algorithmic reduction, we might be able to refine the states themselves, to amplify their benefits while minimizing the drawbacks. The model we will treat for now has four main effects, and with those four effects we will attempt to reconstruct the rest. These effects are:

1. control interruption
2. drifting
3. eidetic hallucinations/enhanced pattern recognition/apophenia
4. symmetry detection/symmetry propagation

Symmetric drifting. What would Giulio Tononi think about this? Source.

Control interruption is the simplest and most universal psychedelic effect. It enables the buildup of qualia in one's consciousness. People say that psychedelics are intense, deep, bright, etc. Every experience, whether a thought, a smell, or an emotion, seems to be both stronger and longer-lasting on psychedelics. Things seem more lively, and this is not because a switch is suddenly turned on and your experience of the current input is amplified. Rather, one seems to be experiencing a gentle overlap of many previous frames (and feature bundles) of one's experience. At medium to high doses, this can give rise to solid frame stacking. In turn, the buildup of sensation creates complex patterns of interference:

In order for a perceptual system to transition from a linear to a nonlinear state, negative feedback control must be subverted. If control is entirely removed then perception becomes totally unconstrained, leaving a system that is quickly overloaded with too much information. If control is placed in a state where it is partially removed or in a toggled superposition where it is alternately in control and not in control over the period of a rapid oscillation, then the constraints of linear sensory throughput will bifurcate into a nonlinear spectrum of multi-stable output with signal complexity correlating to the functional interruption of control. Common entheogenic wisdom states that you must relinquish control and submit to the experience to get the most out of psychedelics. Holding onto control causes negative experiences and amplifies anxiety; letting go of control and embracing unconstrained perception is a central psychedelic tenet. This demonstrates that psychedelics directly subvert feedback control over linear perception to promote states of unconstrained consciousness.

– Control Interrupt Model of Psychedelic Action, PIT

Control interruption explains a large variety of effects, including the increase in the raw intensity (and amount) of experience, as well as the longer-lasting positive afterimages (and thus tracers). Here we show a simple example of this effect. Consider the "original stimuli" to be what one experiences in a sober state. Likewise, consider the 9 squares to be different states of consciousness brought about by various psychotropic combinations.
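Before examining the simulated panels, here is a minimal sketch of the kind of two-parameter feedback model involved. This is our own illustrative code, not the exact model behind the gifs; the parameter names echo_strength and feedback merely mirror the two axes described below.

import numpy as np


def control_interruption(frames: np.ndarray,
                         echo_strength: float,
                         feedback: float) -> np.ndarray:
    """Simulate qualia buildup under interrupted inhibitory control.

    frames: array of shape (T, H, W) with values in [0, 1]. Each output
    frame is the current input plus a decaying echo of everything seen so
    far; echo_strength sets how visible the accumulated trace is, while
    feedback sets how strongly the trace feeds back into itself. With both
    near zero the output tracks the input (the sober state); as they grow,
    tracers appear, and eventually the ordered saturation described below.
    """
    out = np.zeros_like(frames, dtype=float)
    trace = np.zeros(frames.shape[1:], dtype=float)
    for t, frame in enumerate(frames):
        trace = frame + feedback * trace        # re-inject the accumulated signal
        out[t] = np.clip(frame + echo_strength * trace, 0.0, 1.0)
    return out

Sweeping the two parameters over a 3×3 grid yields the kind of panel the simulations below are drawn from.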
The 9 gifs you see above are simulations of control interruption using a simple feedback model (which we will describe in detail in a later article). The x-axis has different "echo strengths" while the y-axis has varying feedback strengths. These are two of the model parameters. Notice that the lower right corner is a credible rendition of something that people describe as moments of eternity. These are experiences where time seems to stop due to an over-saturation of regular and ordered qualia. When considering the following effects, don't forget that control interruption is also going on all the time. The stranger the psychedelic effect, the more intense it is.

Drifting is responsible for breathing walls, animated plants, feelings of boundary dissolution, merging and melting, and so on. Small amounts of drifting usually involve individual feature detachments from perceptual objects (such as the color and shape of a chair becoming dissociated). Medium amounts of drifting make textures flow constantly. If one's experience were made of tiny magnetic gears that are usually aligned in a coherent way, drifting would result from increasing the overall energy of the system. Thus, the visual system is constantly descending to "more aligned local states" while incoming energy is constantly adding noise and destroying all of the alignment progress made.

Source: PsychonautWiki, Anonymous

A particularly salient aspect of drifting is that features and locally-bound fragments of experience can drift in any direction in 3D. Pieces of the wall don't only drift left and right, but also forwards and backwards. On high doses of psychedelics, or on synergistic combinations of dissociatives and psychedelics (e.g. LSD + nitrous, 2C-B + ketamine, etc.), drifting can become all-encompassing. A critical point is crossed when one loses the capacity to define a mainframe of experience (the dominating, orientation-giving island of locally bound experience that we use as a reference point). When this happens, one feels like one cannot tell left from right, or up from down. One simply experiences a constant chaotic flow of experience. In some cases one can even spot interesting instabilities that resemble actual physical instabilities found in fluid mechanics (such as the Kelvin–Helmholtz instability).

Drifting does not occur in isolation, and its mechanics are dependent on the particular set and setting in which the psychedelic experience is developing. From a computational point of view, drifting can be useful because it allows a quick exploration of the state-space of possible local binding configurations between the phenomenal objects present in one's experience. Indeed, not only does red fail to mix with green, but many of the synesthetic qualia varieties present in a scene with constant drifting will refuse to touch each other. Drifting feels like there is some sort of psychedelic energy (somewhat reminiscent of anxiety, but not restricted to body feelings) that overheats certain parts of one's conscious experience and, in turn, disassembles the local connections there.

Enhanced Pattern Recognition: This effect refers to the transient (but often powerful) lowering of the detection threshold for previously experienced patterns and known ontologies (e.g. animals, plants, people, etc.). Psychedelics, in other words, temporarily increase one's degree of apophenia. Another name given to this effect is eidetic hallucinations.
From a Bayesian point of view, the effect could be described thus: psychedelics intensify the effect of our priors. As explained in Getting Closer to Digital LSD, Google's deep belief neural network inceptionist technique works by finding bundles of features that trigger high-level neurons (such as face-detectors, object-detectors, etc.) at sub-threshold levels (e.g. "this almost looks like a frog") and then modifying the picture so that the network more strongly detects those same high-level features. This particular algorithm can be understood in terms of the pharmacological action of psychedelics: one can have breakthroughs of eidetic hallucinations by impairing the inhibitory control coming from the cortex. In a sense we could say that while tracers are the result of "simple cell control interruption", eidetic hallucinations are the result of "complex cell control interruption." The former allows the build-up of colors, edges, and simple shapes, while the latter amplifies the features that trigger high-level percepts such as faces and objects.

Enhanced Pattern Recognition / Eidetic Hallucinations / Visual Apophenia

The way one directs attention during a psychedelic trip influences the way eidetic hallucinations evolve over time. For this reason, any psychedelic replication movie will probably require human input (in the form of eye-tracking) in order to incorporate human saliency preferences and interests into an evolving virtual psychedelic trip simulated with the Inceptionist Method.

Lower Symmetry Detection and Propagation Thresholds: Finally, this is perhaps the most interesting and scientifically salient effect of psychedelics. The first three effects are not particularly difficult to square with standard neuroscience. This fourth effect, while not incompatible with connectionist accounts, does suggest a series of research questions that may hint at an entirely new paradigm for understanding consciousness. I have not seen anyone in the literature specifically identify this effect in all of its generality. The lowering of the symmetry detection threshold really has to be experienced to be believed. I claim that this effect manifests in all psychedelic experiences to a greater or lesser extent, and that many effects can in fact be explained by simply applying this effect iteratively.

Credit: Chelsea Morgan from PsychonautWiki and r/replications

Symmetry detection can be (and typically is) recursively applied to previously detected symmetry bundles. A given symmetry bundle is a set of n-dimensional symmetry planes (lines, hyperplanes, etc.) for which the qualities of the experience surrounding the bundle obey the symmetry constraints imposed by these planes. The planes can create mirror, rotational, or oblique symmetry. Each symmetry bundle is capable of establishing a merging relationship with another symmetry bundle. These relationships are fleeting, but they influence the evolution of the relative positions of the planes of symmetry. When x symmetry planes are in a merging relationship, one's mind tries to re-arrange them (often using drifting) to create a symmetrical arrangement of these x symmetry planes. To do so, the mind detects one (or several) more symmetry planes, along which the previously existing symmetry planes are made to conform, organizing in a symmetrical way (mirror, rotational, translational, or otherwise). There is an irresistible subjective pull towards those higher levels of symmetry.
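One crude way to picture symmetry detection and propagation on an image-like patch of experience is to repeatedly blend a field with its own reflection across a candidate symmetry plane, which pulls the pattern toward exact mirror symmetry. This toy model is ours and purely illustrative; it makes no claim about the underlying neural mechanism.

import numpy as np


def propagate_symmetry(field: np.ndarray, steps: int = 5, rate: float = 0.5) -> np.ndarray:
    """Pull a 2-D field toward mirror symmetry about its vertical midline.

    Each step blends the field with its left-right reflection; rate sets
    how strongly the detected symmetry is amplified per step. Iterating the
    operator corresponds loosely to the recursive application of symmetry
    detection described above: the symmetric component of the pattern is
    preserved while the asymmetric component is progressively damped.
    """
    out = field.astype(float).copy()
    for _ in range(steps):
        mirrored = out[:, ::-1]                    # reflection across the plane
        out = (1 - rate) * out + rate * mirrored   # amplify the shared component
    return out

Running several such operators with different planes, and letting the planes themselves drift and re-arrange, would be one way to mimic merging relationships between symmetry bundles.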
The direction of highest symmetry and meta-symmetry feels blissful, interesting, mind-expanding, and awe-producing. In the future, perhaps at a Super-Shulgin Academy, people will explore and compare the various states of consciousness that exhibit peak symmetry. These states would be the result of iteratively applying symmetry detection, amplification, and re-arrangement. We would see fractals, tessellations, graphs, and higher-dimensional projections. Which one of these experiences contains the highest degree of inter-connectivity? And if psychedelic symmetry is somehow related to conscious bliss, which experience of symmetry is human peak bliss?

The Micro-Structure of Consciousness

At Qualia Computing we explore models of consciousness that acknowledge the micro-structure of consciousness. Experiences are not just higher-order mental operations applied to propositional content. Rather, an instant of experience contains numerous low-level textural properties. This is true for every sensory modality, and I would argue, even for the what-it's-like-ness of thought itself. Even just thinking about a mathematical idea (e.g. "the intersection of two arbitrary sets") is done by interacting with a background of raw feels, and these raw feels determine our attitudes towards and interactions with the ideas we are trying to abstract (some people, for example, experience emotional distress when trying out mathematical problems, and this is not because certain mathematical spaces are inherently unpleasant or anxiety-inducing).

In the case of vision, the micro-structure of consciousness is capable of supporting at least the following low-level features: color, color gradients, points, edges, oriented movement, and acceleration. A full conversation about the range of visual features that we are capable of experiencing is a discussion for another time. For the time being, it will suffice to point out that (static) models of peripheral vision only need 5 summary statistics. With only five summary stats you can create textures that a human will find impossible to distinguish in peripheral vision. These so-called mongrels are textural metamers (equivalence classes of subjectively indistinguishable input patterns). The state-space of perceivable visual textures is the space of possible mongrels, and that is an example of the sort of micro-structure we are looking for. Unlike the cozy high-definition space inscribed in the fovea, most of the information found in our sensory modalities comes in the form of textures that are mappable to state-spaces of summary statistics.

Psychedelic symmetry detection and amplification operates on the inner structure of mongrels. The fact that the mongrels are the objects becoming symmetric is something that can elude introspection until someone points it out. It happens right in front of any tripper's eyes, and yet people don't seem to report it very often (if at all). This may be a result of the fact that the fine-grained structure of consciousness is rarely a topic of conversation, and that we usually describe what we see in the fovea (unless we have no other option). Our words usually refer to whole percepts or, at best, the simplest raw values of experience (such as the hue of colors or the presence of edges). And yet, the structure of our mongrels is quite obvious once symmetry propagation has conformed a large patch of one's experience to a tessellation of an identical mongrel repeating across it.

How Are these Components Related to Each Other?
The Kaleidoscopic technique to induce qualia annealing relies on a combination of drifting and symmetry detection in order to resolve implicit inconsistencies within one's own memory gestalts. As we live and grow our experienced evidence base, we accumulate memories and impressions of many worldviews. Each worldview is, in a way, a response to all of the previous ones (or at least the memorable ones) and to the current situation and the problems one is facing. Thanks to the four effects described here, a person can utilize a psychedelic state to increase the probability of the systematic co-occurrence of (usually) mutually exclusive gestalts (worldviews) and thus enable their mutual awareness. And with mutual awareness, the symmetry detection and amplification effect creates (somewhat forcefully) a unified phenomenal object that incorporates the inconsistent views into an unbiased (or less biased) point of view. One can achieve a higher order of memetic and affective integration.

Mongrel repetition / symmetrical tessellation. Source.

Psychedelics as Introspectoscopes**

Given the symmetry detection and amplification property of psychedelics, one can reasonably argue that psychedelic states may be able to reveal the properties of the micro-structure of consciousness. Timothy Leary, among others, described LSD as a sort of microscope for one's psyche. The very word psychedelic means mind-manifesting (the manifestation of one's mind). Given the four components of these experiences, the fact that psychedelics work as some sort of microscope should not be surprising. Symmetry detection and control interruption multiply the amount of raw experience, while pattern recognition shows you what you are expecting (your priors become evident) and drifting makes the fleeting synesthetic effects malleable and easier to move around.

People generally agree that psychedelics can show you subtle aspects of your own mind with stark clarity. But can they reveal the intrinsic properties of the nature of qualia at the most fundamental level? The way to achieve this may be to create a fractal structure of symmetries in such a way that any tiny part of one's experience can get reflected throughout the entirety of the phenomenal structure. One can then use eidetic hallucinations (or further symmetry detection) to focus and stabilize the fractal structure. Thus one would multiply the surface area of all of one's attention into countless replicas of the micro-structure of a given part of one's experience. A fractal kaleidoscopic mirror amplifier chamber is exactly what I imagine when I think about how to analyze the fine-grained structure of consciousness. And it so happens that meditation plus psychedelics can allow you to (fleetingly) build just that.

Psychedelic Introspectoscope (fractal kaleidoscope of generalized symmetries) to amplify arbitrary qualia values (such as particular emotions, phenomenal colors, synesthetic inter-junctions, etc.)

Any subtle qualia space can be multiplied countless times in such a way that all of one's experience becomes a coherent interlocking structure that can be perceived all at once. If one wants to study, for example, the possible interactions between two hues of color, one can amplify the boundary between two regions that make the desired contrast of hues and make the entire fractal structure amplify this boundary hundreds of times.
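As a toy illustration of this fractal mirror-chamber idea (our own sketch, not a QRI method), one can tile a small patch of a visual field across the whole frame with alternating mirror flips, so that every detail of the patch, such as a boundary between two hues, is replicated everywhere at once:

import numpy as np


def kaleidoscope_amplify(patch: np.ndarray, reps: int = 8) -> np.ndarray:
    """Tile a small 2-D patch across a large frame with alternating mirror
    flips. Any minute detail of the patch is thereby repeated across the
    entire field, crudely mimicking how a kaleidoscopic structure could
    promote a tiny region of experience to global prominence.
    """
    rows = []
    for i in range(reps):
        row = []
        for j in range(reps):
            tile = patch[::-1] if i % 2 else patch       # flip vertically on odd rows
            tile = tile[:, ::-1] if j % 2 else tile      # flip horizontally on odd columns
            row.append(tile)
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)

The mirror flips guarantee that adjacent tiles meet seamlessly, which is what gives the tessellation its interlocking, wallpaper-like character.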
Arguably, if one discovers that certain qualia values cannot be mixed in the introspectoscope (such as blue and yellow), one may still not know whether these are fundamental constraints or the result of our connectome structure. If, on the other hand, two qualia values can mix in the introspectoscope, then we would know that they are not fundamentally mutually exclusive. Thus we would find out relational properties of the very state-space of qualia.

Reducing All Effects

Can we derive all psychedelic effects using the four components discussed above? While this is not yet possible, I trust that further work will show how most of the weird (and weirder) effects of psychedelics may be reduced to relatively simple (but not always atomistic) algorithms applied to the micro-structure of consciousness. I anticipate that we will discover that high doses actually produce entirely new effects (for example, what happens on 400 micrograms of LSD often includes qualitative jumps from what happens at 150 micrograms). To note, ontological qualia and other subtle aspects of consciousness may resist reduction for many more decades to come.

*Programming and Metaprogramming in the Human Biocomputer

**An Introspectoscope is a hypothetical apparatus that enables a person to study the deep structure of his or her own consciousness. The concept comes from a paper in the making by Andrew Y. Lee. Obviously this comes with significant challenges. Some challenges come from the fact that we are trying to analyze something very small, and other challenges come from the fact that we are trying to analyze qualia. Additionally, there are unique challenges that come from analyzing microscopic qualia qua microscopic qualia. I suggest that we use methods that amplify the micro-structure by taking advantage of fractal states: recursive and scale-free symmetry planes can amplify anything minute to a prominent place in the entire consciousness.

Qualia Computing in Tucson: The Magic Analogy

Panpsychism and Panprotopanpsychism, David Chalmers (2011)

1. What conception of consciousness does this person have?

Playing Rogue

Our Conception of Consciousness

Theoretical Requirements

Background Assumptions

In order to make sense of Qualia Computing, it makes sense to be explicit about the background assumptions that we hold. Without explaining them in depth, here are some key assumptions that color the way we think about consciousness:

A Battle of Wits

A Broken Political Analogy

Magic: The Gathering analogy

Types (Clusters)

The Cards and Deck

Types of Consciousness Theories

1. Integrated Information Theory (IIT)
2. Orchestrated Objective Reduction (Orch OR)
3. Prediction Error Minimization (PEM)
4. Global Neuronal Workspace Theory (GNWS)
5. Panprotopanpsychism (not explicitly named)
6. Nondual Consciousness Monism (not explicitly named)
8. Higher Order Thought Theory (HOT)

Some Definitions

Both physics and philosophy are jargon-ridden. So let's first define some key concepts. Both "consciousness" and "physical" are contested terms. Accurately if inelegantly, consciousness may be described following Nagel ("What is it like to be a bat?") as the subjective what-it's-like-ness of experience. Academic philosophers term such self-intimating "raw feels" "qualia" – whether macro-qualia or micro-qualia.
The minimum unit of consciousness (or "psychon", so to speak) has been variously claimed to be the entire universe, a person, a sub-personal neural network, an individual neuron, or the most basic entities recognised by quantum physics. In The Principles of Psychology (1890), American philosopher and psychologist William James christened these phenomenal simples "primordial mind-dust". This paper conjectures that (1) our minds consist of ultra-rapidly decohering neuronal superpositions in strict accordance with unmodified quantum physics without the mythical "collapse of the wavefunction"; (2) natural selection has harnessed the properties of these neuronal superpositions so our minds run phenomenally-bound world-simulations; and (3) with enough ingenuity the non-classical interference signature of these conscious neuronal superpositions will be independently experimentally detectable (see 6 below) to the satisfaction of the most incredulous critic.

The "physical" may be contrasted with the supernatural or the abstract and – by dualists and epiphenomenalists – with the mental. The current absence of any satisfactory "positive" definition of the physical leads many philosophers of science to adopt instead the "via negativa". Thus some materialists have sought stipulatively to define the physical in terms of an absence of phenomenal experience. Such a priori definitions of the nature of the physical are question-begging.

"Physicalism" is sometimes treated as the formalistic claim that the natural world is exhaustively described by the equations of physics and their solutions. Beyond these structural-relational properties of matter and energy, the term "physicalism" is also often used to make an ontological claim about the intrinsic character of whatever the equations describe. This intrinsic character, or metaphysical essence, is typically assumed to be non-phenomenal. "Strawsonian physicalists" (cf. "Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?") dispute any such assumption. Traditional reductive physicalism proposes that the properties of larger entities are determined by the properties of their physical parts. If the wavefunction monism of post-Everett quantum mechanics assumed here is true, then the world does not contain discrete physical parts as understood by classical physics.

"Materialism" is the metaphysical doctrine that the world is made of intrinsically non-phenomenal "stuff". Materialism and physicalism are often treated as cousins and sometimes as mere stylistic variants – with "physicalism" used as a nod to how bosonic fields, for example, are not matter. "Physicalistic materialism" is the claim that physical reality is fundamentally non-experiential and that the natural world is exhaustively described by the equations of physics and their solutions.

"Panpsychism" is the doctrine that the world's fundamental physical stuff also has primitive experiential properties. Unlike the physicalistic idealism explored here, panpsychism doesn't claim that the world's fundamental physical stuff is experiential.

"Epiphenomenalism" in philosophy of mind is the view that experience is caused by material states or events in the brain but does not itself cause anything; the causal efficacy of mental agency is an illusion.

For our purposes, "idealism" is the ontological claim that reality is fundamentally experiential. This use of the term should be distinguished from Berkeleyan idealism, and more generally, from subjective idealism, i.e.
the doctrine that only mental contents exist: reality is mind-dependent. One potential source of confusion between contemporary scientific idealism and traditional philosophical idealism is the use, by inferential realists in the theory of perception, of the term "world-simulation". The mind-dependence of one's phenomenal world-simulation, i.e. the quasi-classical world of one's everyday experience, does not entail the idealist claim that the mind-independent physical world is intrinsically experiential in nature – a far bolder conjecture that we nonetheless tentatively defend here.

"Physicalistic idealism" is the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions: more specifically, by the continuous, linear, unitary evolution of the universal wavefunction of post-Everett quantum mechanics. The "decoherence program" in contemporary theoretical physics aims to show in a rigorously quantitative manner how quasi-classicality emerges from the unitary dynamics.

"Monism" is the conjecture that reality consists of a single kind of "stuff" – be it material, experiential, spiritual, or whatever. Wavefunction monism is the view that the universal wavefunction mathematically represents, exhaustively, all there is in the world. Strictly speaking, wavefunction monism shouldn't be construed as the claim that reality literally consists of a certain function, i.e. a mapping from some mind-wrenchingly immense configuration space to the complex numbers, but rather as the claim that every mathematical property of the wavefunction except the overall phase corresponds to some property of the physical world.

"Dualism", the conjecture that reality consists of two kinds of "stuff", comes in many flavours: naturalistic and theological; interactionist and non-interactionist; property and ontological. In the modern era, most scientifically literate monists have been materialists. But to describe oneself as both a physicalist and a monistic idealist is not the schizophrenic word-salad it sounds at first blush.

"Functionalism" in philosophy of mind is the theory that mental states are constituted solely by their functional role, i.e. by their causal relations to other mental states, perceptual inputs, and behavioural outputs. Functionalism is often associated with the idea of "substrate-neutrality", sometimes misnamed "substrate-independence", i.e. minds can be realised in multiple substrates and at multiple levels of abstraction. However, micro-functionalists may dispute substrate-neutrality on the grounds that one or more properties of mind, for example phenomenal binding, functionally implicate the world's quantum-mechanical bedrock from which the quasi-classical worlds of Everett's multiverse emerge. Thus this paper will argue that only successive quantum-coherent neuronal superpositions at naively preposterously short time-scales can explain phenomenal binding. Without phenomenal binding, no functionally adaptive classical world-simulations could exist in the first instance.

The "binding problem" (10), also called the "combination problem", refers to the mystery of how the micro-experiences mediated by supposedly discrete and distributed neuronal edge-detectors, motion-detectors, shape-detectors, colour-detectors (etc.) can be "bound" into unitary experiential objects ("local" binding) apprehended by a unitary experiential self ("global" binding).
Neuroelectrode studies using awake, verbally competent human subjects confirm that neuronal micro-experiences exist. Classical neuroscience cannot explain how they could ever be phenomenally bound.

"Mereology" is the theory of the relations of part to whole and of part to part within a whole. Scientifically literate humans find it natural and convenient to think of particles, macromolecules, or neurons as having their own individual wavefunctions by which they can be formally represented. However, the manifest non-classicality of phenomenal binding means that in some contexts we must consider describing the entire mind-brain via a single wavefunction. Organic minds are not simply the "mereological sum" of discrete classical parts. Organic brains are not simply the "mereological sum" of discrete classical neurons.

"Quantum field theory" is the formal, mathematico-physical description of the natural world. The world is made up of the states of quantum fields, conventionally non-experiential in character, that take on discrete values. Physicists use mathematical entities known as "wavefunctions" to represent quantum states. Wavefunctions may be conceived as representing all the possible configurations of a superposed quantum system. Wavefunction(al)s are complex-valued functionals on the space of field configurations. Wavefunctions in quantum mechanics are sinusoidal functions with an amplitude (a "measure") and also a phase. The Schrödinger equation, $i\hbar\,\partial_t \Psi = \hat{H}\Psi$, describes the time-evolution of a wavefunction.

"Coherence" means that the phases of the wavefunction are kept constant between the coherent particles, macromolecules, or (hypothetically) neurons, while "decoherence" is the effective loss of ordering of the phase angles between the components of a system in a quantum superposition. Such thermally-induced "dephasing" rapidly leads to the emergence – on a perceptual naive realist story – of classical, i.e. probabilistically additive, behaviour in the central nervous system ("CNS"), and also to the illusory appearance of separate, non-interfering organic macromolecules. Hence the discrete, decohered classical neurons of laboratory microscopy and biology textbooks. Unlike classical physics, quantum mechanics deals with superpositions of probability amplitudes rather than of probabilities; hence the interference terms in the probability distribution. Decoherence should be distinguished from dissipation, i.e. the loss of energy from a system – a much slower, classical effect. Phase coherence is a quantum phenomenon with no classical analogue.

If quantum theory is universally true, then any physical system such as a molecule, neuron, neuronal network, or an entire mind-brain exists partly in all its theoretically allowed states, or configurations of its physical properties, simultaneously in a "quantum superposition"; informally, a "Schrödinger's cat state". Each state is formally represented by a complex vector in Hilbert space. Whatever overall state the nervous system is in can be represented as a superposition of varying amounts of these particular states ("eigenstates"), where the amount that each eigenstate contributes to the overall sum is termed a component. The "Schrödinger equation" is a partial differential equation that describes how the state of a physical system changes with time. The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value.
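In symbols (a standard textbook formulation, added here for reference; it is not part of Pearce's text):

\[
i\hbar \frac{\partial}{\partial t}\,|\Psi(t)\rangle = \hat{H}\,|\Psi(t)\rangle,
\qquad
|\Psi\rangle = \sum_n c_n\,|\phi_n\rangle,
\qquad
P(n) = |c_n|^2 .
\]

Here the $|\phi_n\rangle$ are the eigenstates and the components $c_n$ are complex amplitudes; only their squared moduli behave as probabilities, while the phases of the $c_n$ carry the interference information discussed next.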
The absolute value of the probability amplitude encodes information about probability densities, so to speak, whereas its phase encodes information about the interference between quantum states. On measurement by an experimenter, the value of the physical quantity in a quantum superposition will naively seem to "collapse" in an irreducibly stochastic manner, with a probability equal to the squared modulus of the corresponding coefficient in the linear combination. If the superposition principle really breaks down in the mind-brain, as traditional Copenhagen positivists still believe, then the central conjecture of this paper is false.

"Mereological nihilism", also known as "compositional nihilism", is the philosophical position that objects with proper parts do not exist, whether extended in space or in time. Only basic building blocks (particles, fields, superstrings, branes, information, micro-experiences, quantum superpositions, entangled states, or whatever) without parts exist. Such ontological reductionism is untenable if the mind-brain supports macroscopic quantum coherence in the guise of bound phenomenal states, because coherent neuronal superpositions describe individual physical states. Coherent superpositions of neuronal feature-detectors cannot be interpreted as classical ensembles of states. Radical ontological reductionism is even more problematic if post-Everett (11) quantum mechanics is correct: reality is exhaustively described by the time-evolution of one gigantic universal wavefunction. If such "wavefunction monism" is true, then talk of how neuronal superpositions are rapidly "destroyed" is just a linguistic convenience, because a looser, heavily-disguised coherence persists within a higher-level Schrödinger equation (or its relativistic generalisation) that subsumes the previously tighter entanglement within a hierarchy of wavefunctions, all ultimately subsumed within the universal wavefunction.

"Direct realism", also known as "naive realism", about perception is the pre-scientific view that the mind-brain is directly acquainted with the external world. In contrast, the "world-simulation model" (12) assumed here treats the mind-brain as running a data-driven simulation of gross fitness-relevant patterns in the mind-independent environment. As an inferential realist, the world-simulationist is not committed per se to any kind of idealist ontology, physicalistic or otherwise. However, s/he will understand phenomenal consciousness as broader in scope than the traditional perceptual direct realist does. The world-simulationist will also be less confident than the direct realist that we have any kind of pre-theoretic conceptual handle on the nature of the "physical" beyond the formalism of theoretical physics – and our own phenomenally-bound physical consciousness.

"Classical worlds" are what perceptual direct realists call the world. Quantum theory suggests that the multiverse exists in an inconceivably vast cosmological superposition. Yet within our individual perceptual world-simulations, familiar macroscopic objects 1) occupy definite positions (the "preferred basis" problem); 2) don't readily display quantum interference effects; and 3) yield well-defined outcomes when experimentally probed. Cats are either dead or alive, not dead-and-alive. Or as one scientific populariser puts it, "Where Does All the Weirdness Go?" This paper argues that the answer lies under our virtual noses – though independent physical proof will depend on next-generation matter-wave interferometry.
Phenomenally-bound classical world-simulations are the mind-dependent signature of the quantum "weirdness". Without the superposition principle, no phenomenally-bound classical world-simulations could exist – and no minds. In short, we shouldn't imagine superpositions of live-and-dead cats, but instead think of superpositions of colour-, shape-, edge- and motion-processing neurons. Thanks to natural selection, the content of our waking world-simulations typically appears classical; but the vehicle of the simulation that our minds run is inescapably quantum. If the world were classical, it wouldn't look like anything to anyone.

A "zombie", sometimes called a "philosophical zombie" or "p-zombie" to avoid confusion with its lumbering Hollywood cousins, is a hypothetical organism that is materially and behaviourally identical to humans and other organic sentients but which isn't conscious. Philosophers explore the epistemological question of how each of us can know that s/he isn't surrounded by p-zombies. Yet we face a mystery deeper than the ancient sceptical Problem of Other Minds. If our ordinary understanding of the fundamental nature of matter and energy as described by physics is correct, and if our neurons are effectively decohered classical objects as suggested by standard neuroscience, then we all ought to be zombies. Following David Chalmers, this is called the Hard Problem of consciousness.

– Non-Materialist Physicalism: An Experimentally Testable Conjecture, by David Pearce

Personal Identity Joke

The bartender is an Empty Individualist.

Closed: "Well, the universe. I think…"

Open: "Did we call you into the conversation?"
Mathematical Equations That Remarkably Impacted The World

Calculation, equations, and mathematics have continuously revolutionized our world. From the time mankind first wanted to calculate the area of a field for growing crops, there has been a thirst to know and understand the secrets of the world. Why do apples fall down rather than fly? Is there a pattern to the movement of the stars? What can assist in navigation? Why do birds fly while we cannot? These questions of curious minds led to a thirst for knowledge, and the answers provided the means to modernize the world, one invention at a time!

1- Calculus

Due to its broad applicability, calculus is used not only in mathematics but in engineering, biology, physics, chemistry, and many more branches of science. Calculus can help you determine the movement of weather patterns, the movement of sound and light, and the motion of astronomical objects.

Euler's Polyhedra Formula

Fourier Transform

2- Law of Gravity

Gravity is an undeniable force responsible for the existence of our planet. The law of gravity helps in the evaluation of weight and speed, leading to significant modernizations, including race cars and airplanes.

3- Logarithms

There are many examples of the use of logarithms in the real world, from interest rates to Google page rankings. Logarithms also turn multiplication into addition, which makes large calculations far easier to carry out.

4- Maxwell's Equations

This is a set of 4 differential equations that describe the relation between electricity and magnetism. These equations are the basis for understanding the behavior of electromagnetism. From MRI scanners in hospitals to computers – the credit goes to the basic understanding of Maxwell's equations.

5- Navier-Stokes Equations

These differential equations helped us understand the behavior of flowing liquids, such as smoke rising from a cigarette, water moving through pipes, and air flowing over plane wings. The Navier-Stokes equations are also used to model the weather and observe ocean currents.

6- Normal Distribution

The normal probability distribution forms a bell curve and is significant in statistics. It is used in the social sciences, physics, and biology to describe the behavior of large groups of independent processes. The normal distribution is followed in the measurement of errors, height, IQ scores, and blood pressure.

7- Quadratic Equation

Various phenomena are modeled by quadratic equations, including shooting a cannon, hitting a golf ball, and diving. You can also calculate the profit you expect to make using a quadratic equation. It can prevent unwanted surprises and provide you with accurate numbers for what to expect in the future. Even in a business where you simply sell bottled water, it can help you estimate how many bottles you have to sell to generate the profit you want.

8- Relativity

Relativity opened the door to understanding – be it our understanding of outer space or of the speed of light. It provided us with the idea that the speed of light is universal, but that time passes differently for people or objects moving at different speeds. Relativity helped us understand the fate, structure, and origin of the universe.

9- Schrödinger's Equation

The behavior of atomic and subatomic particles is described by Schrödinger's equation. It advanced our understanding of quantum physics and hence played a huge role in the development of computing devices.
Computational chemistry is the direct application of Schrödinger equation and it is currently being used in medication and engineered food. 10- Second Law of Thermodynamics According to the second law of thermodynamics, heat flow from hot to cold environment due to the change in temperature. This is the concept used in the working of internal combustion engines used in airplane, ship, car, and motorcycles. The law is applicable to all engine cycles and led to the progress of modern vehicles. 11- The Pythagorean Theorem Whenever you need to find out if a triangle in acute, right-angled, or obtuse – you can use Pythagoras theorem for that. It made the life of mathematicians easier as it help them to find the missing length of any side of a triangle. 12- The square root of -1 The square root of -1 = I, this process gave rise to complex numbers that are supremely elegant. In case an equation have complex number solution, it will represented by ‘I’. With the help of this equation, mathematicians were able to find symmetries and the properties of the number which are implemented in signal processing and electronics. 13- Wave Equation Wave equation as the name indicates describe the behavior of waves along with ripples, guitar strings, and incandescent bulb light. It is one of the first differential equations that helped us understand other differential equations as well. The world of mathematics is abundant with equations that helped us revolutionize the world as we know it today. We were not only able to understand the concept behind natural phenomenon ut also manipulate them for the modern advancement. These were just the few examples, stay tuned to know more!
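The bottled-water remark in item 7 above can be made concrete with a few lines of code. The coefficients below are purely hypothetical (they are not from the article); the point is only that the break-even sales range is the pair of roots of a quadratic profit model:

```python
import math

# Toy profit model (hypothetical numbers, not from the article):
# profit(n) = a*n^2 + b*n + c for n bottles sold, with diminishing
# returns (a < 0) and fixed costs (c < 0).
a, b, c = -0.001, 1.5, -200.0

disc = b**2 - 4 * a * c                     # discriminant of the quadratic
roots = sorted((-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1))
print(f"Profit is positive between {roots[0]:.0f} and {roots[1]:.0f} bottles")
```

Running it reports a profitable window between roughly 148 and 1352 bottles for these made-up coefficients; any other profit model just changes a, b, and c.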
Chemistry definitions and branches What is chemistry Chemistry is defined as the science that studies matter – its properties, composition, structure, and transformations – and the energy released or absorbed during chemical processes and reactions. Every substance, whether natural or artificial, consists of one or more kinds of atoms that have been identified as elements. Although these atoms are themselves composed of elementary particles, they form the basis of chemical substances. Chemical reactions rearrange the atoms of one or more substances to produce different materials as new products. Chemistry can also be defined as the science of matter that looks at the composition, structure, properties, behavior, and interactions of matter. It studies the properties of compounds and elements and the laws that govern their interactions: combining substances is called synthesis, and separating them from each other is called analysis. Main branches of chemistry There are different main branches, including: • Analytical chemistry The physical and chemical properties of materials are determined and measured using qualitative and quantitative methods. • Organic chemistry This science is concerned with the study of compounds that contain carbon. Carbon has many unique characteristics: it can form complex chemical bonds and very large molecules. • Physical chemistry It is a combination of chemistry and physics, studying how matter interacts with energy; thermodynamics and quantum mechanics are two of its branches. It is one of the traditional subdisciplines, applying the concepts and theories of physics to the analysis of the chemical properties and reactivity of substances – an interface between physics and chemistry. • Inorganic chemistry This science is dedicated to studying substances not based on carbon, such as metals and gases. It is concerned with the properties and behavior of inorganic compounds, including metals, minerals, and organometallic compounds. Although organic chemistry is defined as the study of carbon compounds and inorganic chemistry as the study of the remaining classes of compounds, there is overlap between the two fields (for example, in organometallic compounds, where metals bond directly to carbon). • Biochemistry This branch examines the chemical processes that take place inside living organisms. Other branches There are different other branches, including: • Polymer chemistry This branch examines plastics and long-chain molecules that are built by joining small units together. It studies huge, complex molecules composed of very small (often repeating) units. Polymer chemists study how small building blocks (monomers) join and combine, work out monomer and polymer molecular structures, formulate monomer/polymer mixtures, and use chemical and processing methods that make it possible to create useful substances with specific features; to a large degree, this determines the quality of the final product. Polymers are unique in that the connection between their structure and their properties extends from the molecular scale up to the macroscopic scale. • Nuclear chemistry This branch studies nuclear interactions. It is concerned with the properties and changes of atomic nuclei, in contrast with conventional chemistry, which involves properties and changes related to the electronic structure of atoms and molecules.
The topic includes, for example, the study of radioactivity and nuclear reactions. • Chemical thermodynamics Chemical thermodynamics deals with the energy changes that occur during a chemical reaction and with how temperature and pressure differences affect the reaction. • Quantum chemistry It analyzes the distribution of electrons within molecules and interprets the chemical behavior of these molecules on that basis, applying quantum mechanics to theoretical studies of chemical systems. Its purpose is, in principle, to solve the Schrödinger equation for the system under investigation; however, the equation's complexity for all but the simplest atoms or molecules requires simplifying assumptions and approximations, leading to a trade-off between accuracy and computational cost. • Surface chemistry This branch examines the surface properties of chemical substances and the processes that occur at surfaces. • Applied chemistry This branch is concerned with the practical application of matter and chemical processes. Unlike pure chemistry, which aims to enhance knowledge within the field, applied chemistry uses principles and theories to answer a specific question or solve a real-world problem. Say your goal is to find a cure for a disease – Alzheimer's. You work hard in the lab to develop a drug that prevents dementia. This would be an example of applied chemistry, because you used chemistry to solve a particular, real-world problem. • Qualitative analysis Detects which compounds and elements make up a material. • Quantitative analysis Estimates the quantities of the different chemicals that make up a material. (In finance, quantitative analysis refers instead to mathematical and statistical methods that study behavior and predict results, which investors and administrators use in their decision-making; through financial research and analysis, this form of analysis seeks to evaluate investment opportunities or estimate changes in economic value.) • Agricultural chemistry It is concerned with the development of pesticides and, more broadly, with the chemistry of agriculture. It studies factors such as agricultural production, the use of agricultural products, and environmental issues, and develops methods for improving them, drawing on the relationships between plants, animals, and the environment to improve the agricultural sector. • Chemical kinetics Chemical kinetics is concerned with studying the steps in a chemical reaction. It is the study of chemical processes and reaction rates. This includes analyzing the conditions that affect the speed of a chemical reaction, understanding reaction mechanisms and transition states, and forming mathematical models to predict and describe a chemical reaction. The rate constant of a first-order chemical reaction carries units of s-1. • Radiochemistry This branch studies the chemical effects of radiation on materials and the chemistry of radioactive isotopes. Atoms with the same number of protons but a different number of neutrons are isotopes. To identify an isotope we use the notation AZE, where E is the atomic symbol of the element, Z is the atomic number, and A is the atomic mass number. Although different isotopes of an element have the same chemical properties, their nuclear properties are different. The most important difference between isotopes is their stability. The nuclear configuration of a stable isotope remains constant over time. Unstable isotopes, however, spontaneously disintegrate, emitting radiation as they are converted to more stable forms.
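To make the kinetics remark concrete: the s^-1 unit belongs to the rate constant of a first-order process, which is also the law governing the radioactive decay just mentioned under radiochemistry. A minimal sketch, with illustrative numbers only:

```python
import math

# First-order kinetics: dN/dt = -k*N  =>  N(t) = N0 * exp(-k*t).
# k carries units of s^-1, matching the unit quoted above.
k = 0.05      # hypothetical rate constant, in s^-1
N0 = 1000.0   # initial amount, arbitrary units

half_life = math.log(2) / k   # t_1/2 = ln(2)/k, about 13.86 s here
for t in (0.0, 10.0, 20.0, half_life):
    print(f"t = {t:6.2f} s  ->  N = {N0 * math.exp(-k * t):8.2f}")
```

At t = t_1/2 the printed amount is exactly half of N0, which is the defining property of first-order decay.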
The importance of chemistry lies in several fields of life, as follows: • Cooking It explains how foods change while cooking, preserving, and rotting. The chemicals in our diet are often put into four categories: carbohydrates, proteins, fats, and everything else. This final set does not have specific shared properties but includes vitamins, minerals, medicines, and hundreds of trace chemicals that we each consume every day. • Medicine It explains how vitamins, nutritional supplements, and drugs work in the body, and other matters of medicine. • Environmental issues It explains what makes a substance a pollutant and how to keep pollutants from damaging the environment. • Cleaning It explains how detergents work, makes it easier to identify the best types of detergent, and covers everything related to them. Chemistry is considered one of the natural sciences that humans have used and researched since ancient times. In antiquity this science played a huge role in many fields that touched human life directly and vitally. Every substance, whether natural or synthetic, is composed of one or more kinds of atoms that are recognized as elements; they form the basis of chemical substances, though they themselves contain elementary particles. Chemical synthesis rearranges the atoms of one or more substances to produce different materials, such as new products. Chemistry begins with the study of elementary bodies, molecules, atoms, crystals, and other aggregates of matter in their liquid, solid, or gaseous states, whether isolated or combined with each other. This behavior is studied in laboratories using various kinds of laboratory equipment and instruments. Food Chemistry Food science deals with the three biological components of food – carbohydrates, fats, and proteins. Carbohydrates are sugars and starches, the chemical fuel needed for our cells to function. Fats are fats and oils, essential parts of cell membranes and of the lubrication and cushioning of organs within the body. Because fats contain 2.25 times the energy per gram of carbohydrates or protein, many people try to limit their intake to avoid becoming overweight. Proteins are complex molecules consisting of 100 to 500 amino acids, which are bound together and folded into the three-dimensional shapes required for the structure and function of each cell. Our bodies can synthesize some amino acids; however, eight of them, the essential amino acids, must be taken in as part of our diet. Food scientists are also interested in the inorganic ingredients of foods, such as water, minerals, vitamins, and enzymes. The emergence and development of chemistry are due to the need of human beings to understand the world and its different elements, as well as to the way chemistry is bound up with all aspects of life. Man has therefore worked since ancient times to adapt materials so as to satisfy his diverse and ever-renewing needs throughout history. Through chemistry, material profit was sought by transforming materials and producing new goods, and eternal health was pursued through many experiments to prepare the elixir of life.
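Returning to the food-chemistry numbers above: the 2.25 figure quoted for fats corresponds to the familiar Atwater values of roughly 9 kcal/g for fat against 4 kcal/g for carbohydrate or protein (9/4 = 2.25). A minimal sketch of the resulting bookkeeping, with a hypothetical food as input:

```python
# Atwater-style energy estimate: 9 kcal/g for fat is 2.25x the
# 4 kcal/g of carbohydrate or protein, matching the ratio above.
KCAL_PER_GRAM = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}

def food_energy(grams: dict) -> float:
    """Total energy (kcal) given grams of each macronutrient."""
    return sum(KCAL_PER_GRAM[m] * g for m, g in grams.items())

# Hypothetical snack: 30 g carbohydrate, 5 g protein, 10 g fat.
print(food_energy({"carbohydrate": 30, "protein": 5, "fat": 10}))  # 230.0
```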
Famous chemists and their contributions
Marie Curie – Discovery of radium and polonium
John Dalton – Identifying and presenting the atomic theory
George Washington Carver – Promoting alternative crops to cotton
Louis Pasteur – The process of pasteurization and the creation of vaccines
Alfred Nobel – Inventing dynamite
Rosalind Franklin – X-ray diffraction work central to the discovery of the structure of DNA
Antoine Lavoisier – Being the "Father of Modern Chemistry"
Robert Boyle – Being the first "modern chemist"
Linus Pauling – His work in molecular biology and quantum chemistry
Dmitri Mendeleev – Creating the periodic table of elements used in chemistry and physics
Joseph Priestley – Inventing soda water
Mario Molina – Work on the chemistry behind the Antarctic ozone hole
Humphry Davy – The discovery of several alkali and alkaline earth metals
Otto Hahn – Being the "Father of Nuclear Chemistry"
Svante Arrhenius – The theory of the greenhouse effect and co-founding the science of physical chemistry
Ahmed Zewail – Being the "Father of Femtochemistry"
Frederick Sanger – Determining base sequences in nucleic acids
Stanislao Cannizzaro – The Cannizzaro reaction
Thomas Graham – His work on the diffusion of gases and the application of dialysis
Published: Jan. 12, 2018 Quantum leap: Novel computational approach launches new paradigm in electronic structure theory Contact(s): Val Osowski, College of Natural Science, office: (517) 432-4561; Caleb Hoover, Media Communications, cell: (248) 939-7493 A group of Michigan State University researchers specializing in quantum calculations has proposed a radically new computational approach to solving the complex many-particle Schrödinger equation, which holds the key to explaining the motion of electrons in atoms and molecules. By understanding the details of this motion, one can determine the amount of energy needed to transform reactants into products in a chemical reaction, or the color of light absorbed by a molecule, and ultimately accelerate the design of new drugs and materials, better catalysts and more efficient energy sources. The work, led by Piotr Piecuch, University Distinguished Professor in the Department of Chemistry and adjunct professor in the Department of Physics and Astronomy in the College of Natural Science, was published recently in Physical Review Letters. Also involved in the work are fourth-year graduate student J. Emiliano Deustua and senior postdoctoral associate Jun Shen. The group provides details of a new way of obtaining highly accurate electronic energies by merging deterministic coupled-cluster approaches with stochastic sampling based on probability concepts. “Instead of insisting on a single philosophy when solving the electronic Schrödinger equation, which has historically been either deterministic or stochastic, we have chosen a third way,” Piecuch said. “As one of the reviewers noted, the essence of it is remarkably simple: use the stochastic approach to determine what is important and the deterministic approach to describe the important, while correcting for the information missed by stochastic sampling.” Solving the Schrödinger equation for the many-electron wave function has been a key challenge in quantum chemistry for decades. Anything beyond a one-electron problem, such as the hydrogen atom, requires resorting to numerical methods, converted into sophisticated computer programs, such as those developed by Piecuch and his group. The main difficulty has been the intrinsic complexity of the electronic motion, which quantum chemists and physicists call “electron correlation.” The new idea is to use the stochastic methods to identify the leading wave function components and the deterministic coupled-cluster computations, combined with suitable energy corrections, to provide the missing information. The merging of deterministic and stochastic approaches as a general method of solving the many-particle Schrödinger equation may also impact other areas, such as nuclear physics. “In the case of nuclei, instead of being concerned with electrons, one would use our new approach to solve the Schrödinger equation for protons and neutrons,” Piecuch said. “The mathematical and computational issues are similar. Just like chemists want to understand the electronic structure of a molecule, nuclear physicists want to unravel the structure of the atomic nucleus. Once again, solving the many-particle Schrödinger equation holds the key.”
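The press release gives no algorithmic details beyond "stochastic selection plus deterministic solve", so the following is only a toy illustration of that division of labor, not the group's actual coupled-cluster method: sample which basis states carry weight in a cheap approximate ground state, then diagonalize the Hamiltonian deterministically inside the selected subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Hamiltonian" in a 200-dimensional configuration space.
n = 200
H = rng.normal(size=(n, n))
H = (H + H.T) / 2
H += np.diag(np.linspace(0.0, 50.0, n))      # make low states dominate

E_exact = np.linalg.eigvalsh(H)[0]

# Stochastic step: estimate which configurations matter by sampling
# amplitudes of a cheap approximate ground state (a few inverse
# iterations), then keep the most frequently sampled ones.
v = rng.normal(size=n)
for _ in range(50):
    v = np.linalg.solve(H + 60.0 * np.eye(n), v)  # inverse iteration
    v /= np.linalg.norm(v)
probs = v**2 / np.sum(v**2)
counts = rng.multinomial(500, probs)
keep = np.argsort(counts)[-30:]                   # "important" subspace

# Deterministic step: solve exactly inside the selected subspace.
E_sub = np.linalg.eigvalsh(H[np.ix_(keep, keep)])[0]
print(f"exact E0 = {E_exact:.4f}, subspace E0 = {E_sub:.4f}")
```

For this toy matrix the subspace energy typically lands close to the exact ground-state energy while diagonalizing a 30x30 block instead of the full 200x200 problem — the flavor, if not the machinery, of the hybrid philosophy described above.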
Canonical commutation relation In quantum mechanics, the canonical commutation relation is the fundamental relation between canonically conjugate quantities (quantities which are related by definition such that one is the Fourier transform of the other). For example, between the position x and momentum p_x in the x direction of a point particle in one dimension,

[x, p_x] = x p_x − p_x x = iℏ,

where [x, p_x] is the commutator of x and p_x, i is the imaginary unit, and ℏ is the reduced Planck constant h/2π. In general, position and momentum are vectors, and the commutation relation between different components of position and momentum can be expressed as

[x_i, p_j] = iℏ δ_ij.

This relation is attributed to Max Born (1925), who called it a "quantum condition" serving as a postulate of the theory; it was noted by E. Kennard (1927) to imply the Heisenberg uncertainty principle. Relation to classical mechanics By contrast, in classical physics all observables commute and the commutator would be zero. However, an analogous relation exists, which is obtained by replacing the commutator with the Poisson bracket multiplied by iℏ:

{x, p} = 1.

This observation led Dirac to propose that the quantum counterparts x̂, p̂ of classical observables x, p satisfy

[x̂, p̂] = iℏ {x, p}.

In 1946, Hip Groenewold demonstrated that a general systematic correspondence between quantum commutators and Poisson brackets could not hold consistently. However, he did appreciate that such a systematic correspondence does, in fact, exist between the quantum commutator and a deformation of the Poisson bracket, the Moyal bracket, and, in general, between quantum operators and classical observables and distributions in phase space. He thus finally elucidated the correspondence mechanism, Weyl quantization, that underlies an alternate equivalent mathematical approach to quantization known as deformation quantization. The group H3(ℝ) generated by exponentiation of the Lie algebra specified by these commutation relations, [x, p] = iℏ, is called the Heisenberg group. According to the standard mathematical formulation of quantum mechanics, quantum observables such as x and p should be represented as self-adjoint operators on some Hilbert space. It is relatively easy to see that two operators satisfying the above canonical commutation relation cannot both be bounded: take the trace of both sides and use Trace(AB) = Trace(BA); one gets a finite number on the right and zero on the left. (More directly, note that [xⁿ, p] = iℏ n xⁿ⁻¹, hence 2‖p‖‖x‖ⁿ ≥ nℏ‖x‖ⁿ⁻¹, so that 2‖p‖‖x‖ ≥ nℏ for every n, which is impossible for bounded operators; using the Weyl relations below, it can actually be shown that both operators are unbounded.) These canonical commutation relations can be rendered somewhat "tamer" by writing them in terms of the (bounded) unitary operators exp(itx) and exp(isp). The resulting braiding relations for these are the so-called Weyl relations

exp(itx) exp(isp) = exp(−iℏst) exp(isp) exp(itx).

The corresponding group commutator is then

exp(itx) exp(isp) exp(−itx) exp(−isp) = exp(−iℏst).

The uniqueness of the canonical commutation relations between position and momentum is then guaranteed by the Stone–von Neumann theorem.
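The trace argument can be watched numerically. In any finite-dimensional truncation — here harmonic-oscillator ladder operators cut off at dimension n, with ℏ = 1 — the commutator reproduces iℏ on the diagonal except at the boundary, and its trace vanishes identically, which is exactly why no pair of bounded (let alone finite-dimensional) operators can satisfy [x, p] = iℏ. A small NumPy sketch, not part of the original article:

```python
import numpy as np

hbar = 1.0
n = 20  # truncation dimension

# Truncated ladder operator: a|k> = sqrt(k)|k-1>.
a = np.diag(np.sqrt(np.arange(1, n)), k=1)
x = np.sqrt(hbar / 2) * (a + a.T)
p = 1j * np.sqrt(hbar / 2) * (a.T - a)

comm = x @ p - p @ x                      # would be i*hbar*I in infinite dimension
print(np.round(comm.diagonal().imag, 3))  # hbar everywhere except the last entry
print("trace:", np.trace(comm))           # exactly 0, since Tr(AB) = Tr(BA)
```

The last diagonal entry comes out as −(n − 1)ℏ, the compensating boundary term that forces the trace to zero.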
The simple formula

[x, p] = iℏ,

valid for the quantization of the simplest classical system, can be generalized to the case of an arbitrary Lagrangian L. We identify canonical coordinates (such as the position x in the example above, or a field Φ(x) in the case of quantum field theory) and canonical momenta π_x (in the example above the momentum p, or, more generally, functions involving the derivatives of the canonical coordinates with respect to time):

π_i = ∂L/∂(∂q_i/∂t).

This definition of the canonical momentum ensures that one of the Euler–Lagrange equations has the form

∂π_i/∂t = ∂L/∂q_i.

The canonical commutation relations then amount to

[q_i, p_j] = iℏ δ_ij,   [q_i, q_j] = 0,   [p_i, p_j] = 0,

where δ_ij is the Kronecker delta. Further, it can easily be shown that [F(q), p_i] = iℏ ∂F/∂q_i for any function F of the coordinates. Gauge invariance Canonical quantization is applied, by definition, on canonical coordinates. However, in the presence of an electromagnetic field, the canonical momentum p is not gauge invariant. The correct gauge-invariant momentum (or "kinetic momentum") is

p_kin = p − qA   (SI units),   p_kin = p − qA/c   (cgs units),

where q is the particle's electric charge, A is the vector potential, and c is the speed of light. Although the quantity p_kin is the "physical momentum", in that it is the quantity to be identified with momentum in laboratory experiments, it does not satisfy the canonical commutation relations; only the canonical momentum does that. This can be seen as follows. The non-relativistic Hamiltonian for a quantized charged particle of mass m in a classical electromagnetic field is (in cgs units)

H = (1/2m)(p − qA/c)² + qφ,

where A is the three-vector potential and φ is the scalar potential. This form of the Hamiltonian, as well as the Schrödinger equation Hψ = iℏ ∂ψ/∂t, the Maxwell equations and the Lorentz force law, are invariant under the gauge transformation

A → A′ = A + ∇Λ,   φ → φ′ = φ − (1/c) ∂Λ/∂t,   ψ → ψ′ = exp(iqΛ/ℏc) ψ,

where Λ = Λ(x, t) is the gauge function. The angular momentum operator is

L = r × p

and obeys the canonical quantization relations

[L_i, L_j] = iℏ ε_ijk L_k,

defining the Lie algebra for so(3), where ε_ijk is the Levi-Civita symbol. Under gauge transformations the expectation value of the angular momentum picks up a gauge-dependent term, ⟨L⟩ → ⟨L⟩ + (q/c)⟨r × ∇Λ⟩, so L is not gauge invariant. The gauge-invariant angular momentum (or "kinetic angular momentum") is given by

K = r × (p − qA/c),

whose commutation relations acquire an extra term involving the magnetic field B = ∇ × A. The inequivalence of these two formulations shows up in the Zeeman effect and the Aharonov–Bohm effect. Angular momentum operators From L_x = y p_z − z p_y, etc., it follows directly from the above that

[L_i, L_j] = iℏ ε_ijk L_k,

where ε_ijk is the Levi-Civita symbol, which simply reverses the sign of the answer under pairwise interchange of the indices. An analogous relation holds for the spin operators. All such nontrivial commutation relations for pairs of operators lead to corresponding uncertainty relations, involving positive semi-definite expectation contributions from their respective commutators and anticommutators. In general, for two Hermitian operators A and B, consider the expectation values in a system in the state ψ, with the variances around the expectation values defined as (ΔA)² = ⟨(A − ⟨A⟩)²⟩, etc. Then

ΔA ΔB ≥ (1/2) √( |⟨[A, B]⟩|² + |⟨{A − ⟨A⟩, B − ⟨B⟩}⟩|² ),

where [A, B] ≡ AB − BA is the commutator of A and B, and {A, B} ≡ AB + BA is the anticommutator. This follows through use of the Cauchy–Schwarz inequality, since |⟨A²⟩| |⟨B²⟩| ≥ |⟨AB⟩|², and AB = ([A, B] + {A, B})/2; and similarly for the shifted operators A − ⟨A⟩ and B − ⟨B⟩. (Cf. uncertainty principle derivations.)
Judicious choices for A and B yield Heisenberg's familiar uncertainty relation for x and p, as usual. Here, for L_x and L_y, in angular momentum multiplets ψ = |ℓ, m⟩, one has

⟨L_x²⟩ = ⟨L_y²⟩ = (ℓ(ℓ + 1) − m²) ℏ²/2,

so the above inequality, applied to the commutator [L_x, L_y] = iℏ L_z, yields useful constraints such as a lower bound on the Casimir invariant, ℓ(ℓ + 1) ≥ m(m + 1), and hence ℓ ≥ m, among others.
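The ℓ(ℓ + 1) ≥ m(m + 1) bound can be checked directly with matrices. A small sketch (ℏ = 1, not part of the original article) using the standard spin-1 operators and the stretched state |ℓ = 1, m = 1⟩, for which the uncertainty bound is saturated:

```python
import numpy as np

hbar = 1.0
s = np.sqrt(2) / 2
# Standard spin-1 matrices in the |m = +1, 0, -1> basis.
Lx = hbar * np.array([[0, s, 0], [s, 0, s], [0, s, 0]])
Ly = hbar * np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Lz = hbar * np.diag([1.0, 0.0, -1.0])

psi = np.array([1.0, 0.0, 0.0])  # the |l=1, m=1> state

def variance(A, v):
    mean = np.vdot(v, A @ v).real
    return np.vdot(v, A @ A @ v).real - mean**2

lhs = np.sqrt(variance(Lx, psi) * variance(Ly, psi))
rhs = 0.5 * hbar * abs(np.vdot(psi, Lz @ psi))  # (hbar/2)|<Lz>|
print(lhs, ">=", rhs)  # 0.5 >= 0.5: the bound is saturated for m = l
```

Both variances come out as (ℓ(ℓ + 1) − m²)ℏ²/2 = ℏ²/2, matching the formula quoted above.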
LOG#245. What is fundamental? Some fundamental mottos: Fundamental spacetime: no more? Fundamental spacetime falls: no more? Fundamentalness vs emergence(ness) is an old fight in Physics. Another typical mantra is not Shamballa but the old endless debate between what theory is fundamental (or basic) and what theory is effective (or derived). Dualities in superstring/M-theory changed what we usually mean by fundamental and derived, just as the AdS/CFT correspondence or map changed what we knew about holography and dimensions/forces. Generally speaking, physics is about observables, laws, principles and theories. These entities (objects) have invariances or symmetries related to dynamics and kinematics. Changes or motions of the entities, being the fundamental (or derived) degrees of freedom of different theories and models, provide relationships between them, with some units, magnitudes and systems of units being more suitable for calculations. Mathematics is similar (even when it is more pure). Objects are related to theories, axioms and relations (functors and so on). Numbers are the key of mathematics, just as they measure changes in forms or functions that serve us to study geometry, calculus and analysis from different abstract-like viewpoints. The cross-over between Physics and Mathematics is called Physmatics. The merger of physics and mathematics is necessary and maybe inevitable to understand the whole picture. Observers are related to each other through transformations (symmetries) that also hold for force fields. Different frameworks are allowed in such a way that the true ideal world becomes the real world. Different universes are possible in mathematics and physics, and thus in physmatics too. Interactions between universes are generally avoided in physics, but they are a main keypoint for mathematics and the duality revolution (yet unfinished). Is SR/GR relativity fundamental? Is QM/QFT fundamental? Are fields fundamental? Are the fundamental forces fundamental? Is there a unique fundamental force and force field? Is symplectic mechanics fundamental? What about Nambu mechanics? Is the spacetime fundamental? Is momenergy fundamental? Newtonian physics is based on the law

(1)   \begin{equation*} F^i=ma_i=\dfrac{dp_i}{dt} \end{equation*}

Relativistic mechanics generalizes the above equation into a 4d set-up:

(2)   \begin{equation*} \mathcal{F}=\dot{\mathcal{P}}=\dfrac{d\mathcal{P}}{d\tau} \end{equation*}

with p_i=mv_i and \mathcal{P}=M\mathcal{V}. However, why not change the newtonian law into

(3)   \begin{equation*}F_i=ma_0+ma_i+\varepsilon_{ijk}b^ja^k+\varepsilon_{ijk}c^jv^k+c_iB^{jk}a_jb_k+\cdots\end{equation*}

(4)   \begin{equation*}\vec{F}=m\vec{a}_0+m\vec{a}+\vec{b}\times\vec{a}+\vec{c}\times\vec{v}+\vec{c}\left(\vec{a}\cdot\overrightarrow{\overrightarrow{B}} \cdot \vec{b}\right)+\cdots\end{equation*}

Quantum mechanics is yet a mystery after a century of success! The principle of correspondence

(5)   \begin{equation*} p_\mu\rightarrow -i\hbar\partial_\mu \end{equation*}

allows us to arrive at commutation relationships like

(6)   \begin{align*} \left[x^j,p_k\right]=i\hbar\delta^j_{\;\; k}\\ \left[L^i,L^j\right]=i\hbar\varepsilon_{k}^{\;\; ij}L^k\\ \left[x_\mu,x_\nu\right]=\Theta_{\mu\nu}=iL_p^2\theta_{\mu\nu}\\ \left[p_\mu,p_\nu\right]=K_{\mu\nu}=iL_{\Lambda}\kappa_{\mu\nu} \end{align*}

where the last two lines are the controversial space-time uncertainty relationships, if you consider that space-time is fuzzy at the fundamental level.
Many quantum gravity approaches suggest it. Let me focus now on the case of emergence and effectiveness. Thermodynamics is a macroscopic part of physics, where the state variables internal energy U, enthalpy H, entropy S, and the free energies F and G play a big role in our knowledge of the extrinsic behaviour of bodies and systems. BUT, statistical mechanics (pioneered by Boltzmann in the 19th century) showed us that those macroscopic quantities are derived from a microscopic formalism based on atoms and molecules. Therefore, black hole thermodynamics points out that there is a statistical physics of spacetime atoms and molecules that brings us the black hole entropy and, ultimately, space-time as a fine-grained substance. The statistical physics of quanta (of action) provides the basis for field theory in the continuum. Fields are a fluid-like substance made of stuff (atoms and molecules). Dualities? Well, yet a mystery: they seem to say that the forces or fields you need to describe a system are dimension dependent. Also, the fundamental degrees of freedom are entangled or mixed (perhaps we should say mapped) from one theory into another. I will speak about some analogies: 1st. Special Relativity (SR) involves the invariance of objects under Lorentz (more generally speaking, Poincaré) symmetry: X'=\Lambda X. Physical laws, electromagnetism and mechanics, should be invariant under Lorentz (Poincaré) transformations. That was later exported to the strong and weak forces in QFT. 2nd. General Relativity (GR). Adding the equivalence principle to the picture, Einstein explained gravity as curvature of spacetime itself. His field equations for gravity can be stated in words as the motto Curvature equals Energy-Momentum, in some system of units. Thus, geometry is related to dislocations in matter and vice versa: changes in the matter-energy distribution are due to geometry or gravity. Changing our notion of geometry will change our notion of spacetime and the effect on matter-energy. 3rd. Quantum mechanics (non-relativistic). Based on the correspondence principle and the idea of matter waves, we can build up a theory in which particles and waves are related to each other. Commutation relations arise: \left[x,p\right]=i\hbar, p=h/\lambda, and the Schrödinger equation follows, H\Psi=E\Psi. 4th. Relativistic quantum mechanics, also called Quantum Field Theory (QFT). Under gauge transformations A\rightarrow A+d\varphi, wavefunctions are promoted to field operators, where particles and antiparticles are both created and destroyed, via     \[\Psi(x)=\sum a^+u+a\overline{u}\] Fields satisfy wave equations, F(\phi)=f(\square)\Phi=0. Vacuum is the state with no particles and no antiparticles (really this is a bit more subtle, since you can have fluctuations), and the vacuum is better defined as the maximal symmetry state, \ket{\emptyset}=\sum F+F^+. 5th. Thermodynamics. The 4 or 5 thermodynamical laws follow from state variables like U, H, G, S, F. The absolute zero can NOT be reached. Temperature is defined in thermodynamical equilibrium. dU=\delta(Q+W), \dot{S}\geq 0. Beyond that, S=k_B\ln\Omega. 6th. Statistical mechanics. Temperature is a measure of the kinetic energy of atoms and molecules. Energy is proportional to frequency (Planck). Entropy is a measure of how many different configurations a microscopic system can have. 7th. Kepler problem. The two-body problem can be reduced to a single one-body, one-centre problem. It has hidden symmetries that make it integrable.
In D dimensions, the Kepler problem has a hidden O(D) (SO(D) after a simplification) symmetry. Beyond energy and angular momentum, you get a conserved vector called the Laplace-Runge-Lenz-Hamilton eccentricity vector. 8th. Simple Harmonic Oscillator. For a single HO you also have a hidden symmetry, U(D) in D dimensions. There is an additional symmetric tensor that is conserved. 9th. Superposition and entanglement. Quantum Mechanics taught us about the weird quantum reality: quantum entities CAN exist simultaneously in several space positions at the same time (thanks to quantum superposition). Separable states are not entangled. Entangled states are non-separable. Wave functions of composite systems can sometimes be entangled AND non-separable into two subsystems. Information is related, as I said in my second log post, to the sum of signal and noise. The information flow follows from a pattern and a dissipative term in general. Classical geometry involves (real) numbers, which can be related to matrices (orthogonal transformations or galilean boosts or space translations). Finally, tensors are inevitable in gravity and in the riemannian geometry on which GR is built. This realness can be compared with the complex geometry necessary in Quantum Mechanics and QFT. Wavefunctions are generally complex-valued functions, and they evolve unitarily in complex quantum mechanics. Quantum d-dimensional systems are qudits (quinfits, or quits for short, is an equivalent name for a quantum field, an infinite-level quantum system):

(7)   \begin{align*} \vert\Psi\rangle=\vert\emptyset\rangle=c\vert\emptyset\rangle=\mbox{Void/Vacuum}\\ \langle\Psi\vert\Psi\rangle=\vert c\vert^2=1 \end{align*}

(8)   \begin{align*} \vert\Psi\rangle=c_0\vert 0\rangle+c_1\vert 1\rangle=\mbox{Qubit}\\ \langle\Psi\vert\Psi\rangle=\vert c_0\vert^2+\vert c_1\vert^2=1\\ \vert\Psi\rangle=c_0\vert 0\rangle+c_1\vert 1\rangle+\cdots+c_{d-1}\vert d-1\rangle=\mbox{Qudit}\\ \sum_{i=0}^{d-1}\vert c_i\vert^2=1 \end{align*}

(9)   \begin{align*} \vert\Psi\rangle=\sum_{n=0}^\infty c_n\vert n\rangle=\mbox{Quits}\\ \langle\Psi\vert\Psi\rangle=\sum_{i=0}^\infty \vert c_i\vert^2=1:\mbox{Quantum fields/quits} \end{align*}

(10)   \begin{align*} \vert\Psi\rangle=\int_{-\infty}^\infty dx\, f(x)\vert x\rangle:\mbox{conquits/continuum quits}\\ \mbox{Quantum fields}: \int_{-\infty}^\infty \vert f(x)\vert^2 dx = 1,\;\; f\in L^2(\mathcal{R}) \end{align*}

0.1. SUSY The Minimal Supersymmetric Standard Model pairs every Standard Model particle with a superpartner. To go beyond the SM (BSM), and to try to explain vacuum energy, the cosmological constant, the hierarchy problem, dark matter, dark energy, to unify radiation with matter, and other phenomena, long ago we created the framework of supersymmetry (SUSY). Essentially, SUSY is a mixed symmetry between space-time symmetries and internal symmetries. SUSY generators are spinorial (anticommuting Grassmann numbers). Ultimately, SUSY generators are bivectors or, more generally speaking, multivectors. The square of a SUSY transformation is a space-time translation. Why SUSY anyway? There was another motivation, at least before the new cosmological constant problem (lambda is not zero but very close to zero): the alternative justification of SUSY has to do with the vacuum energy. Indeed, originally, SUSY could explain why lambda was zero. Not anymore, and we do need to break SUSY somehow; but breaking SUSY introduces a vacuum energy into the theories.
Any superalgebra (supersymmetric algebra) has generators P_\mu, M_{\mu\nu}, Q_\alpha. In vacuum, QFT says that fields are a set of harmonic oscillators. For spin j, the vacuum energy becomes

(52)   \begin{equation*} \varepsilon_0^{(j)}=\dfrac{\hbar \omega_j}{2} \end{equation*}

(53)   \begin{equation*} \omega_j=\sqrt{k^2+m_j^2} \end{equation*}

The vacuum energy associated to all the oscillators is

(54)   \begin{equation*} E_0^{(j)}=\sum \varepsilon_0^{(j)}=\dfrac{1}{2}(-1)^{2j}(2j+1)\displaystyle{\sum_k}\hbar\sqrt{k^2+m_j^2} \end{equation*}

Taking the continuum limit, we have the vacuum master integral, the integral of cosmic energy:

(55)   \begin{equation*} E_0(j)=\dfrac{1}{2}(-1)^{2j}(2j+1)\int_0^\Lambda d^3k\sqrt{k^2+m_j^2} \end{equation*}

Developing the square root in powers of m/k up to 4th order, we get

(56)   \begin{equation*} E_0(j)=\dfrac{1}{2}(-1)^{2j}(2j+1)\int_0^\Lambda d^3k k\left[1+\dfrac{m_j^2}{2k^2}-\dfrac{1}{8}\left(\dfrac{m_j^2}{k^2}\right)^2+\cdots\right] \end{equation*}

(57)   \begin{equation*} E_0(j)=A(j)\left[a_4\Lambda^4+a_2\Lambda^2+a_{log}\log(\Lambda)+\cdots\right] \end{equation*}

If we want absence of quartic divergences, associated to the cosmological constant and the UV cut-off, we require

(58)   \begin{equation*} \tcboxmath{ \sum_j(-1)^{2j}(2j+1)=0} \end{equation*}

If we want absence of quadratic divergences, due to the masses of particles as quantum fields, we need

(59)   \begin{equation*} \tcboxmath{\sum_j(-1)^{2j}(2j+1)m_j^2=0} \end{equation*}

Finally, if we require that there are no logarithmic divergences, associated to the behavior at long distances and renormalization, we impose

(60)   \begin{equation*} \tcboxmath{\sum_j(-1)^{2j}(2j+1)m_j^4=0} \end{equation*}

These 3 sum rules are verified if, simultaneously, we have

(61)   \begin{equation*} N_B=N_F \end{equation*}

(62)   \begin{equation*} M_B=M_F \end{equation*}

that is, an equal number of bosons and fermions, and the same masses for all the boson and fermion modes. These conditions are satisfied by SUSY, but the big issue is that the SM is NOT supersymmetric, and the masses of the known particles don't seem to verify the above sum rules, at least not in a trivial fashion. These 3 relations do appear, in fact, in supergravity and maximal SUGRA in eleven dimensions. We know that 11D supergravity is the low energy limit of M-theory. SUSY must be broken at some energy scale, and we don't know where, why or how. In maximal SUGRA, at the level of 1 loop, we have indeed those 3 sum rules plus another one. In compact form, they read

(63)   \begin{equation*} \tcboxmath{\sum_{J=0}^{2}(-1)^{2J}(2J+1)(M^{2}_J)^k=0,\;\;\; k=0,1,2,3} \end{equation*}

Furthermore, these sum rules imply, according to Scherk, that there is a non-zero cosmological constant in maximal SUGRA. \textbf{Exercise}. Prove that the photon, gluon or graviton energy density can be written as a master integral of the type above, and that the energy density of a massive fermion field of mass m takes an analogous form with the opposite sign. Compare the physical dimensions in both cases. 0.2. Extra dimensions D-dimensional gravity in newtonian form reads:

(64)   \begin{equation*} F_G=G_N(D)\dfrac{Mm}{r^{D-2}} \end{equation*}

Compactifying the extra dimensions:

(65)   \begin{equation*} F_G=G_N(D)\dfrac{Mm}{L^Dr^2} \end{equation*}

and then

(66)   \begin{equation*} \tcboxmath{ G_4=\dfrac{G_N(D)}{L^D}} \end{equation*}

or, with  M_P^2=\dfrac{\hbar c}{G_N},

(67)   \begin{equation*} \tcboxmath{M_P^2=V(XD)M_\star^2} \end{equation*}

Thus, the weakness of gravity is explained by dimensional dilution.
Similarly, for gauge fields:

(68)   \begin{equation*} \tcboxmath{ g^2(4d)=\dfrac{g^2(XD)}{V_X}} \end{equation*}
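Equation (67) can be turned into numbers. The sketch below uses the ADD-style convention M_P^2 = (2πR)^n · M_*^(n+2) for n toroidal extra dimensions — one specific choice of the volume bookkeeping written compactly above as V(XD)M_*^2 — with hypothetical inputs n = 2 and M_* = 10 TeV:

```python
import math

M_P = 1.22e19          # GeV, the 4d Planck mass
n, M_star = 2, 1.0e4   # assumed: 2 extra dimensions, M_* = 10 TeV
# Solve M_P^2 = (2*pi*R)^n * M_star^(n+2) for the radius R.
R = (M_P**2 / M_star**(n + 2)) ** (1.0 / n) / (2 * math.pi)
print(f"R = {R:.2e} GeV^-1 = {R * 1.973e-16:.2e} m")  # hbar*c = 1.973e-16 GeV*m
```

With these inputs the compactification radius comes out near the micron scale, illustrating how a huge 4d Planck mass can emerge from a much lower fundamental scale through dimensional dilution.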
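Finally, a quick numerical aside on the sum rules (58)–(60) above. A massive chiral multiplet — two real scalars plus one Majorana fermion, all of mass m — makes all three sums vanish; the sketch below is a toy check with illustrative numbers, not a realistic spectrum:

```python
# Toy check of the SUSY sum rules sum_j (-1)^(2j) (2j+1) m_j^(2k) = 0,
# for k = 0, 1, 2, over a massive chiral multiplet: two real scalars
# plus one Majorana fermion, all of mass m (hypothetical value).
m = 173.0
multiplet = [(0.0, m), (0.0, m), (0.5, m)]   # (spin j, mass)

for k in (0, 1, 2):
    s = sum((-1) ** int(2 * j) * (2 * j + 1) * mass ** (2 * k)
            for j, mass in multiplet)
    print(f"k = {k}: sum = {s:.6g}")
```

Repeating the loop with unequal masses makes the k = 1 and k = 2 sums nonzero while the k = 0 sum still vanishes, since it only counts degrees of freedom — which is exactly why broken SUSY reintroduces a vacuum energy.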
Archive | Academic RSS feed for this section What is Loop Quantum Gravity? 15 Mar Loop Quantum Gravity (also known as Canonical Quantum General Relativity) is a quantization of General Relativity (GR), including its conventional matter couplings. It merges General Relativity and Quantum Mechanics without extra speculative assumptions (e.g., no extra dimensions, just 4 dimensions; no strings; no assumption that space is formed by individual discrete points). LQG has no ambition to unify the forces, to add more than 4 spacetime dimensions, or to include supersymmetry [1]. In this sense, LQG has a less ambitious research program than String Theory, its biggest competitor. General Relativity envisages spacetime and the gravitational field as the same entity, "spacetime" itself, which in many ways can be seen as a physical object analogous to the electromagnetic field. Quantum Mechanics (QM) was formulated by means of an external time variable t, as it appears in the Schrödinger equation

i\hbar\frac{\partial\Psi}{\partial t}=\hat{H}\Psi.

Fig. 1 below shows the solutions of the Schrödinger equation when applied to the simplest atom in Nature, the hydrogen atom, where the potential is V\propto\frac{1}{r}. For electrons bound to the positive nucleus, the energy is quantized according to the law E_n=-\frac{13.6\ \text{eV}}{n^2}, and the wave functions \Psi are given by the mathematical functions shown in Figure 1. However, in General Relativity (GR) this external time (represented above by the letter t) is incompatible, because the role of time becomes dynamical; time is no longer absolute (as Sir Isaac Newton once stated) but is relative to a frame of measurement. In addition, GR was originally formulated by Albert Einstein in the framework of Riemannian geometry, where it is assumed that the metric is a smooth and deterministic dynamical field (Fig. 2). Fig. 2 – Example of a Riemann surface. This raises an immediate problem, since QM requires that any dynamical field be quantized, that is, be made of discrete quanta that follow probabilistic laws… This would mean that we should treat quanta of space and quanta of time… All the known forces in the universe have been quantized, except gravity. The first approach to the quantization of gravity consists of writing the gravitational field as the sum of two terms, a background field and a perturbation h(x), so that the full metric g_{\mu \nu} is

g_{\mu\nu}(x)=\eta_{\mu\nu}(x)+h_{\mu\nu}(x)

where \eta_{\mu \nu} represents the background spacetime (normally Minkowski) and h_{\mu \nu} a perturbation of the field (representing the graviton). Minkowski space united space and time as a single entity, introducing the new concept of the space-time manifold, where two nearby points are separated by

ds^2= c^2 dt^2-d \bf{x}^2.

The problem resides in the intrinsic difficulty that this approach faces when describing extreme astrophysical scenarios (near a black hole) or cosmological ones (the Big Bang singularity). The inconsistency between GR and QM becomes clearer when looking at the Einstein equation of GR:

R_{\mu \nu} - \frac{1}{2}g_{\mu \nu}R=\kappa T_{\mu \nu}(g)

R_{\mu \nu} is the Ricci curvature tensor, R is the scalar curvature, and T_{\mu \nu} is the energy-momentum tensor, with \kappa \equiv \frac{8 \pi G}{c^4}.
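Before unpacking the two sides of this equation, it is worth seeing how absurdly weak the coupling κ is in SI units — one quick way to appreciate why regimes where both GR and QM matter are so extreme. A trivial evaluation with standard constants:

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2, Newton's constant
c = 2.998e8    # m/s, speed of light
kappa = 8 * math.pi * G / c**4
print(f"kappa = {kappa:.3e} s^2 m^-1 kg^-1")  # about 2.08e-43
```

Enormous energy-momentum densities are therefore needed to curve spacetime appreciably.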
LQG avoids any background metric structure (described by the metric g), choosing a background-independent approach along the lines of Roger Penrose's spin networks, in which a system is supposed to be built of discrete "units" (anything about the system can be known on purely combinatorial principles) and everything is purely relational (avoiding the direct use of space and time…). In GR, spacetime is represented as a well-defined grid of lines, even if curved in the presence of a massive body. In LQG, spacetime is instead represented in a background-independent way: the geometry is not fixed but is a spin network of nodes labelled by field quantities and angular momenta, more like a mesh of polygons; spacetime is a derived concept rather than a pre-existing structure on which events take place.

This representation of fields has the advantage of capturing not only their intrinsic attributes but also their induction attributes. That is, the field quantities depend not only on the point where they exist but also on the neighboring points connected by a line. That is why the mathematical idea that best expresses this representation is the holonomy of the gauge potential A along the loop \alpha, U(A,\alpha), given by the (path-ordered) exponential

U(A,\alpha) = \mathcal{P}\exp\int_0^{2\pi} ds\, A_a(\alpha(s))\, \frac{d\alpha^a(s)}{ds}.

NB – These books can be downloaded free from the site.

[1] Carlo Rovelli, Quantum Gravity (Cambridge University Press, Cambridge, 2004)

Tribalistic Science – what is it?… 25 Jun

It is hard to believe, but Science is becoming a tribalistic activity, with hubs inside groups of people with similar interests, who defend themselves and their publications as if we were living in a doomsday era. Yes, we have a new type of science: TRIBALISTIC SCIENCE. The indexes that institutions and governments use to measure the level of interest in tribalistic science are pseudo-indexes that measure:

• the number of publications, disregarding the number of authors (those working at CERN can publish 100 or more papers in one year, among hundreds of co-authors, a must for a scientific career);
• the number of papers on the same subject, even if continued over a lifetime, guaranteeing the authors a place on the podium and possibly the Nobel Prize (this level is much harder to attain due to the related politics and the involvement of governments and institutions, eager for prestige);
• the way authors in the same exact field cite each other, increasing exponentially the number of citations, disregarding the real usefulness of the work (some of them explicitly ask you to cite them for better impact on their institutions);
• …and really do not measure the future impact of new ideas, their potential, or whether an author working alone is striving to invent or develop new concepts that may, now or in the future, contribute to the welfare of humanity;
• possibly, more significant items could be added; it's up to you, reader.

This is my definition of the new type of science that is being done worldwide with huge success, even as it destroys the public faith in science…

Since the Bomb exploded over Hiroshima, the prestige of science in the United States has mushroomed like an atomic cloud. In schools and colleges, more students than ever before are choosing some branch of science for their careers.
Military budgets earmarked for scientific research have never been so fantastically huge. Books and magazines devoted to science are coming off the presses in greater numbers than at any previous time in history. Even in the realm of escape literature, science fiction threatens seriously to replace the detective story. – Martin Gardner, in Fads and Fallacies in the Name of Science

[1] Martin Gardner, Fads and Fallacies in the Name of Science
[2] Mario J. Pinheiro, The Art of Academia Guerrilla – how Academia can help society to progress (ISBN-13: 978-1514370612, CreateSpace-Assigned)

The Laws of Causality and… Synchronicity 21 Jun

Human opinions are children's toys – Heraclitus

-Yes? said Cranly absently.

Wolfgang Pauli's 45th birthday.

[1] Oswald Spengler, Man and Technics
[2] David Peat, Synchronicity
{2} Excerpt from James Joyce's Stephen Hero
{4} Towards One World

Leo Szilard and the Mark Gable Foundation 16 Sep

"Glory and praise to you, Satan, in the heights of Heaven, where you reigned, and in the depths of Hell, where, vanquished, you dream in silence! Grant that my soul may one day rest near you, beneath the Tree of Knowledge, at the hour when, upon your brow, its branches will spread like a new Temple!" – Charles Baudelaire, in Les Fleurs du Mal [1]

The survival of Science is in danger: Technology and Bureaucracy have taken its place. How can we recreate the conditions that gave birth to the wonderful discoveries of the last century, discoveries that endured until the 1950s with the discovery of the laser? This is a current and fundamental problem that strikes our civilization.

"Only daring have made the main contributions to Science. Notwithstanding this, the contemporary heavy administration in the system of modern scientific education and scientific research still suppresses everyone who wishes to develop new and productive ideas. That is why the freedom of scientific work and the free initiative of original studies should undeniably be defended, because only these factors were and will be the most productive sources of great progress of Science" – Louis de Broglie, Necessity of Freedom of Scientific Work, Annales de la Fondation Louis de Broglie, vol. 4, nº 1, p. 62 (1979)

Fermi–Szilard neutronic reactor. Image credit: Wikipedia

But here we are talking about a singular man of science: Leo Szilard, the great physicist of Hungarian descent who, after Hiroshima, left physics to dedicate himself to biology and science fiction… Szilard was born in Hungary in 1898 and came to Berlin in 1919 to study with Albert Einstein and Max von Laue, becoming a friend of Eugene Wigner. In 1938 he went to the United States, having rapidly recognized the evil nature of Nazism. In Chicago he worked with Enrico Fermi at the Metallurgical Laboratory, in the framework of the Manhattan Project. At this laboratory Leo Szilard developed the concept of the nuclear chain reaction into a nuclear reactor, and he patented the idea together with another great physicist, Enrico Fermi (people of this kind are disappearing from our societies, unfortunately; there is no place for them); see their patent [1a].

US postage stamp honoring Enrico Fermi. | Leo Szilard.

At a certain moment Szilard felt the need to eliminate the sinister side of his nuclear reactor and the atomic bomb, not being pleased with the political turn of the Manhattan Project. What was his idea about Big Science? «It was not the kind of physics I like, and I even wonder if it is physics at all».
In a series of tales, Szilard describes with humor a meeting with a millionaire, Mark Gable, who got rich from a sperm bank. We quote now a part of the tale appearing in Leo Szilard's book "The Voice of the Dolphins and Other Stories" [2].

"Would you intend to do anything for the advancement of science?" I asked. "No," Mark Gable said. "I believe scientific progress is too fast as it is." "I share your feeling about this point," I said with the fervor of conviction, "but then why not do something about the retardation of scientific progress?" "That I would very much like to do," Mark Gable said, "but how do I go about it?" "Well," I said, "I think that shouldn't be very difficult. As a matter of fact, I think it would be quite easy. You could set up a foundation, with an annual endowment of thirty million dollars. Research workers in need of funds could apply for grants, if they could make out a convincing case. Have ten committees, each composed of twelve scientists, appointed to pass on these applications. Take the most active scientists out of the laboratory and make them members of these committees. And the very best men in the field should be appointed as chairmen at salaries of fifty thousand dollars each. Also have about twenty prizes of one hundred thousand dollars each for the best scientific papers of the year. This is just about all you would have to do. Your lawyers could easily prepare a charter for the foundation. As a matter of fact, any of the National Science Foundation bills which were introduced in the Seventy-ninth and Eightieth Congress could perfectly well serve as a model."

This question is of huge importance nowadays and must be taken seriously by all of us. What kind of civilization are we leaving to our children? [3]

[1] Charles Baudelaire, Les Fleurs du Mal
[1a] Enrico Fermi and Leo Szilard, Neutronic Reactor, Patent US2708656
[2] Leo Szilard, The Voice of the Dolphins and Other Stories
[3] Tragic Science of Leo Szilard, by Roy Scott Sheffield

The Art of Scientific Illustration 21 Apr

One Picture is Worth Ten Thousand Words – Confucius

Create, as if your life depends upon it – Jessie Shaw

There is hardly a more familiar artifact of modern life than the so-called scientific illustration. That is, the diagram or picture in isometric or linear perspective, with notations for scale and measurement, which shows how machines or houses or even human beings are put together and taken apart, and how they work. Who, indeed, has never depended on such an illustration for assembling a Christmas bicycle or a Sears & Roebuck porch swing (not to mention for constructing an atomic reactor or preparing for open heart surgery)? So taken for granted is the ubiquitous scientific illustration that few scholars have ever sensed that it has any historical interest. – Samuel Y. Edgerton, Jr. [1]

Leonardo's drawing of an ornithopter. He was inspired by the observation of birds in flight, and this drawing is considered the first scientific illustration. Image credit: NASA/Photo Researchers, Inc.

The art of scientific drawing is an irreplaceable method for the better apprehension of ideas and a way to prepare the ground for new discoveries. All along the history of science we know of great geniuses who relied on drawing to discover or better express their ideas. The first men of science to illustrate their writings were Leonardo da Vinci (see the ornithopter), Francis Bacon, Galileo, William Harvey, and Descartes.
Leonardo was very eager to keep his secrets, not wanting them to fall into the wrong hands. For this reason he preferred a left-handed "mirror writing" in his notebooks, while employing conventional handwriting in ordinary communication (in his letters). Movement specialist Grant Ramey [2] maintains that Da Vinci used mirror writing because he was passionate about symmetry and the human form in art and science. Apparently, Da Vinci wrote in his notebooks from right to left, with his left hand, in order to keep thinking (instead of remaining focused on his own writing); see here.

A scientific illustration is an important form of art that intends, at the same time, to accurately transmit scientific knowledge. Interestingly, Goethe is quoted as having said that you really do not see a plant until you actually draw it…

Why should we start to draw figures in our intellectual and aesthetic activities? In order to understand the power of drawing, let us first quote another great man, Thomas H. Huxley, since it is from such men that we learn:

«[…] I should, in the first place, secure that training of the young in reading and writing, and in the habit of attention and observation, both to that which is told them, and that which they see, which everybody agrees to. But in addition to that, I should make it absolutely necessary for everybody, for a longer or shorter period, to learn to draw.»

In the Meno, the famous dialogue between Socrates and one of Meno's slaves (a boy) — see Ref. [3] for the complete dialogue — it is clear that by drawing, and with the right questions, we may "recall" the knowledge we have in our minds. This famous dialogue depicts the problem of teaching science, in fact a very old topic in the philosophy of science: the problem of the "tacit knowledge" that we all may eventually possess. We quote next a short part of this important dialogue, led by Socrates while drawing on the ground.

«Meno: Yes, Socrates. But what do you mean by this, that we do not learn and what is called learning is recollection? Can you teach me that this is so? […] Meno: Certainly. Step forward here. Socrates: Now, is he Greek and speaks Greek? Meno: Absolutely. He was born in the house. Socrates: Then pay close attention to see whether he seems to recollect or to be learning from me. M: I certainly will. So: Tell me, boy, do you know that a square is like this? [Socrates draws a square on the ground, see 1] Slave: I do. So: And so a square has these lines, four of them, all equal? [see 2] Slave: Of course. So: And these ones going through the center are also equal? [see 3] Slave: Yes. So: And so there would be larger and smaller versions of this area? [see 4] Slave: Certainly. So: Now, if this side were two feet and this side two feet also, how many feet would the whole be? Look at it like this: if this one were two feet but this one only one foot, wouldn't the area have to be two feet taken once? [see 5] Slave: Yes. So: When this one is also two feet, there would be twice two? Slave: There would. So: An area of twice two feet? Slave: Yes. So: How much are twice two feet? Calculate and tell me. [see 6] Slave: Four, Socrates. [see 7] So: Couldn't there be one different from this, doubled, but of the same kind, with all the lines equal, as in that one? [see 8] Slave: Yes. So: And how many feet in the area? Slave: Eight. So: Now could one draw another figure double the size of this, but similar, that is with all its sides equal like this one? [see 9] Slave: Yes. So: How many feet will its area be?
Slave: Eight. So: Now then, try to tell me how long each of its sides will be. The present figure has a side of two feet. What will be the side of the double-sized one? Slave: It will be double, Socrates, obviously. So: You see, Meno, that I am not teaching him anything, only asking. Now he thinks he knows the length of the side of the eight-feet square. MENO: Yes. So: But does he? Meno: Certainly not. So: He thinks it is twice the length of the other. MENO: Yes. So: Now watch how he recollects things in order — the proper way to recollect.

Archimedes was also known to write on whatever surfaces he had at hand: on sawdust-covered floors, on the sand, drawing geometric shapes in the ashes of extinguished fires. That is why the majority of Archimedes' drawings are forever lost. He spent hours seated on the floor, as most geometers of his time did, since it was too expensive to scribble on a papyrus and then throw it away.

Galileo Galilei, with the help of his telescope (which he built and greatly improved), also made the first drawings of the Moon. The consequences were controversial, since the Catholic Church saw in his sketches of the Moon, an irregular surface full of craters, proof that the heavenly bodies were not perfect, as previously supposed. But the dialectical fight between science and religion was just beginning {5}. The old war between the Catholic Church and science made the popes suspicious of scientific findings and induced them to create the Vatican Observatory, with headquarters at the papal summer residence in Castel Gandolfo, Italy, outside Rome. Quite surprisingly, they also have a research center, the Vatican Observatory Research Group, hosted by Steward Observatory at the University of Arizona, Tucson, USA. Located at the Mount Graham International Observatory in southeastern Arizona, the Vatican possesses the 1.8 m Alice P. Lennon Telescope with its Thomas J. Bannan Astrophysics Facility, known together as the Vatican Advanced Technology Telescope. Vatican astronomers said recently that it is okay for people to believe in ETs [5].

Hiero II calling Archimedes to fortify Syracuse; Archimedes was considered at the time a great mind in matters of military strategy. Painting by Sebastiano Ricci.

Galileo's first drawing of the Moon.

Remark that by drawing you may understand the Pythagorean theorem (see also here). We must not lose sight of the fact that analytical equations represent spatial structures; our mind must deal with this "hidden" aspect of the mathematical formalism. This is most important for people working in visual science, like computer programming {2}.

Research by Professor Shaaron Ainsworth of the University of Nottingham's School of Psychology, with colleagues from La Trobe and Deakin Universities in Australia, has shown that students learn better when they are encouraged to draw, a method which helps students visualize abstract concepts, recall them, and more easily engage in communicating with each other. Teachers at school should endeavor to teach and encourage students to draw what they have learned, since this is a powerful method for apprehending any subject and a powerful process for transmitting ideas to other people {4}.

According to Horst Bredekamp [6], it was Galileo's ability to draw that allowed him to understand Nature better; thanks to his artistic abilities, he could see better than others not gifted in the arts of illustration. [1] Samuel Y.
Edgerton, Jr., in "The Renaissance Development of the Scientific Illustration"
[2] Science and Education, Thomas H. Huxley
[3] Meno, by Plato
[4] Scientific Illustrations, by John L. Ridgway
[5] Vatican Astronomers Say It's Okay to Believe in ETs, by Nancy Atkinson
[6] Galileo in Context (p. 180), edited by Jürgen Renn
{1} The Craft of Scientific Illustrations [contains important advice on how to draw a scientific illustration]
{2} The importance of drawing
{3} Drawing pictures key to learn science
{4} Drawing and doodling can help you learn science
{5} Catholic Church and Science

Superconductivity and its applications 15 Mar

"If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations of creatures, what statement would contain the most information in the fewest words? I believe it is that… all things are made of atoms." – Richard Feynman, in The Feynman Lectures on Physics

The idea of the absolute zero of temperature was advanced by J. A. C. Charles and J. L. Gay-Lussac (Fig. 1). They showed that, by extrapolating their measurements of the volume of any gas as a function of temperature (at constant pressure), the volume tends toward zero at a temperature near −273 degrees Celsius (Fig. 2: by extrapolation of the curve of volume versus temperature, Charles and Gay-Lussac predicted the existence of the absolute zero).

In 1911 Kamerlingh Onnes discovered that mercury suddenly loses its resistivity when the temperature drops to −260 degrees Celsius. Onnes had started investigating the field of low-temperature physics in 1908, when he succeeded in liquefying helium. With these findings the field of superconductivity was opened, bringing us the comprehension of astounding phenomena characterized by the manifestation of the wave nature of particles at a macroscopic scale.

Cooling a body toward zero kelvin is not, by the way, the only route to very low friction. Russian scientists achieved a striking reduction of friction with a simple experiment: they launched a steel sphere over a surface made of molybdenum and sulphur while beaming a cloud of electrons at the surface, and observed the friction coefficient drop from 0.9 to 0.0015!

Fundamental Properties of Superconductors

The superconducting or superfluid phase is a different thermodynamic phase from the normal state that exists at temperatures T > T_c, where T_c is the critical temperature below which the superconducting or superfluid state appears. For example, liquid 4He under its vapor pressure becomes superfluid at T_c = 2.17 K. The passage from one state to the other is called a phase transition, and it is accompanied by a strong increase of the specific heat when the temperature is near T_c, with a strong release of entropy. This experimentally observed phenomenon leads to the formation of a more ordered state of matter (see also the preceding video).

Fundamentally, a superconductor is a conductor that has undergone a phase transition to a lower-energy state below the critical temperature T_c, characterized by the appearance of groups of paired electrons, the so-called Cooper pairs, which carry electrical current without any resistance and are responsible, among other properties, for perfect diamagnetism. We may distinguish two types of superconductors.
A Type I superconductor is characterized by the following properties [1]:

• zero electrical resistance and perfect diamagnetism at temperatures below T_c (at temperatures above T_c the material is a normal metal, although not a very good conductor);
• perfect diamagnetism, also called the Meissner effect: the magnetic field stays outside the material and cannot penetrate it.

Curiously enough, if you apply an external magnetic field B_app above a given critical magnetic field B_c, the material undergoes a transition from the superconducting to the normal state. An approximate functional dependence of this critical field on temperature is given by

B_c(T) \approx B_c(0)\left[1 - (T/T_c)^2\right].

And what is the diamagnetic property of matter? This property is a kind of negative magnetism. The effect was studied in the framework of classical mechanics by Paul Langevin in 1905 [2], using earlier and revolutionary ideas proposed by André-Marie Ampère [3] and Wilhelm Weber [4] (nowadays we barely talk about these two great men of science, who really used their minds for the advancement of science and the progress of mankind…). Langevin found, in the classical framework provided by Ampère and Weber, that for N electrons per unit volume moving in orbits around the nucleus with mean square radius \langle r^2 \rangle, the (constant and negative) magnetic susceptibility \chi is given by

\chi = -\frac{\mu_0 N e^2 \langle r^2 \rangle}{6 m_e}.

The magnetic susceptibility is the ratio M/H, where M is the magnetization field and H the magnetic field. The susceptibility \chi is slightly negative for diamagnets, acquires small positive values for paramagnetic substances, and is strongly positive for ferromagnetic substances (e.g., Fe). It can be shown that a material constituted of paramagnetic ions with magnetic moment \mu obeys the Curie–Weiss law

\chi = \frac{n \mu_0 \mu^2}{3 k_B (T - \Theta)},

where n is the concentration of paramagnetic ions and \Theta is the Curie–Weiss constant. When perfect diamagnetism is achieved, \chi = -1; that is, the magnetization M is directed opposite to the H field, cancelling it: M = -H. For example, when a superconductor of spherical form is placed near the poles of a magnet, the result is a superposition of the applied magnetic field B_app and the induced dipole field (Fig. 1a), giving a curvature of the magnetic field lines of the form shown in Fig. 1b. The dipole field results when a uniform permanent magnetization M is parallel to the axis Oz (see Section 5.10 in the textbook of J. D. Jackson [4]).

Besides perfect diamagnetism, the other important property of the superconducting state is its zero resistance. Under ideal conditions, an electric current established in a loop of superconducting wire will last indefinitely. The surface resistance of the material, with a current flowing along a film of thickness d, must satisfy a condition involving the electric resistivity \rho, the Planck constant h, and the absolute charge e of the electron.

Methods of Quantum Field Theory

In quantum field theory (QFT) the Lagrangian plays a central role in working out physical problems mathematically, and from it the dynamical field equations are generated. In the Lagrangian formulation of classical mechanics, the equations of motion of a system of particles can be obtained from a special functional called the action. Usually the Lagrangian depends on the positions and velocities of the particles,

L = L(q_i, \dot{q}_i), \qquad S = \int L\, dt.

Hamilton's principle of least action states that Nature prefers to follow motions that extremize this quantity called the action.
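To make Hamilton's principle concrete, here is a minimal numerical sketch, added for illustration (the oscillator, the grid size, and the endpoints are arbitrary assumptions of the example): we discretize the action of a one-dimensional harmonic oscillator and let a generic optimizer extremize it over paths with fixed endpoints. The extremal path reproduces the classical sine trajectory to within the discretization error.

```python
import numpy as np
from scipy.optimize import minimize

# Harmonic oscillator with m = k = 1: L = (1/2) qdot^2 - (1/2) q^2.
N, T = 60, 2.0              # grid points and total time (illustrative choices)
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]
q0, qT = 0.0, 1.0           # fixed endpoints of the path

def action(q_inner):
    """Discretized action S = sum over segments of (kinetic - potential) * dt."""
    q = np.concatenate(([q0], q_inner, [qT]))
    qdot = np.diff(q) / dt
    kinetic = 0.5 * qdot**2
    potential = 0.5 * ((q[:-1] + q[1:]) / 2.0) ** 2  # midpoint rule
    return np.sum((kinetic - potential) * dt)

# Start from a straight line between the endpoints and extremize the action.
guess = np.linspace(q0, qT, N)[1:-1]
res = minimize(action, guess, method="BFGS")

# Classical solution with these endpoints: q(t) = qT * sin(t) / sin(T).
q_classical = qT * np.sin(t) / np.sin(T)
q_numeric = np.concatenate(([q0], res.x, [qT]))
print(f"max deviation from classical path: {np.max(np.abs(q_numeric - q_classical)):.2e}")
```

(For T < π the classical path of this oscillator is a genuine minimum of the action, so plain minimization suffices; for longer times it is only a saddle point.)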
In QFT this formulation can be translated by replacing the classical quantities by a "field", in the manner of Table 1: in field theory, the fields \varphi(x,t) play the role of the generalized coordinates \{q_i(t)\}, where the discrete index i labels the coordinates of the system. In local field theories the Lagrangian may be written as the integral of a Lagrangian density \mathcal{L}, depending on the set of fields present at a given point of space and on their first derivatives:

L = \int d^3x\, \mathcal{L}(\varphi, \partial_\mu \varphi).

When one extremizes the functional called the action, one obtains the Euler–Lagrange equations for each field:

\partial_\mu \left(\frac{\partial \mathcal{L}}{\partial(\partial_\mu \varphi)}\right) - \frac{\partial \mathcal{L}}{\partial \varphi} = 0.

The figure below represents several possible "trajectories", or "histories", associated with a given dynamical field. But Nature prefers only one of them, the "history" singled out by the Euler–Lagrange equations above… As Pierre Fermat wrote in a letter of 1662 to M. de la Chambre: Natura operatur per modos faciliores et expeditiones (Nature works by the easiest and readiest means).

[Figure: various possible "histories" that a physical system may undergo; Nature prefers the one that extremizes the action.]

Maupertuis, who was the first to enunciate the principle of least action, argued that it showed the wisdom of the Creator. Maupertuis believed that the vis viva (which today is twice the kinetic energy) should be minimal: "The action is proportional to the product of mass and velocity and space. Now here is this principle, so wise, so worthy of the Supreme Being: once a change occurs in nature, the amount of action employed for this variation is always as small as possible."

Pierre Louis Moreau de Maupertuis was a French mathematician and astronomer, born July 7, 1698 in Saint-Malo, and died in Basel on July 27, 1759. Interestingly, he was the son of a pirate.

Let us take as an example (for non-mathematicians, a somewhat complex problem that can nevertheless be grasped as an illustration of the methods of QFT) the Lagrangian of a physical system composed of matter plus the electromagnetic field, from which one can recover the essential properties of the superconducting state. Schematically, it contains a matter-field term and an electromagnetic-field term,

\mathcal{L} = \underbrace{(D_\mu \psi)^*(D^\mu \psi) - V(|\psi|^2)}_{\text{matter field}} \; \underbrace{-\ \tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}}_{\text{EM field}}, \qquad D_\mu = \partial_\mu + i e A_\mu,

with the conserved current j^\mu \propto \psi^* D^\mu \psi - \psi (D^\mu \psi)^* acting as the source of the electromagnetic field. To follow.

[1] Handbook of Superconductivity, Charles P. Poole, Jr. (Academic Press, San Diego, 2000)
[2] Paul Langevin
[3] André-Marie Ampère, Essai sur la Philosophie des Sciences
[4] J. D. Jackson, Classical Electrodynamics
[FN1] Annales de Chimie et Physique, 5 (1905); see the paper by Paul Langevin at pp. 70–127

PART 2 – To follow soon…

Pataphysics: the beginning of a new science… 13 Feb

All mankind has an instinctive desire for knowledge. This is illustrated by our enjoyment of our sense-perceptions. – Aristotle, in his first book on Metaphysics

"Blind and unwavering undiscipline at all times constitutes the real strength of all free men." – Alfred Jarry (1873–1907)

"Duration is the transformation of a succession into a reversion. In other words: THE BECOMING OF A MEMORY." – Alfred Jarry (1873–1907), from How to Construct a Time Machine

"God is the tangential point between zero and infinity." – Alfred Jarry (1873–1907)

Aristotle wrote fourteen books on the ultimate conceptions of philosophy. In fact, he entitled them "the lectures that come after the lectures on Physics" (the lectures on Physics comprise eight books).
He dedicated his thoughts to being: what a being is in regard to its existence, independently of any secondary quality that may qualify it. The name "Metaphysics" was given by the editor of his works on first philosophy, because they appear after the researches done on the physical world.

Aristotle's school at Athens, in a fresco painted by Raphael, located at the Apostolic Palace, Vatican City. Image credit: Wikipedia

Aristotle understood that merely searching around the physical world would not give him any clue about the universe and being, about the essential attributes of existence. Mathematics and Physics only treat the laws of the universe based on ideal assumptions: lines, points, planes, space, time. But their meaning is not enough to approach a functional understanding of life, its purpose, and the marvellous meaning of Being and Time. One wonderful example of such thinking can be witnessed in this video of Martin Heidegger talking about his understanding of what a being is and our relationship with it. The problem of being is not only philosophical; it should also be a concern to physicists (how is matter created locally? how did the universe appear, apparently from nothing, from a Big Bang?…). This is well stated by William James: "How comes the world to be here at all instead of the nonentity which might be imagined in its place? … from nothing to being there is no logical bridge."

But nobody could have imagined that, later, Alfred Jarry would invent a new area of thought called Pataphysics (a contraction of the pseudo-Greek term τὰ ἐπὶ τὰ μεταφυσικά, ta epi ta metaphusika, "that which is above metaphysics"). Jarry intended with this denomination to suggest a humorous variation on Aristotle's "Metaphysics", so that Pataphysics means "that which is above that which is after physics". Jarry loved cycling.

Despite the disaster of the opening of the play Ubu Roi [2], with its shocking first uttered word, "merdre" (watch the video, in French), and the boxing scene that followed in the audience, there is here an interesting attempt at conceptualizing a phenomenon rarely spoken of and even less studied. Pataphysics is actually a critique of science and an attempt "to subvert the procrustean constraints of science" [3]. It is a critique of society as a whole: turned to useless purposes, full of vanity, given to grandiloquent speeches, while its "scientific effort" brings no results of interest to humankind. It is a critique of "science" as it is done nowadays, producing "reports" that end up in managers' archives, useless; a "science" that is not concerned with understanding Nature, nor with the meaning of being (and its enormous importance to our fulfillment as human beings), but a "science" that serves a nomenclature that uses it for self-promotion, as a living commerce (science sells). In a delusional work criticizing the scientific establishment, tempted by big shows and by impressing the public, feigning to be an astrophysicist or some such specialist, Jarry wrote "How to Build a Time Machine". Another work full of genius is "Exploits and Opinions of Dr. Faustroll, Pataphysician" [5].
Pataphysics will examine the laws governing exceptions, and will explain the universe supplementary to this one; or, less ambitiously, will describe a universe which can be – and perhaps should be – envisaged in the place of the traditional one, since the laws that are supposed to have been discovered in the traditional universe are also correlations of exceptions, albeit more frequent ones, but in any case accidental data which, reduced to the status of unexceptional exceptions, possess no longer even the virtue of originality. – Alfred Jarry, in Ref. [5], p. 854

It is not a surprise to see that, in this era of fake reality, where things appear to be serious due to marketing and the fragility of people's minds, there is a growing "[…] body of art that adopts the language and trappings of officialdom. Examples include Los Angeles's Center for Land Use Interpretation, the Center for Tactical Magic from Oakland, Calif., and, locally, the Institute for Infinitely Small Things, the National Bitter Melon Council, and the Institute for Applied Autonomy." [Blog1]

These social and cultural phenomena are a reaction of society to the bad use of science, or rather, to the contempt with which real science is treated by governments and "managers" looking for numbers: statistics of things they do not fully grasp, statistics that aim only at the immediate use of technology and huge profits for corporations, instead of being concerned with the preparation of minds (particularly of the young generation) and with the quest for the fundamental laws of the universe, which might help humanity escape from war, poverty and delusion. That is why the great French philosopher Jean Baudrillard called Pataphysics the imaginary science of the excess of emptiness and insignificance [4]. Clearly this new kind of non-philosophy is anti-establishment without being truly political.

Two asteroids crossed the space between the Earth and the Moon recently. It could be too late to avoid disaster…

It is but due to the benevolence of the gods that we are still here, writing on the Web, reading, thinking, and fighting for a better and more dignified world, since otherwise we are not really protected from cosmic disasters, as remains clear from the recent event of two asteroids crossing the space between the Earth and the Moon. It could be too late to avoid disaster, because science is not being taken seriously, as it should be.

[1] Aristotle on his predecessors, being the first book of his Metaphysics
[2] Ubu Roi, manuscript by Alfred Jarry
[3] Christian Bök, Pataphysics: the poetics of an imaginary science
[4] Pataphysica, by Cal Clements
[5] Exploits and Opinions of Dr. Faustroll, Pataphysician, in Poems for the Millennium: the University of California Book of Romantic and Post-Romantic Poetry, Vol. III

Other Blogs: [Blog1] Cabinet of Wonders | Musée Patamécanique | Institutum Pataphysicum Londiniense
Atomic units

In Hartree atomic units, the speed of light is approximately 137. Atomic units are often abbreviated "a.u." or "au", not to be confused with the same abbreviation used also for astronomical units, arbitrary units, and absorbance units in different contexts.

Use and notation

• "m = 3.4~m_\text{e}". This is the clearest notation (but least common), where the atomic unit is included explicitly as a symbol.[2]
• "m = 3.4~\mathrm{a.u.}" ("a.u." means "expressed in atomic units"). This notation is ambiguous: here it means that the mass m is 3.4 times the atomic unit of mass. But if a length L were 3.4 times the atomic unit of length, the equation would look the same, "L = 3.4~\text{a.u.}" The dimension needs to be inferred from context.[2]
• "m = 3.4". This notation is similar to the previous one and has the same dimensional ambiguity. It comes from formally setting the atomic units to 1, in this case m_\text{e} = 1, so that 3.4~m_\text{e} = 3.4.[3][4]

Fundamental atomic units

Dimension | Name | Symbol / Definition | Value in SI units [5]
mass | electron rest mass | m_e | 9.10938291(40)×10⁻³¹ kg
charge | elementary charge | e | 1.602176565(35)×10⁻¹⁹ C
(electric constant)⁻¹ | Coulomb force constant | k_e = 1/(4πε₀) | 8.9875517873681×10⁹ kg·m³·s⁻²·C⁻²

Related physical constants

Name | Symbol / Definition | Value in atomic units
proton mass | m_p | m_p/m_e ≈ 1836

Derived atomic units

Below are given a few derived units. Some of them have proper names and symbols assigned, as indicated in the table. k_B is the Boltzmann constant.

Dimension | Name | Symbol | Expression | Value in SI units | Value in more common units
length | bohr | a₀ | 4πε₀ℏ²/(m_e e²) = ℏ/(m_e c α) | 5.2917721092(17)×10⁻¹¹ m [6] | 0.052917721092(17) nm = 0.52917721092(17) Å
energy | hartree | E_h | m_e e⁴/(4πε₀ℏ)² = α² m_e c² | 4.35974417(75)×10⁻¹⁸ J | 27.211 eV = 627.509 kcal·mol⁻¹
velocity | | | a₀E_h/ℏ = αc | 2.1876912633(73)×10⁶ m·s⁻¹ |
force | | | E_h/a₀ | 8.2387225(14)×10⁻⁸ N | 82.387 nN = 51.421 eV·Å⁻¹
electric field | | | E_h/(e a₀) | 5.14220652(11)×10¹¹ V·m⁻¹ | 5.14220652(11) GV·cm⁻¹ = 51.4220652(11) V·Å⁻¹
electric potential | | | E_h/e | 2.721138505(60)×10¹ V |
electric dipole moment | | | e a₀ | 8.47835326(19)×10⁻³⁰ C·m | 2.541746 D

SI and Gaussian-CGS variants, and magnetism-related units

There are two common variants of atomic units, one where they are used in conjunction with SI units for electromagnetism, and one where they are used with Gaussian-CGS units.[7] Although the units written above are the same either way (including the unit for electric field), the units related to magnetism are not; the two variants differ by a factor of α. In the SI system, the atomic unit for magnetic field is ℏ/(e a₀²) ≈ 2.35×10⁵ T, so that the Bohr magneton is

\mu_\text{B} = \frac{e \hbar}{2 m_\text{e}} = 1/2 a.u.,

while in Gaussian-based atomic units the field unit is e/a₀² ≈ 1.72×10³ T, so that [9]

\mu_\text{B} = \frac{e \hbar}{2 m_\text{e} c} = \alpha/2 \approx 3.6\times 10^{-3} a.u.

Bohr model in atomic units

• Orbital velocity = 1
• Orbital radius = 1
• Angular momentum = 1
• Orbital period = 2π
• Ionization energy = 1/2
• Electric field (due to nucleus) = 1
• Electrical attractive force (due to nucleus) = 1

Non-relativistic quantum mechanics in atomic units

The Hamiltonian for the electron in the hydrogen atom in SI units is

\hat H = -\frac{\hbar^2}{2 m_\text{e}} \nabla^2 - \frac{1}{4 \pi \epsilon_0} \frac{e^2}{r},

while atomic units transform the preceding equation into

\hat H = -\frac{1}{2} \nabla^2 - \frac{1}{r}.

Notes and references

• Shull, H.; Hall, G. G. (1959). "Atomic Units".
[5] "The NIST Reference on Constants, Units and Uncertainty". National Institute of Standards and Technology. Retrieved 1 April 2012.
[6] "The NIST Reference on Constants, Units and Uncertainty". National Institute of Standards and Technology. Retrieved 21 January 2014.
[7] "A note on Units" (PDF). Physics 7550 — Atomic and Molecular Spectra. University of Colorado lecture notes.
[8] Chis, Vasile. "Atomic Units; Molecular Hamiltonian; Born-Oppenheimer Approximation" (PDF). Molecular Structure and Properties Calculations. Babes-Bolyai University lecture notes.

External links

• CODATA internationally recommended values of the fundamental physical constants
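A small Python check of these tables, added for illustration, using scipy.constants for the CODATA values (the dictionary keys used below are assumed to be the standard CODATA entry names):

```python
import math
from scipy.constants import physical_constants as pc
from scipy.constants import hbar, m_e, e, epsilon_0

# Derive the bohr and the hartree from the defining expressions in the
# "Derived atomic units" table, then compare with tabulated CODATA values.
a0_derived = 4 * math.pi * epsilon_0 * hbar**2 / (m_e * e**2)
Eh_derived = m_e * e**4 / (4 * math.pi * epsilon_0 * hbar)**2

a0_codata = pc["Bohr radius"][0]
Eh_codata = pc["Hartree energy"][0]
alpha = pc["fine-structure constant"][0]

print(f"bohr:    derived {a0_derived:.9e} m, CODATA {a0_codata:.9e} m")
print(f"hartree: derived {Eh_derived:.9e} J, CODATA {Eh_codata:.9e} J")

# The speed of light in Hartree atomic units is 1/alpha ~ 137, as stated above.
print(f"c in atomic units: {1 / alpha:.6f}")

# One hartree expressed in eV (~27.211 eV, matching the table).
print(f"1 hartree = {Eh_derived / e:.4f} eV")
```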
Skip to content Quantum Refutations and Reproofs May 12, 2012 One of Gil Kalai’s conjectures refuted but refurbished Niels Henrik Abel is famous for proving the impossibility of solving the quintic equation by radicals, in 1823. Finding roots of polynomials had occupied mathematicians for centuries, but unsolvability had scant effort and few tools until the late 1700’s. Abel developed tools of algebra, supplied a step overlooked by Paolo Ruffini (whose voluminous work he did not know), and focused his proof into a mere six journal pages. Today our guest poster Gil Kalai leads us in congratulating Endre Szemerédi, who on May 22 will officially receive the 2012 prize named for Abel. He then revisits his “Conjecture C” from his first post in this series, in response to a draft paper by our other guest poster Aram Harrow with Steve Flammia. Szemerédi’s prize is great news for Discrete Mathematics and Theoretical Computer Science, areas for which he is best known, and this blog has featured his terrific work here and here. The award rivals the Nobel Peace Prize in funds and brings the same handshake from the King of Norway. Gil offers the analogy that Abel’s theorem showed why a particular old technology, namely solution by radicals, could not scale upward beyond the case of degree 4. The group-theoretic technology that superseded it, particularly as formulated by Évariste Galois, changed the face of mathematics. Indeed Abelian groups are at the heart of Peter Shor’s quantum algorithm. Not only did the work by Abel and Galois pre-date the proofs against trisecting angles, duplicating cubes, and squaring circles, it made them possible. Refutations and Revisions Gil’s analogy is not perfect, because quantum computing is hardly an “old” technology, and because currently there is no compelling new positive theory to supersede it. Working toward such a theory is difficult, and there are places where it might be tilting against the power stations of quantum mechanics itself. In this regard, Aram and Steve’s paper provides a concrete counter-example to a logical extension of Gil’s conjecture for the larger quantum theory, in a way that casts doubt on the original. The refutation and revision of conjectures is a big part of the process described by Imre Lakatos in his book Proofs and Refutations, which was previously discussed in this blog. Here, the conjectures are physics conjectures, related to technological capability, and the “proof” and “reproof” process refers to confronting formal mathematical models with (counter-)examples and various checks by observations of Nature. After two sections by Aram and Steve explaining their paper and its significance, Gil assesses the effect on his original Conjecture C and re-assesses its motivation. The latter is reinforced by a line of research begun in 1980 with the following question by Sir Anthony Leggett, who won the Nobel Prize in Physics in 2003: How far do experiments on the so called “macroscopic quantum systems” such as superfluids and superconductors test the hypothesis that the linear Schrödinger equation may be extrapolated to arbitrary complex systems? Leggett’s “disconnectivity measure” in his 1980 paper, “Macroscopic Quantum Systems and the Quantum Theory of Measurement,” was an early attempt to define rigorously a parameter that distinguishes complicated quantum states. (source, ref1, ref2) In this light, Gil formulates two revisions of his conjecture that stay true to his original intents while avoiding the refutation. 
Then I (Ken) review lively comments that continue to further the debate in previous posts in our series. Aram Harrow, with Steve Flammia Recall that Gil defined an entanglement measure {K(\rho)} (there called {ENT}) on a quantum state {\rho} in a particular standard manner, where {\rho} signifies a possibly-mixed state. The statement of Conjecture C then reads, There is a fixed constant {c}, possibly {c = 2}, such that for states {\rho_n} produced by feasible {n}-qubit quantum computers, \displaystyle K(\rho_n) = O(n^c). Here the technical meaning of “feasible” depends on which models of noisy quantum computers reflect the true state and capability of technology, and is hard for both sides to pin down. We can, however, still refute the conjecture by finding states {\rho} that by consensus ought to be feasible—or at least to which the barriers stated by Kalai do not apply—for which {K(\rho)} is large. Our point of attack is that there is nothing in the definition of {K(\rho)} or in the motivation expressed for the conjecture that requires {\rho} to be an {n}-fold aggregate of binary systems. Quantum systems that represent bits, such as up/down or left/right spin, are most commonly treated, but are not exclusive to Nature. One can equally well define basic ternary systems, or 4-fold or 5-fold or {d}-fold, not even mandating that {d} be prime. Ternary systems are called qutrits, while those for general {d} are called qudits. The definition of {n}-qudit mixed states {\rho} allows {K(\rho)} to be defined the same way, and we get the same conjecture statement. Call that Conjecture C’. As Gil agrees, our note shows unconditionally that Conjecture C’ is false, for any {d} as low as {d = 8}. Theorem 1 There exist intuitively feasible {n}-qudit states {\rho_n} on a 2-dimensional grid for which {K(\rho) = 2^{2n/3 - o(n)}}. It is important to note that with {d=8} we cannot simply declare that we have a system on {3n} qubits, because we cannot assume a decomposition of a qudit state via tensor products of qubit states. Indeed when the construction in our note is attempted with qubits, the resulting states {\rho'_n} have {K(\rho'_n) \sim n^2.} However, our construction speaks against both the ingredients and the purpose of the original Conjecture C. What the Conjecture is Driving At Conjectures of this kind, as Steve and I see it, are attempts at what Scott Aaronson calls a “Sure/Shor separator.” By his definition that would distinguish states we’ve definitely already seen how to produce from the sort of states one would require in any quantum computer achieving an exponential speedup over (believed) classical methods. It represents an admirable attempt to formulate QC skepticism in a rigorous and testable way. However, we believe that our counterexamples are significant not especially because they refute Conjecture C, but because they do so while side-stepping Gil’s main points about quantum error correction failing. More generally, we think that it’s telling that it’s so hard to come up with a sensible version of Conjecture C. In our view, this is because quantum computers harness phenomena, such as entanglement and interference, that are already ubiquitous. Nature makes them relatively hard to control, but it is also hard to focus sensibly on what about the control itself is difficult. The formulations of Conjecture C and related obstacles instead find themselves asserting the difficulty of creating rather than controlling. 
Of course they are trying to get at the difficulty of creating the kinds of states needed for controlling, but the formulations still wind up trying to block the creation of phenomena that "just come naturally." In our view, the situation is similar to ones in classical computing. A modern data center exists in a state of matter radically unlike anything ever seen in pre-industrial times. But if you have to quantify this with a crude observable, then it's hard to come up with anything that wasn't already seen in much simpler technology, like light bulbs. Our note can be thought of as showing that Conjecture C refers to a correlation measure that is high not only for full-scale quantum computers, but even for the quantum equivalent of light bulbs—technology that is non-trivial, but by no means complex.

Gil Again: Revising Conjecture C

One of the difficult aspects of my project is to supply mathematical engines for the conjectures, which were initially expressed in informal English terms and with physical intuition. For example, in Conjecture 4 we need to define "highly entangled qubits" and "error-synchronization" formally. This crucial technical part of the project, which is the most time-consuming, witnessed much backtracking. It happened with initial formulations for Conjecture 4 that failed when extended from qubits to qudits, which was indeed a reason for me to dismiss them and look for a more robust one, and this has guided me with other conjectures. Aram and Steve's example suffices to look for another formal way to express the idea behind Conjecture C.

While rooted in quantum computer skepticism, Conjecture C expresses a common aim to find a dividing line between physical quantum states in the pre- and post-universal quantum computer eras. When Aram's grandchildren ask him, "Grandpa, how was the world before quantum computers?" he could reply: "I hardly remember, but thanks to Gil we had some conjectures regarding the old days"—and the grandchildren will burst into laughter about the old days of difficult entanglements.

Conjecture C expresses the idea that "complicated pure states" cannot be approached by noisy quantum computers. More specifically, the conjecture asserts that quantum states that can be realistically created by quantum computers are "{k}-local" where {k} is bounded (and perhaps is even quite small). But to formally define {k}-locality is a tricky business. (Joe Fitzsimons' 2-locality suggestions in comments beginning here and extending a long way down are related to this issue.) We can be guided by the motivation stated on the first page of the paper by Anthony Leggett mentioned above, for his "disconnectivity measure" which intends to distinguish two kinds of quantum states:

Familiar "macroscopic quantum phenomena" such as flux quantization and the Josephson effect [correspond to states having very low] disconnectivity, while the states important to a discussion of the quantum theory of measurement have a very high value of this property.

Leggett has stayed active with this line of work in the past decade, and it may be informative to develop further the relation to his problems of quantum measurement and problems in quantum computation. In this general regard, let me discuss possible new mathematical engines for the censorship conjecture.

Conjecture C For Codes

Error-correcting codes are wonderful mathematical objects, and thinking about codes is always great.
Quantum error-correcting codes will either play a prominent role in building universal quantum computers or in explaining why universal quantum computers cannot be built, whichever comes first. The map I try to draw is especially clear for codes:

Conjecture C for codes: For some (small) constant {c}, pure states representing quantum error-correcting codes capable of correcting {c}-many errors cannot be feasibly approximated by noisy quantum computers.

As in the original version of Conjecture C, our notion of approximation is based on qubit errors. Conjecture 1 in the original post asserts that for every quantum error-correcting code we can only achieve a cloud of states, rather than essentially a Dirac delta function, even if we use many qubits for encoding. The expected qubit errors of the noisy state compared to the intended state can still be a small constant. Conjecture C for codes asserts that when the code corrects many errors, then this cloud will not even concentrate near a single code word. Here "many" may well be three or even two.

Conjecture D for Depth

Conjecture C for codes deals only with special types of quantum states. What can describe general pure states that cannot be approximated?

Conjecture D: For some (small) constant {d}, pure states on {n} qubits that can be approximated by noisy quantum computers can be approximated by depth-{d} quantum circuits.

Here we adopt the ordinary description of quantum circuits where in each round some gates on disjoint sets of one or two qubits are performed. Unlike the old Conjecture C, which did not exclude cluster states and thus could not serve as a Sure/Shor separator in Scott Aaronson's strict sense, the new Conjecture D may well represent such a separator in the strict sense that it does not allow efficient factoring. It deviates from the direction of earlier versions of Conjecture C since it is based on computation-theoretic terms.

The new Conjecture D gives poetic justice to bounded-depth circuits. In classical computation, bounded-depth circuits of polynomial size give a mathematically fascinating yet pathetically weak computational class. In quantum computation this may be a viable borderline between reality and dream.

In the Comments

The comments section of the "Quantum Super-PAC" post has seen an extremely lively discussion, for which we profusely thank all those taking part. We regret that currently we can give only the barest enumeration of some highlights—we envision a later summary of what has been learned.

Discussion of a possible refutation of Gil's conjectures via 2-local properties started in earnest with this comment by Joe Fitzsimons. See Gil's replies here and here, and further exchanges beginning next. John Sidles outlined a mathematical approach to the conjectures beginning here. Hal Swyers moved to clarify the physics involved in the discussions. Then John Preskill reviewed the goings-on, including 2-locality and the subject of Lindblad evolution as used by Gil and discussed extensively above, and continued here to head a new thread. Swyers picked up on questions about the size of controllable systems here and in a second part here. Gil outlined a reply recently here. Meanwhile, Gil rejoined a previous post's discussion of the rate of error with a comment in the "Super-PAC" post here. Alexander Vlasov re-opened the question of whether the conjectures don't already violate linearity. Sidles raised a concrete example related to earlier comments by Mikhail Katkov here.
Then Gil related offline discussions with David Deutsch here. Gil has recently reviewed the debate on his own blog. He and Jim Blair also mentioned some new papers and articles beginning here. On the technological side, Steve Flammia noted on the Quantum Pontiff blog that ion-trap technology has taken a big leap upward in scale for processes that seem hard to simulate classically, though the processes would need to be controlled more to support universal quantum computation.

Open Problems

Propose a version for Conjecture C or D, or explain why such a conjecture is misguided to start with.

108 Comments leave one →

1. May 12, 2012 9:41 am
I wish some computer scientist would write about Alexander Grothendieck.

• John Sidles permalink May 12, 2012 7:18 pm
Do I feel my leg being gently pulled? 🙂

• Serge permalink May 12, 2012 7:52 pm
Ah yes, what a genius! I agree he'd have done marvels in computer science, even though his revolutionary achievements in algebra, geometry, topology, categories, philosophy, let alone his political involvements, are well enough for one lifetime. 🙂

• John Sidles permalink May 12, 2012 9:56 pm
Serge, a newly published book well worth reading is Elaine Riehm's and Frances Hoffman's Turbulent Times in Mathematics: The Life of J. C. Fields and the History of the Fields Medal (2012), in which we find the following quotation from David Hilbert, who at the 1928 ICM endeavored to restore the bonds of mathematical collegiality that had been shattered by the First World War:

"Let us consider that we as mathematicians stand on the highest pinnacle of the cultivation of the exact sciences. We have no other choice than to assume this highest place, because all limits, especially national ones, are contrary to the nature of mathematics. It is a complete misunderstanding of our science to construct differences according to people and races, and the reasons for which this has been done are very shabby ones. Mathematics knows no races. … For mathematics, the whole cultural world is a single country."

The sobering failure of Hilbert's 1928 efforts was to become evident in the sad circumstances of Hilbert's own death, and the desperate circumstances of Grothendieck's own childhood, in the heart of the Third Reich. Are present-day circumstances any less sobering than those of Hilbert's era, and of Grothendieck's era? Is the role of mathematics any less central? Appreciation and thanks are due to Elaine Riehm and Frances Hoffman, for writing a book that helps us to ponder these questions.

• May 12, 2012 10:04 pm
We have a scheme to talk about Grothendieck sometime in the summer.

• John Sidles permalink May 13, 2012 11:30 am
Hoorah! Ken, that's exciting to look forward to! Serge's post was correct (IMHO): the successes and failures of Grothendieck's wonderful enterprises are confounding, delightful, disturbing, and instructive for all STEM disciplines.

• Serge permalink May 13, 2012 1:20 pm
• May 13, 2012 1:37 pm

2. ramirez permalink May 12, 2012 10:43 am
Quantum space is defined as the extra space when the integration of the matrix P=1 gets off the boundaries. as in P=1 square . the inverse functions on radicals have to comply with the equivalence. not as an Arch function were the real numbers change value on the negative sector. positive negative positive as logic gates for a nano circuit, are god for a logic circuit in a binary system. on the hyperbolic functions c=2 has an exponential on real numbers and a radical in prime numbers.
this problem gave to Enrico Fermi the ability to create a fermion. and Einstein created the solution using 1/2 of the sine 1/2 cosine to create a prime number that can have a solution as a radical and the quantum space could exist without problem. P=NP eliminating the bipolarity on the second factor. when reaches a speeds faster than the speed of light. the particle accelerator does not reach speeds faster than the light. using C=2 if C is the constant of the speed of light as seen in the equation of Apple computers robot Jeffrey 5000 creates a gravitational force. but does not reach the prime interface as described by Einstein were Gravity creates the inverse Matrix of p=1. is different than the Arch Matrix of P=1.

3. Rachel permalink May 12, 2012 12:25 pm
This is a physics problem, and making conjectures with no basis in physics does not make sense. Are you really suggesting that Nature sees that you are trying to prepare a state encoded in a quantum error-correcting code and decides to stop you? I strongly disagree with calling these random, unsupported guesses "conjectures." A conjecture should have at least some reason behind it, not just "gives poetic justice to bounded-depth circuits."

• May 12, 2012 4:46 pm
Hi Rachel, good question! It is natural to ask whether in nature we can witness approximately pure states manifesting long (or high-depth) quantum processes. (Let us even allow unlimited computation power to control the process.) After all, unless there is some fault-tolerant machinery it is hard to see how a "long" quantum process can stay approximately pure. So bounded-depth processes are a natural proposal for the limit of quantum processes that do not manifest quantum fault-tolerance.

4. Serge permalink May 12, 2012 4:16 pm
The impossibility of deciding whether P=NP is a direct consequence of Heisenberg's uncertainty principle.

• May 13, 2012 1:56 pm
• May 13, 2012 1:57 pm
why would anyone even study such a possibility? It is like saying I cannot count n! quickly because of Cauchy-Schwarz!

• May 13, 2012 2:00 pm
why would anyone even study such a possibility? It is like saying to know whether I cannot count n! quickly because of Cauchy-Schwarz is undecidable!

• Serge permalink May 13, 2012 3:03 pm
Not exactly. With P=NP you have a "speed" – that of computer processes – and a "position" – the accuracy of the output result. I believe that the product of the probability for an algorithm to output accurate results by the probability for it to be efficient is lower than some fixed constant. I claim that both phenomena – the one about quantum particles and the one about computer processes – are implied by some more general principle, though I can't write out the details of a relationship between my principle and Heisenberg's.

• ramirez permalink May 13, 2012 7:33 pm
Couchy effect as an absorption coefficient can be used on different conditions, but specially under a gravitational force. as the Schwarzchild equation measures the compression state of the space under gravity force. Here is when Einstein observes that light behaves as matter and is affected by gravity also.calculates the strength of the inertial force produced by the black holes. as a P=NP its assumed that NP can coexist in the same time space, and this condition presumes the existence of a great gravitational force, in the form of antimatter . here N would be the Anti-quark, or anti-proton and P the real number(X), that exists inside the schwarz ring of gravity.
Cauchy measures the intensity and speed of the absorption. However, to consider that the quantum space does exist without that intense gravity would be as uncertain as trying to decide the sex of a baby. Here a radical has to be a proportional exponential to the times square; it means we have to compress two dimensions, one for N and one for P. When C is the constant of the speed of light and you take C=2, you create this uncertainty dilemma. It is only when you consider C=C-squared that the time-space paradox allows you N for quantum space and a real space for P=1. See Schrödinger's cat paradox. The linear space with Riemann and Euler. The integration of two dimensions that allows the existence of antimatter is still under test in the linear particle accelerator, which has failed to prove the existence of antimatter in the Higgs boson concept. The Tevatron cannot go faster than the speed of light. • Serge permalink May 13, 2012 7:42 pm 5. John Sidles permalink May 12, 2012 7:14 pm On Shtetl Optimized, in response to a well-posed question from Ajit R. Jadhav, I described a toolkit for quantum dynamical simulation in which Conjecture C holds true, and yet the framework is sufficiently accurate for many (all?) practical quantum simulation purposes. A bibliography is included. The aggregate toolkit contains perhaps not even one original idea … still it is fun, and useful too, to appreciate how naturally many existing dynamical ideas mesh together. As for whether Nature simulates herself via this toolkit, who knows? The post does sketch an alternative-universe version of the Feynman Lectures that encompasses this eventuality. 6. May 13, 2012 2:11 pm Dear all, Greetings from Lund. I am here for the Crafoord days celebrating the Crafoord Prize being given to Jean Bourgain and Terry Tao. There is a symposium entitled "from chaos to harmony" celebrating the occasion, with live video of the five lectures here. Here are some questions I am curious about regarding the topic of the post: 1) Can somebody explain Leggett's parameter precisely? I remember that when I tried to understand it (naively perhaps) the parameter was large for certain systems with large classical correlations. In any case, I would be happy to see a clear explanation of what the parameter is. 2) What could be potential counterexamples to the suggestion that all natural (approximately) pure evolutions are of (uniformly) bounded depth? 3) Does the note by Aram and Steve give convincing evidence regarding Conjecture C in its original form? I am very thankful to Aram and Steve, and overall I was quite convinced. But I am not entirely sure. This has two parts: a) Is the state W realistic? b) Is Conjecture C' in the form refuted by them (and there is no dispute that their example refutes Conjecture C') the right extension to qudit-operated QC of the qubit version? • May 13, 2012 2:17 pm To add to 1), consider a quantum circuit that maps the all-|0> state to a state f. Is there an easy way—preferably gate-by-gate inductive—to compute Leggett's D-measure of f? • May 13, 2012 10:27 pm a) Is the state W realistic? I would find it hard to think of a reason why it wouldn't be. It's essentially what you get when a single photon is absorbed by a gas cloud, or when you put a single photon through a diffraction grating. • aramharrow permalink May 13, 2012 11:33 pm Joe, you probably know this stuff better than me, but for a gas cloud of N atoms, doesn't the temperature have to scale like 1/log(N)? For photons that's also true, but I think with a better prefactor.
For example, modes of an X-ray probably have very little thermal noise in them. • May 13, 2012 11:49 pm Hi Aram, I was thinking of things like vapor cell quantum memories, which store the quantum state essentially as a W-state and have been demonstrated with reasonable fidelities. While certainly these are essentially constant-sized devices, the constant is enormous. • aramharrow permalink May 13, 2012 11:51 pm Cool, thanks! • ramirez permalink May 15, 2012 12:17 pm The W-state as a receptor does absorb wavelength frequencies, and they are used as synthetic retinas for digital cameras; they do absorb light and release it. The main trick here is that the photon is turned into an electric current as in the solar panel arrays, so in this way the encoded information can be transcribed into zeros and ones. The Bose-Einstein condensate obtains the harmonic state of some gases when they are under pressure and a near-absolute-zero K temperature. The solid state receptors for wide band antennas do work on microwaves, capturing and releasing the information that is in the air. However, the antenna position loses its grip on the sine of the wave, so the new W-state receptors have multiple positions on fractal arrays to correct this problem, as in your cellular phone. • aramharrow permalink May 13, 2012 11:47 pm For Leggett's parameter, it's crucial that the parameter "a" be taken to be <1/2, so that classical systems always have disconnectivity equal to 0. If you take it to be 1/3, then this says that D is the largest N such that for all subsets of N qubits and all divisions of those qubits into subsystems A, B, we have S(AB) <= (S(A) + S(B))/3, where S() is the von Neumann entropy. For evidence of depth, I think that the presence of iron in the Earth is pretty good evidence. The only natural process we know for creating it is stellar nucleosynthesis, which (a) takes a very long time, and (b) requires quantum mechanics (and (c), I had to look up the name of on wikipedia..). Because of (a) and (b), we have evidence of deep quantum processes. I can't prove this, since I can't rule out the possibility of a low-depth classical method of producing lots of iron. Rather I think the evidence for it is like the evidence for evolution, which is that it's the only plausible theory that is consistent with the data, and that the theory alone has predicted things that weren't originally used to derive the theory. Note that I didn't say anything about any states being pure. This is because purity is subjective, and I don't know of a way our physical theories can meaningfully depend on it. This of course is a common theme in my (and Rachel's and Peter's and others') objections to Gil's conjectures, which is that they are phrased in ways that suggest Nature may have to know which states we prefer the system to be in. • Gil Kalai permalink May 16, 2012 8:14 am Dear Aram, this is a great comment with a lot of interesting things to think about. I am enthusiastic to see this clear explanation of Leggett's parameter (and it would be nice to discuss this parameter), and the iron as evidence for depth is exciting and we should certainly discuss it. I suppose I do not understand the point about purity. What do you mean by purity being subjective, what is the "this" that physical theory cannot depend on, and what is the critique of my conjecture that is referred to? • May 16, 2012 3:32 pm Hi Aram, here is a remark regarding Leggett's parameter as you described it.
In the context of Conjecture 4 and the notion of "highly entangled state", one idea was to base the notion on entanglement between partitions of the qubits into two parts. A counterexample to this idea (but for qudits), which came up in a discussion with Greg Kuperberg some years ago, looks like this: let G be an expander graph with valence 3 and with 2n vertices. Take 3n Bell pairs and arrange them into 2n qudits with d=8 according to the pattern of the graph G. Then at the d=8 qudit level, this state has a lot of excess entanglement for partitions into two parts. This is achieved simply by grouping the halves of the Bell pairs and not by doing any true quantum information processing. So maybe this is also an example of a very mundane state that attains a high value of Leggett's D-parameter. • June 22, 2012 1:16 am Regarding the expander counter-example: there's something a little ambiguous about Leggett's definition in that he states it for states that are symmetric under permutation, so that the reduced state of any N particles depends only on N. Probably the right pessimistic interpretation for non-symmetric states is that you want to choose the worst subset. So then it becomes "D := \max N such that for any S with |S|=N there exists T \subset S such that H(\rho_S) \le \delta\,(H(\rho_T) + H(\rho_{S \setminus T}))." But if that's the definition, then D will be very low for this expander construction you described, and for most non-symmetric states I can think of. It doesn't feel like a very robust definition, though. If we replace |S|=N with |S|<=N then you get something different. Presumably also we should restrict N to be << system size. And certainly replacing "for any S" with "exists S" would be far too permissive; then just having a bunch of EPR pairs would count. • John Sidles permalink May 19, 2012 2:19 pm Gil asks: Can somebody explain Leggett's parameter D precisely? I am struggling with this too. Hopefully the following LaTeX will be OK (apologies in advance if there are bugs). An explicit definition of D is given in Leggett's article (eqs. 3.1-3), and three concrete examples are worked out (the first example is marred by a typographic error: S_2=1 should be S_1=S_2=0). The part of the definition that I struggle with is the pullback-and-partition of the entropy S onto subsystems. In particular, the post-pullback partition into (spatially separated? weakly coupled?) subsystems is problematic … and such partitions are problematic in classical thermodynamics too. Presciently, Leggett's article authorizes us to adjust the definition of D as needed: We want D to be a measure of the subtlety of the correlations we need to measure to distinguish a linear superposition from a mixture. A variety of definitions will fulfil this role; for the purpose of the present paper (though quite possibly not more generally) the following seems to be adequate … We continue as follows, with a view toward eliminating problematic references to separation. Let a system S be simulated on a Hilbert space \mathcal{H} by unraveling an ensemble of Lindbladian dynamical trajectories, and let \rho_{\mathcal{H}} be the density matrix of the trajectory ensemble thus simulated. Pull back the Lindbladian equations of motion and dynamical forms onto a rank-r tensor product manifold \mathcal{K}, and let \rho_{\mathcal{K}(r)} be the density matrix of the trajectory ensemble thus simulated.
By analogy to the Flammia/Harrow measure \Delta, define a rank-dependent Kalai-style FT separator measure \Delta'(r) to be a minimum over the choice of tensor bases of the trace-separation \Delta'(r) = \underset{\mathrm{bases\ of}\ \mathcal{K}}{\min}\ \Vert\rho_{\mathcal{H}}-\rho_{\mathcal{K}(r)}\Vert_{\mathrm{tr}}. Then a Leggett-style rank-based variant of Kalai Conjecture C is: Kalai-type Conjecture C' For all physically realizable n-qubit trajectory ensembles, and for any fixed trace fidelity \epsilon, there is a polynomial P(n) such that \Delta'(P(n)) < \epsilon. This conjecture possesses the generic virtue of most tensor-rank conjectures: computational experiments are natural and (relatively) easy. It also has the generic deficiency of tensor-rank conjectures: it is not obvious (to me) how the conjecture might be rigorously proved. • John Sidles permalink May 19, 2012 2:24 pm Close … let's try again … "Then a Leggett-style rank-based variant of Kalai Conjecture C is:" Kalai-type Conjecture C' For all physically realizable n-qubit trajectories, and for any fixed trace fidelity \epsilon, there is a polynomial P(n) such that \Delta'(P(n)) \le \epsilon. Apologies for the \text{\LaTeX} glitches! 🙂 7. John Sidles permalink May 13, 2012 2:30 pm Aram and Steve refer in several places to "our note" … it would be helpful if a link were provided to this (otherwise mysterious) note. Or have I just overlooked a link? • May 13, 2012 2:33 pm "Note" and "paper" are synonymous—the post went through a long edit cycle, and their April 16 arXiv upload came in the middle of that. The link is at the top, and actually until three days ago it was at a less-stable "arxaliv" link. • John Sidles permalink May 13, 2012 9:48 pm Thank you, Ken, for clarifying that! The Flammia/Harrow note "Counterexamples to Kalai's Conjecture C" looks exceedingly interesting & employs several novel constructions … plausibly it will require comparable effort to digest as it took to conceive! 🙂 8. John Sidles permalink May 14, 2012 7:16 am As I slowly digest Steve and Aram's (really excellent and enjoyable!) arXiv note "Counterexamples to Kalai's Conjecture C" (arXiv:1204.3404v1), one concern that arises is associated to the restriction "states ρ which have been efficiently prepared." In designing an apparatus for efficient state preparation, it is natural to begin by generalizing the apparatus shown in Figure 1 (page 6) of Pironio et al.'s much-cited "Random Numbers Certified by Bell's Theorem" (arXiv:0911.3427v3). The natural generalization is conceptually simple: specify more ion cells, that generate more outgoing photons, such that state preparation is heralded by higher-order coincidence detection, as observed through unitary-transform interferometers having larger numbers of input/output channels. Visually speaking, just add more rows to Figure 1! AFAICT, in the large-n qubit limit this natural generalization is robust with respect to validation (that is, the state heralding is reliable, when we see it) but it is exponentially inefficient (the mean waiting time for state heralding is exponentially long in n). We might hope that this efficiency obstruction is purely technical, to be overcome (e.g.) with greater detection efficiency and lower-loss optical coupling between ions and detectors. But this limit is of course the limit of strong renormalization, and it is not obvious (to me) that the qubit physics remains intact following strong renormalization. These are hard questions.
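To make the inefficiency concrete, here is a minimal Python sketch. It is an illustration of my own, under the simplifying (and hypothetical) assumption that each of the n heralding channels succeeds independently with some fixed probability p per trial; the actual experimental parameters would of course differ.

# Minimal sketch under assumed independent heralding channels:
# each of n channels succeeds with probability p per trial, so an
# n-fold coincidence has probability p**n and the expected waiting
# time is p**(-n) trials -- exponential in n.
for p in (0.9, 0.5, 0.1):
    for n in (2, 5, 10, 20):
        print(f"p={p:.1f}, n={n:2d}: expected trials ~ {p ** -n:.3e}")

Even at p = 0.9 the expected waiting time at n = 20 is only about 8 trials, but at p = 0.5 it is already about 10^6, and at p = 0.1 it is 10^20, which is the sense in which the natural generalization above is exponentially inefficient.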
Over on Shtetl Optimized, where these same issues are being discussed, I had occasion to quote the following passage: "Non-physicists often have the mistaken idea that quantum mechanics is hard. Unfortunately, many physicists have done nothing to correct that idea. But in newer textbooks, courses, and survey articles, the truth is starting to come out: if you wish to understand the central 'paradoxes' of quantum mechanics, together with almost the entire body of research on quantum information and computing, then you do not need to know anything about wave-particle duality, ultraviolet catastrophes, Planck's constant, atomic spectra, boson-fermion statistics, or even Schrödinger's equation." (from arXiv:quant-ph/0412143v2). Among practicing researchers, this comforting belief — which has the great merit of being immensely inspiring to beginning students — was perhaps more widely held in the 20th century than at present … because the immensely long, immensely difficult struggle to build working quantum computers has slowly and patiently been teaching us humility. That the Kalai/Flammia/Harrow Conjecture C includes the phrase "efficiently prepared" (as contrasted with "efficiently described" for example) is evidence that these lessons learned are being assimilated and acted upon. Surely there is a great deal more to be said regarding these issues and obstructions, and we can all hope that one outcome of this debate will be a jointly written note from Aram and Steve and Gil, that surveys and summarizes (for 21st century students especially) the wonderfully interesting challenges and opportunities that are associated to this fine debate. • aramharrow permalink May 14, 2012 7:42 am Hi John, those are some good points, which I won't fully address. But I do agree that experiments that wait for multiple coincidences are not scalable, and wouldn't work for this kind of thought experiment. On the other hand, something like what Boris Blinov's group is doing (using entangled photons to entangle distant ions) would, I believe, address this problem. Obviously doing such an experiment once isn't easy, and doing it N times in parallel is only harder, but it's almost certainly harder only by a linear factor. • John Sidles permalink May 14, 2012 8:41 am Aram, a reference would be very helpful. I had a professor who was fond of quoting Julian Schwinger to the effect that certain facts were "well known to those who knew them well." In a similar vein, the arXiv note refers to "states whose physical plausibility is relatively uncontroversial" … and so it is natural and legitimate to wonder whether this opinion is shared by folks whose job it is to prepare these states. • aramharrow permalink May 14, 2012 9:05 am Some of this work is planned for the future, but this paper describes those future plans. I think it's uncontroversial that the states are physically plausible, and that any fundamental obstacle to their creation would be extraordinarily surprising, like discovering new energy levels for the hydrogen atom. But that is consistent with the fact that doing the experiment once is going to be very hard, and doing it N times will be something like N times as hard. • John Sidles permalink May 14, 2012 9:55 am Aram, I will look carefully at the link you provided.
As we both appreciate, large-n entanglement obstructions typically are associated with the adverse scaling (1-\epsilon)^n \simeq e^{-\epsilon n}, where \epsilon is some (finite) single-qudit single-operation error probability, and the proposed remediations of this adverse scaling typically are equivalent to some variant of quantum error correction … even in experiments whose intrinsic dynamics seemingly is non-computational. If there is any way to evade this generic mechanism, then I am eager to grasp it! • aramharrow permalink May 15, 2012 10:45 pm I guess one thing I should add is that our counterexamples construct states with high entanglement (according to Gil's measure) *without* getting into the challenging parts of scalable FTQC. So our point is not a very deep statement, it's simply that conjecture C is unrelated to the question of whether FTQC can work. As for your point about epsilon vs n, note that for photons, \epsilon goes like e^{-\hbar\omega / k_B T}, which is in one sense constant, but in another sense exponentially small, and in practice can really be very small. • aramharrow permalink May 16, 2012 10:28 pm One more thing along these lines. John S. points out that the tensor rank is low for the W state, meaning that such states are relatively uninteresting from the perspective of quantum computing. Based on this, you could view our counter-example as saying that Gil's entanglement measure counts too many things as entangled, including things that are so lightly entangled as to not provide computational advantage. Thus, it does not provide the quantitative Sure/Shor separator that he is looking for. 9. Serge permalink May 14, 2012 3:18 pm Let me explain the analogy a bit further. Heisenberg's uncertainty principle is due to the fact that, in order to locate a particle, you must shed light on it. Unfortunately, light is made of photons and photons are also particles. Similarly, in order to settle that a program is correct you have to write a proof. Unfortunately, proofs are also programs and this results in the following fact: "The more you know about the correctness of a program, the less you become able to know about its complexity class, and vice versa." This is, IMHO, the reason why all efficient "solutions" to SAT are not known to solve every instance. They only have an acceptable probability of correctness – they're called heuristic algorithms. Conversely, the algorithms used in artificial intelligence are often proven mathematically correct… but very little is ever said about their efficiency. • Serge Ganachaud permalink May 14, 2012 7:05 pm I wouldn't insist, but my preceding comment is a step towards P=NP being undecidable. 🙂 • Serge permalink May 26, 2012 1:39 pm In that regard, NP-completeness could be viewed as computer science's counterpart of the quantum level. • Serge permalink May 26, 2012 8:20 pm … and the analogy goes further, as the macroscopic level is made of the quantum level just like NP problems are polynomially reducible to NP-complete problems. I really think that defining suitable distances or topologies on the sets of problems, of proofs and of programs would suffice to prove that P=NP can't be proved. 10. May 15, 2012 1:26 am Aram and Steve's state W and related states. The parameter K(ρ). Here is a reminder of what K(ρ) is. Given a subset B of m qubits, consider the convex hull F[B] of all states that, for some k, factor into a tensor product of a state on some k of the qubits and a state on the other m-k qubits.
When we start with a state ψ on B, we consider D(ψ, F[B]), the trace distance between ψ and F[B]. When we have a state ρ on n qubits, we define K(ρ) as the sum, over all subsets B of qubits, of D(ρ[B], F[B]). Here ρ[B] is the restriction of ρ to the Hilbert space describing the qubits in B. The states W. Next let me recall the states we are talking about. We consider the state W_n = \frac{1}{\sqrt n}\,|00\dots 01\rangle + \frac{1}{\sqrt n}\,|00\dots 10\rangle + \dots + \frac{1}{\sqrt n}\,|10\dots 00\rangle. Let us also consider the more general state W_{n,k}, which is the superposition of all vectors |\epsilon_1\epsilon_2\dots\epsilon_n\rangle where the \epsilon_i are 0 or 1 and precisely k of them are 1. (So W_n=W_{n,1}.) Dicke states. In my paper I considered the state W_{2n,n} as a potential counterexample to Conjecture C. Again let me remind you that Conjecture C asserts that for a realistic quantum state ρ, K(ρ) attains a small value (polynomial in n). I thought about W_{2n,n} as a simulation of 2n bosons, each having a ground state |\,0\rangle and an excited state |\,1\rangle, such that each state has occupation number precisely n. While K(W_{2n,n}) is exponentially large in n, a rather similar pure state, the tensor product of n copies of (1/\sqrt 2)(|0\rangle + |1\rangle), is not entangled, and for it K is n. So it is quite important to understand well what is the state which is experimentally created. What Conjecture 1 says: Already Conjecture 1 is relevant to Aram and Steve's W_n (and the more general W_{n,k}). The conjecture predicts that the noisy W_n states are mixtures of different W_{n,k}, where k is concentrated around 1. It can be, say, the mixture, denoted by W_n[t], of W_{n,1} with probability 1-t and W_{n,0} with probability t. (Perhaps with additional ordinary independent noise on top.) So we can ask two questions: 1) Are the noisy W_n states created in the laboratory in agreement with Conjecture 1? If we realize W_{n,k} by k photons, the question is whether the number of photons itself is stable. Joe, when you refer to the state W_n that was constructed with reasonable fidelities, in the paper you have cited, what are the mixed states which are actually being created? 2) The second question is about a mathematical computation that extends what Aram and Steve did: What is the value of K(W_n[t])? Namely, if we have a noisy W_n of the type I described above, what will be its value of K? Is it still exponential in n? Leggett's disconnectivity parameter. If somebody is willing to write down Leggett's definition of his disconnectivity parameter and explain it, this will make it easier to discuss. The definition is short but I don't understand it that well. • May 15, 2012 3:19 am Hi Gil, The paper I referred to was a survey paper, not any one experiment. However, in quantum memory type experiments they aren't actually explicitly trying to generate W-states. Their ultimate goal is to basically absorb a photon for some period of time and then emit it. The physics of the situation is such that the state of the vapour is pretty close to a W-state, but that isn't really what they care about (although it is maybe what we care about); it is simply the mechanism for their trick to work. The fidelity I was talking about was of the emitted photon. This is only indirect evidence of the W-state, and measuring the state of the vapor itself seems likely to be beyond our current technological capabilities, but I believe it is reasonable evidence that we can create W-like states on a large scale.
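Since the objects in this exchange are easy to write down concretely, here is a small numpy sketch, an illustration of my own rather than anything taken from the papers under discussion, that constructs the states W_{n,k}, the noisy mixture W_n[t] described above, and the reduced density matrices ρ[B] that enter the definition of K(ρ). Computing K itself requires the trace distance to the separable hull F[B], which is hard in general, so the sketch stops at the ingredients; the subset entropies it prints are also exactly the quantities that enter Aram's reading of Leggett's parameter.

import itertools
import numpy as np

def w_state(n, k=1):
    # Equal superposition of all n-qubit basis states of Hamming weight k,
    # so w_state(n) is W_n = W_{n,1} and w_state(2*m, m) is the Dicke state W_{2m,m}.
    v = np.zeros(2 ** n)
    for bits in itertools.combinations(range(n), k):
        v[sum(1 << (n - 1 - b) for b in bits)] = 1.0
    return v / np.linalg.norm(v)

def dm(psi):
    return np.outer(psi, psi.conj())

def reduced(rho, keep, n):
    # Partial trace of an n-qubit density matrix, keeping the qubits in `keep`.
    t = rho.reshape([2] * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        cur = t.ndim // 2                  # number of qubits before this trace
        t = np.trace(t, axis1=q, axis2=q + cur)
    d = 2 ** len(keep)
    return t.reshape(d, d)

def entropy(rho):
    # von Neumann entropy in bits.
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

n, t = 6, 0.1
W = dm(w_state(n))                          # pure W_6
W_t = (1 - t) * W + t * dm(w_state(n, 0))   # the mixture W_n[t] of W_{n,1} and W_{n,0}
for B in ([0], [0, 1], [0, 1, 2]):
    print(B, round(entropy(reduced(W, B, n)), 4), round(entropy(reduced(W_t, B, n)), 4))

Whether such small-n numerics say anything about the asymptotic value of K(W_n[t]) is of course exactly the open question above; the sketch only makes the states themselves concrete.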
• May 15, 2012 5:16 pm Hi Aram, Joe, all, The motivation behind the parameter K(ρ) was indeed coming from error-correcting codes that correct c errors. There, small subsets of qubits behave like product states, but for larger sets (of size c+1 if I remember right) you will get substantial contributions to the terms defining K. As Aram and Steve showed, much more mundane states like W_n have exponential value for the parameter K. I certainly agree that W does not look like it expresses exotically strong entanglement. And I tend to agree that W-like states can be created. What we can think about is whether such expected W-like states, like those I described above, also have an exponential value of K. Also I am not sure we have exhausted all possible ways that the state W can be implemented. • May 15, 2012 11:31 pm Hi Gil, I was just trying to answer your question "is the W state realistic?". I think we have pretty strong evidence that it is, even at extremely large scales. Certainly we have not considered even a small fraction of the ways it can arise with relative ease, but I would have thought even the few examples considered thus far should be convincing enough on their own. • Gil Kalai permalink May 16, 2012 7:53 am Right, Joe, thanks. I am quite convinced about W being realistic. My follow-up question was what realistic W-like states look like. In particular, how do they look as mixed state approximations of W? (This may differ for different examples, which is why I thought it would be useful to consider more examples.) This follow-up question is relevant first to being sure about the parameter K, and also to Conjecture 1, which predicts what mixed state approximations of W look like. • aramharrow permalink May 16, 2012 8:04 am I think that what makes the photon-based W states realistic is that the per-qubit noise decreases exponentially with photon frequency, or more precisely, photon frequency divided by temperature. (And this noise should be on the order of 1 / #qubits.) So large, nearly-pure W states should be feasible. If I've calculated right, then this ratio should be on the order of 100 when the photons are visible light and the temperature is room temperature. This means that thermal noise per qubit would be e^{-100}. I guess this means that other sources of noise will be more relevant. Although photon loss isn't much of a risk, I'm not sure what the dominant source of noise would be, really. • May 16, 2012 12:30 pm Aram: Once you bring detectors into the picture, you would need to worry about dark counts, which would become more and more important as the probability of there being a real photon in that mode decreases. This doesn't alter the underlying state, but would decrease the fidelity of any reconstructed density matrix. • aramharrow permalink May 16, 2012 12:39 pm Sure, but for the purpose of the conjecture, we don't need detectors; we just need to believe that at some point, the state existed. • ramirez permalink May 15, 2012 10:17 pm Leggett's disconnectivity parameter (offline) begins when we are trying to create some programming strings in a programmatic language like C+ or C++. It does not define the positions on a matrix. We are considering that the zeros and ones are traveling at the speed of light. Heisenberg's uncertainty says that in order to read a bit you have to write it first. There is nothing faster than the speed of light traveling in the empty space; once you go off the boundaries of the domain of the matrix P=1, the recording bits are unreachable.
According to Bayes, any statistical notation or bit recorded on (sidereal) spaces out of the reach of a logic gate disrupts the real-time connectivity. Let's say that you send a spatial probe out of the system and you need to communicate in real time with it; you send the information of the programmatic strings wherever they are, but the distance the probe is moving is a couple of billion light years away, so simply the information wouldn't be there on time, and several years later you receive some static final. What happens? You lose your grip on real time. Here Einstein talks about the light-bending coefficient when the radius of the matrix integration domain goes off the limits of connectivity of communication of a logic gate. The generated inertial force at the end of the string will catch a gravitational force as the spin on the radius goes on. This is going to create an inverse value that is considered as antimatter, modifying its structure. Einstein observes the star lights of the supernovas' detonations (gamma rays) and sees that the exploding stars generate a light pulse that travels faster than the limit of the light constant measured in an empty space. At this condition Einstein calls it Quanta, and states that star light travels in Quanta, so he does not take C++. As a solution for the disconnectivity problem he divides P=1 between the sidereal time and real time, obtaining B as a radical of the space-time P=1 equals P=-1, or inverse logic gate. As a radical of 1 he obtains K=0 because the bit travels in linear regression. These considerations came with the relativity theory, where the speed of the light is relative to the conductor where it does travel, and to reconnect the logic gates in time-space (two computers at sidereal distances) one needs to accelerate to C-square times, through a gravitational compression that can liberate a propulsion force that breaks the speed of the light. (Quanta is considered a wormhole.) To create stacks for programming strings according to Bayes' theorem requires considering the variance and the standard deviation. Here the hue or deepness is a primordial problem due to the current flow in a quantum solid state receptor. The compression state for the Large Hadron Collider cannot reach speeds faster than the light; the tera-electron-volt cannot generate the antimatter needed for this kind of propulsion. 11. John Sidles permalink May 16, 2012 10:29 am Please let me say that I too regard W-states as being realistic (that is, experimentally feasible). For me, the salient feature of W-states is not their exponential-in-n K-value, but rather their polynomial-in-n tensor rank. Respecting tensor rank as a natural measure of quantum state feasibility, two recent survey articles (that IMHO are very interesting and well-written) are Cirac, Verstraete, and Murg "Matrix Product States, Projected Entangled Pair States, and variational renormalization group methods for quantum spin systems" (arXiv:0907.2796v1) and also Cirac and Verstraete "Renormalization and tensor product states in spin chains and lattices" (arXiv:0910.1130v1). In the former we read: As it turns out, all physical states live on a tiny submanifold of Hilbert space. This opens up a very interesting perspective in the context of the description of quantum many-body systems, as there might exist an efficient parametrization of such a submanifold that would provide the natural language for describing those systems.
and in the latter we read: The fact that [low rank] product states in some occasions may capture the physics of a many-body problem may look very surprising at first sight: if we choose a random state in the Hilbert space (say, according to the Haar measure) the overlap with a product state will be exponentially small with N. This apparent contradiction is resolved by the fact that the states that appear in Nature are not random states, but they have very peculiar forms. This is so because of the following reason … These considerations from Cirac, Verstraete, and Murg suggest that perhaps Gil Kalai's K-measure might usefully be evolved into a rank-sensitive R-measure … the granular details of this modification are what I am presently thinking about. Please let me thank everyone for helping to sustain this wonderful dialog! 🙂 • John Sidles permalink May 16, 2012 2:01 pm As an addendum, it turns out that the above-referenced Cirac/Verstraete/Murg preprint arXiv:0907.2796v1 is substantially the same work as that referenced as [3] of the Flammia/Harrow note "Counterexamples to Kalai's Conjecture C" (arXiv:1204.3404v1). It is striking that one and the same article serves simultaneously to: (1) inspire counterexamples to Gil Kalai's specific conjectures, and (2) inspire confidence too that the overall thrust of these conjectures is plausibly correct. As usual, Feynman provides an aphorism that is à propos: "A great deal more is known than has been proved." In the present instance, this would correspond to "We (believe that we?) know that Gil's thesis is correct (but in what form?) however we have not (as yet?) proved it." • John Sidles permalink May 17, 2012 4:41 am Further respecting tensor rank, I tracked down the provenance of the Feynman quote (or misquote, as it turns out). From Feynman's Nobel Lecture "The Development of the Space-Time View of Quantum Electrodynamics": "A very great deal more truth can become known than can be proven." With regard to product states, we have works like J. M. Landsberg's "Geometry and the complexity of matrix multiplication" (Bull. AMS, 2008) to remind us of how very much is known about these state-manifolds, and equally strikingly, how very much is not known. And so it seems (to me) that Gil's conjectures are very much in accord with this honorable tradition of mathematics and physics, of seeking to state concretely and prove rigorously, an understanding for which there exists an impressive-but-imperfect body of evidence. 12. May 18, 2012 4:40 am Quantum FT separators, the parameter K(ρ), and the state W. 1) Conjecture C is meant to draw a line (of asymptotic nature) between states whose construction does not require quantum fault tolerance and states which require quantum fault tolerance. We will call such a proposed separation a quantum FT-separator. 2) The border line was supposed to leave out quantum error correction codes that correct a number of errors that grows to infinity with the number of qubits. 3) The parameter K(ρ) was based on this idea, since for an error correction code correcting c errors on n qubits its value is roughly n^c. However, as Aram and Steve showed, the much more mundane state W has an exponentially large value of K. This means that K does not capture what it was supposed to capture. 4) As mundane as W is, it is interesting to examine how it can be implemented and what are the mixed state approximations that we can expect for W. (This is relevant also for my Conjecture 1.)
In order to be sure that K is not appropriate to draw any reasonable line of the intended sort, it will be useful to compute K for such W-like states, e.g. what I called W_n[t]. Aram and Steve's qudit example and Leggett's disconnectivity parameter. 5) Aram and Steve's qudit construction is based on the idea that for a state which can be created without quantum fault tolerance, the parameter K(ρ) (extended to qudits) should remain low in every way we group qubits together into bounded-size blocks. They exhibit a state which certainly can be prepared on 3n qubits so that when the qubits are grouped into sets of 3, the parameter K(ρ) for the qudits becomes exponential. 6) It is certainly a nice property for a parameter proposed as a quantum FT separator to remain low under such grouping. I am not sure that it is conceptually correct to make this a requirement, and I will discuss this matter in a separate comment. 7) It is noted in this remark that a different qudit example in a similar spirit to Aram and Steve's example may be used to exhibit a very mundane state with a high value of Leggett's disconnectivity parameter (in the way Aram described this parameter in this comment). Let's discuss it! Bounded-depth circuits as FT separators. 8) The principal proposed FT separator described in the post is based on bounded-depth computation. With the exception of a comment by Aram, we have not discussed this proposal so far. One counterargument raised by Aram is based on nature's ability to create heavy atoms. This is a terrific idea. It would be interesting to discuss whether the process leading to heavy atoms requires some sort of quantum FT, requires "long" (high-depth) evolutions, or perhaps even exhibits superior computational power. (I am skeptical regarding these possibilities.) 9) It will be interesting to describe experimental processes that may exhibit or require long quantum evolutions. 10) One of the nice things about bounded-depth classical computation is that it leads to functions with very restricted properties: bounded total influence (Hastad-Boppana); exponentially decaying Fourier coefficients (Linial-Mansour-Nisan); etc. Are there analogous results in the quantum case? 11) The bounded-depth parameter satisfies the grouping requirement because we can regard qubit operations as qudit operations and replace each computer cycle on the qubit level by several qudit cycles. • John Sidles permalink May 18, 2012 7:32 am Gil, thank you for this very fine summary. For me, the most natural candidate for an FT separator is the tensor rank, that is, "n-qubit states require FT iff their rank is exponential-in-n." Perhaps the main objection to this separation is not that it is implausible, but rather that (with our present mathematical toolkit) it is so very difficult to prove rigorous theorems relating to tensor rank. Christopher Hillar and Lek-Heng Lim's preprint "Most tensor problems are NP-hard" (arXiv:0911.1393v3 [cs.CC]) provides an engaging discussion of these issues, with the witty conclusion: "Bernd Sturmfels once made the remark to us that 'All interesting problems are NP-hard.' In light of this, we would like to view our article as evidence that most tensor problems are interesting." From this perspective, perhaps progress in establishing (rigorous) Sure/Shor separations is destined to parallel progress in establishing (rigorous) complexity class separations.
To put it another way, we would be similarly astounded at any of the following four announcements: • a rigorous resolution of \text{P}\overset{?}{=}\text{NP}, or • a rigorous proof of quantum computing infeasibility, or • a practical demonstration of a large-n quantum computation, or • experimental evidence that Nature's state-manifold is non-Hilbert. And the mathematical reasons for our amazement would be similar in all four cases. • John Sidles permalink May 18, 2012 7:53 am Hmmm … the concluding four-item list was truncated by a LaTeX error. The intended list was: • a rigorous rank-based FT separation, or • experimental demonstration of a large-n quantum computer, or 13. May 18, 2012 8:11 am Dear John, Let me just first make sure that we all understand the term tensor-rank in the same way. It is the minimum number of product pure states required to represent your pure state as a linear combination. (Am I right?) Since we talk about approximation, we had perhaps better replace "represent" by "approximate". Anyway, tensor rank seems a natural thing to think about. (And I don't remember whether I considered it before and forgot, or never did.) I would worry that Aram and Steve's qudit example may have large tensor rank in the qudit tensor structure. I prefer talking about FT-separators and not about Sure/Shor separators mainly to avoid computational complexity issues. The issue of Sure/Shor and FT separators is much simpler and clearer if we talk about noisy states and not about pure states. The simplest FT separator is a (protected) essentially noiseless qubit. It is an interesting problem (first to put on formal grounds and then to solve) whether a single protected (essentially) noiseless qubit is a Sure/Shor separator. 14. John Sidles permalink May 18, 2012 9:02 am Yes, let's affirm that "tensor rank" shall accord with Wikipedia's definition of tensor rank (which is the same as your post's definition) and that "approximate" is a more precise description of what we want than "represent". • John Sidles permalink May 18, 2012 11:39 am Hmmm … some subtleties associated to tensor rank, that are not mentioned in the Wikipedia article — in particular the distinction between the rank and the border rank of a tensor — are discussed in J. M. Landsberg's "Geometry and the complexity of matrix multiplication" (Bull. AMS, 2008). AFAICT the rank/border-rank distinction is not materially significant to rank-based FT separations. But who knows? I have myself encountered numerical instabilities that are associated to this distinction. The main point is that Landsberg's definition of tensor rank, given as Definition 2.01 in his article, provides a rigorous entrée into a mathematical literature that is vast, broad, and deep. To thoroughly grasp FT separations, it seems plausible (to me) that we will have to swim in Landsberg's mathematical ocean … or at least wade in it. 🙂 15. ramirez permalink May 18, 2012 8:25 pm Tensor is the term used as "dynamic energy tension" inside the electron structure. There are two considerations about it. One is the electromagnetic field that considers the electron spin due to its structure; it's divided into cycles, sine and cosine, and it's divided into bits and bytes: 4, 8, 16, 32, 64, etc., a logarithmic progression. The other is the variant and covariant tensors that do not possess an electromagnetic field, but a gravitational tension. This inertial force creates a linear regression on the atom, not necessarily on the electron; this tensor can be found on the small particles such as the gluon, muon, etc.
The term quantic does not exist before Einstein comes out with the relativity theory of time and space and writes a chapter making evident this difference between electromagnetism and gravity. The Q-bit or quantum bit is the subatomic charge that can be recorded in a tight space as in Hilbert's calculations; however, since the gravitational compression came to the electronics field we can store more information in the same space. One picture used to just fit in a 2-megabyte chip; now the same chip can be used for 2, 4, 8 gigabytes, and so on. This compression rate allows storing more information, but to use the term quantum bit it is needed to obtain the radical compression of 1=C, the constant of the speed of light, so the exponential should be a quadratic equation; in this case we would be recording with radicals smaller than the nano. P=1 as a matrix value has to be a cubic exponential. This quantum bit or q-bit would be an antiquark or antiproton inside the programmatic stack. The hertz wave runs at 2.4 gigahertz, but this speed would not be fast enough to bridge a logic gate for distances where C, the constant of the speed of the light, is squared. We will get the femto, yocto atomic weight; usually this is the nuclear radiation value. This anti-q-bit is out of the boundaries of real numbers (manifolds dilemma); the quantum bit is in what is called linear regression on time and space. The polynomial equations on integrals X, Y, Z on a matrix P=1 squared, C squared, create linear regression strings on programming quantum bits that are defined by Planck and Einstein with the term "momenta", and Niels Bohr has to admit one antiproton in his atomic model. The standard deviation and variance create the indexes that are considered "Jacob's Ladder equations", deviations on time and space. 16. May 20, 2012 2:46 am Another parameter which can be relevant for distinguishing "simple" and "complicated" quantum states is based on the expansion into multi-Pauli operators. Suppose that your quantum computer is supposed to perform the unitary operator U, and let U=\sum \alpha_S S be the multi-Pauli expansion of U, namely the expansion in terms of tensor products of Pauli operators. Here S is a word of length n in the 4-letter alphabet I, X, Y, Z, and \alpha_S is a complex number. For a word S we denote by |S| the number of consonants in S. Define the Pauli influence of U by: I(U) = \sum \alpha_S^2 |S|. We can consider I(U) as a complexity parameter. The advantage of using the Pauli expansion is that it is much simpler compared to parameters like my K, and tensor-rank. (In some other cases it turned out that multi-Pauli expansion is the best way to express my conjectures mathematically.) • John Sidles permalink May 21, 2012 11:21 am Gil, supposing that for an n-spin system, a family of unitary transforms U(n) is given such that I(U(n)) is \mathcal{O}(P(n)) for some polynomial P(n). Then does it follow too that I(\log U(n)) also is \mathcal{O}(P'(n)) for some (different) polynomial P'(n)? Here the physical motivation is that \log U is a Hamiltonian that generates U. • May 21, 2012 2:53 pm Such complexity I(H) for the "true quantum" gate H=(I+iX)/\sqrt{2} would be less than for the "pseudo-classical (swap)" gate X. Is that OK? PS. Naive technical remark: is Y considered a consonant? • May 22, 2012 9:33 am PS2: Maybe in the expression for I(U) one should use |\alpha_S|^2? 17. ramirez permalink May 20, 2012 9:39 pm Pauli's structure of analysis thrives on the electrons at the atomic level.
The expansion factor of the atom is when it is stable and when it is in expansion (explosion). Alfred Nobel obtained his fortune discovering dynamite; the expansion factor when the atoms are at rest, and Pauli's exclusion principle of the subatomic structure, became a quantum step towards the principle of the energy tensor. However, the exponential reaction during its combustion (released energy) generates a wavelength proportional to its compression state (solid quantum state). James Clerk Maxwell separates electricity from magnetism; what's the deal? The electromagnetic field around an iron pigtail intensifies its force (K is the magnetic constant). This kind of compression does not have a quantum space (defined as an empty space in motion according to the relativity theory). The electron has a curly tensor, as defined by Richard Feynman at Caltech in the book "The Beat of a Different Drum"; during its expansion state it releases a spin force (rips a curl). Einstein's conjectures (opinions) led him to split the atom and avoid the fermion problem, because the tensor structure of the energy traveling in an empty space in motion had to carry a linear energy release effect and avoid the heating of the atom before it releases the total amount of its energy, at the same time avoiding the boundary problems (manifolds) of the total release of the energy in a chain reaction (obstructions in the combustion), overheating as in a nuclear reactor. The expansion factor is proportional to the inverse (covariant) tensor; when this tension is at K=0, the inertial force forces the atom to travel backwards in time (this energy moving in an empty space in motion) and creates a nonmagnetic tensor like antimatter. It means that nitroglycerin is condensed expansive fuel, like the actinides: when they are in the presence of a detonator, the energy release has a wave propulsion similar to the communications gate in your cellular phone and reaches speeds almost like the speed of light; this is what is called a quantum bit or Q-bit. Pauli's subatomic structure does not have quantic form. The nuclear energy released in a chain reaction inserts itself in the nucleus of the other atoms, reproducing the same effect as splitting the atom, due to the reason that it is traveling faster than the speed of light (this is when it is considered a quantic operator). Quantum computers use the same principle in the Hubble sidereal observations; the microchip measures its operations in gigahertz (speed), the Q-bits on memory stacks are compressed in gigabytes, and the programming string codes only create an assigned value of an operator, but if those operators are not defined on the memory arrays you can have a dysfunctional programmatic response. Quantum physics is being used to simplify multiple and complex operations. 18. May 21, 2012 1:31 am "…we conclude because A resembles B in one or more properties, that it does so in a certain other property." John Stuart Mill, "System of Logic Ratiocinative and Inductive [1843]", Chapter XX on analogies. Learning from analogies is a difficult matter, and often discussing analogies is not productive, as it moves the discussion away from the main conceptual and technical issues. But it can also be interesting, and being as far as we are in this debate, while concentrating on the rather technical matters around Conjecture C, we can mention a few analogies. (Studying analogies was item 21 on my list of issues that were raised in our discussion.)
1) Digital computers. Scott Aaronson: When people ask me why we don't yet have quantum computers, my first response is to imagine someone asking Charles Babbage in the 1820s: "so, when are we going to get these scalable classical computers? by 1830? or maybe 1840?" In that case, we know that it took more than a century for the technology to catch up with the theory (and in particular, for the transistor to be invented). The main analogy of quantum computers is with digital computers, and of the quantum computer endeavor is with the digital computer endeavor. This is, of course, an excellent analogy. It may lead to some hidden assumptions that we need to work out. 2) Perpetual motion machines. The earliest mention of this analogy (known to me) is in 2001 by Peter Shor (here): Nobody has yet found a fundamental physical principle that proves quantum computers can't work (as the second law of thermodynamics proves that perpetual motion machines can't work), and it's not because smart people haven't been looking for one. I was surprised that this provocative analogy is of some real relevance to some arguments raised in the debate. See e.g. this comment, and this one. 3) Heavier-than-air flight. Chris Moore: "Syntactically, your conjectures seem to be a bit like this: 'We know that the laws of hydrodynamics could, in principle, allow for heavier-than-air flight. However, turbulence is very complicated, unpredictable, and hard to control. Since heavier-than-air flight is highly implausible, we conjecture that in any realistic system, correlated turbulence conspires to reduce the lift of an airplane so that it cannot fly for long distances.' Forgive me for poking fun, but doesn't that conjecture have a similar flavor?" This is also an interesting analogy. The obvious thing to be said is that perpetual motion machines and heavier-than-air flight represent scientific debates of the past that were already settled. 4) Mission to Mars. Scott: Believing quantum mechanics but not accepting the possibility of QC is somewhat like believing Newtonian physics but not accepting the possibility of humans traveling to Mars. 5) Permanents/determinants; 2-SAT/XOR-SAT. This is a very nice analogy which gives a very good motivation and introduction to Aram's first point. I also related to it in this comment. Of course, unlike the P=NP problem, or the question about solving equations with radicals, the feasibility of universal QC is not a problem which can be decided by a mathematical proof. 6) Solving equations with radicals. When it comes to the content, I do not see much similarity between QC and solving polynomial equations. But there are two interesting points that this analogy does raise: 1) Can we work in parallel? Is it possible to divide (even unevenly) the effort and attention between two conflicting possibilities? It is quite possible that the answer is "no," because of a strong chilling effect of uncertainty. (See e.g. this comment.) 2) The failure of the centuries-long human endeavor to find a formula for solving general degree-5 equations with radicals is not just "a flaw." It was not the case that the reason for this impossibility was a simple matter that mathematicians overlooked. The impossibility is implied by deep reasons and represents a direction that was not pursued. It required the development of a new theory over years with considerable effort.
7) The unit-cost model. Leonid Levin (here): This development [RSA and other applications of one-way functions] was all the more remarkable as the very existence of one-way (i.e., easy to compute, infeasible to invert) functions remains unproven and subject to repeated assaults. The first came from Shamir himself, one of the inventors of the RSA system. He proved in [Inf. Process. Lett. 8(1) 1979] that factoring (on infeasibility of which RSA depends) can be done in a polynomial number of arithmetic operations. This result uses a so-called "unit-cost model," which charges one unit for each arithmetic operation, however long the operands. Squaring a number doubles its length; repeated squaring brings it quickly to cosmological sizes. Embedding a huge array of ordinary numbers into such a long one allows one arithmetic step to do much work, e.g., to check exponentially many factor candidates. The closed-minded cryptographers, however, were not convinced, and this result brought a dismissal of the unit-cost model, not RSA. This is an interesting analogy. 8) Analog computers. This is an analogy that is often made. See, for example, these lecture notes by Boris Tsirelson, where Boris's conclusion was that the analogies between quantum computers and both digital and analog computers are inadequate, and quantum computers should be regarded as a new uncharted territory. I find what Boris wrote convincing. (I never understood, though, what is wrong with analog computers.) In Boris's own words: A quantum computer is neither digital nor analog: it is an accurate continuous device. Thus I do not agree with R. Landauer, whose section 3 is entitled: Quantum parallelism: a return to the analog computer. We do not return, we enter an absolutely new world of accurately continuous devices. It has no classical counterparts. 9) Magic noise-cancelling earphones. Here is an analogy of my own: We witness in the market various noise-cancelling devices that reduce the noise by up to 99% or so. Is it possible in principle to create computer-based noise-cancelling earphones that will cancel essentially 100% of the noise? More precisely, the earphones will reduce the average noise level over a period of time T to O(1/n) times the original amount, where n is the number of computer cycles in T. • John Sidles permalink May 22, 2012 2:15 pm Another analogy is that we are struggling with a mismatch between "technology push" and "requirements pull". At present the "requirements pull" is relatively weak — there isn't much market-place demand for fast factoring engines, and as for quantum dynamical simulations, during the past 20 years the Moore-exponent of improvements in classical simulation capability has substantially outstripped the Moore-exponent of improvements in quantum simulation capability … and there is no obvious end in sight. As for the proof technology push, here too we have only barely begun to integrate existing quantum algebraic-informatic tools with differential-dynamic tools. As Vladimir Arnold expressed it: … Conclusion: We stand in need of a version of Conjecture C that is designed so as to be simultaneously: (1) concretely responsive to the "requirements pull" of the 21st century, and (2) creatively amenable to an Arnold-style "technology push." • May 22, 2012 2:22 pm Another very good analogy, in my view, is with Bose-Einstein condensation: an idea that was theoretically proposed in 1924-25 and was first realized experimentally in 1995, after attempts to do so from the mid-fifties.
This is a great "role model" for the QC endeavor, and it is also related to various technical issues in our discussion. (Also, some of the heroes of the BE story are now part of the QC efforts.) • March 20, 2014 2:42 am Another interesting analogy, with alchemy and the goal of transmuting lead into gold, was raised e.g. by Scott Aaronson in this discussion over Shtetl Optimized. What is interesting here is that the principles given by atomic theory and modern chemistry for why lead cannot be turned into gold were of course of huge importance in science, and yet, one could say that now, with subsequent further understanding, one can argue that it is actually possible "in principle" (but we no longer care) to turn lead into gold. (You can even say that understanding the principle for why it is impossible was crucial for understanding later on the principles for why it is possible.) See here for a related remark by Dick Lipton over my blog, also referring to perpetual motion machines. • March 20, 2014 6:51 am History repeats itself. Not even sure that these three problems – lead transmuting into gold, P=NP, large-scale quantum computing – are theoretically so distinct from each other… 19. May 29, 2012 3:05 pm Looking at Conjecture C for codes, I cannot help but think about separability of pure states, and since we are talking qudits now, it is worth noting that it is possible to place upper and lower bounds on separable states around maximally mixed states. That literature is also worth reviewing. If we think of noisy quantum systems as non-separable, where system and environment are entangled, then the question seems to be whether we can identify some separable pure subsystem of the noisy system of some size, with a measure c of the number of errors the subsystem can correct. My thinking is that we really can't answer this question without thinking dynamically, e.g. without thinking about the time dependence of the system. If we are thinking in terms of computing, we have to place a time envelope around the beginning and end time of the computation. So in this sense, one can think about there being a bubble in the mixed system that has sufficient life to complete some sort of operation. This makes one want to use the constant c in the context of a measure of temporal pure states that follow a decay function like exp(-ct), where the upper bound of c is limited largely by the remaining entropy growth in the larger noisy system. Quantum teleportation gets around no-cloning by destroying one copy and creating another, with a spacetime gap between the two copies. So it doesn't seem counterintuitive to suggest that we can introduce similar temporal restrictions on the code. So if we dump any notion of eternally pure states, and begin asking questions about the scalability of more temporal pure states, I think the size of the separable pure states will be largely dictated by the size of the larger noisy system and where that system is in its evolution with respect to some observer. 20. May 31, 2012 12:57 pm A piece of news related to the debate: Here at the Hebrew University of Jerusalem, the new Quantum Information Science Center had its kick-off workshop yesterday, May 30, and I gave a lecture on the debate with Aram. Here are the slides of the lecture. It covered my initial post and Aram's three posts but did not go into the rejoinders and the discussion. There were several interesting comments related to the discussion which I will try to share later.
As quantum information is an interdisciplinary area in the true sense of the word, and also in the good sense of the word – an area that involves several cutting-edge scientific topics – it is only natural that HU's authorities enthusiastically endorsed the initiative to establish this new center. The entire workshop was very interesting. Dorit Aharonov gave a beautiful talk claiming that a new scientific paradigm of verification is needed to check quantum mechanics beyond BPP. This talk is quite relevant to a discussion we had here on the post "Is this a quantum computer". The other talks were also very interesting.

21. ramirez permalink June 4, 2012 11:44 am
I've seen this dialog going from polynomials to black holes, and it seems to me that you are looking for the "God"el's references in modern physics. The dialog with "Aram"aic on the signature of the glyph-encrypted black door that talks about god's paradise is very similar to the quantum time-space; the "Tora" talks about creation, "one empty space where he created the material things". Einstein said the same thing, "an empty space", but he said "(in motion)". The Godel polynomial on the quintic equation on a bi-dimensional plane, where you take one ordinal line X to an exponential 5: "from one to five there is a gap to reach that place (here there is motion)" – this was not solved in that time. This gap created the Ana "Frank"-incense room, which was an annex to the house where she was hiding. This gap factor is the same principle as the quintic "mistere" of the Golden Shrine and the small temple behind it in Israel. This equation was discovered by the German officers upon its denunciation. The separation of the origin of X to its radical created a linear gap called hue, or deepness. So Einstein had to go to a tri-dimensional ordinal sketch and use the principle of "gravity as two forces that attract and repel each other", where the radical is a compression state (quantum state). Some get confused by this assumption, saying gravity is = 2, where in reality it is one at ground zero. So he decides to split the exponential function of one, and have 1/2 of the sine, 1/2 cosine = 1. On the quintic equation the observer sees the variable X moving from left to right or right to left, so he changes the position to the Z variable, and he sees that the light bends under the inertial force when it is moving from X zero to X5; what you are seeing is that the point of origin moved, because the place where you are standing is moving also (Galileo's paradox). So Einstein made some calculations and discovered those two forces that counteract, and he uses the equation on 8 times the radius or the speed of light, creating the "Octil" parameter (Octli). So this inertial force would create the gravity needed to create a quantum state of particles in motion in an empty space (Bose-Einstein principle). This gravity shield (Schwarzschild ring-of-gravity equation), influenced by the spinning of the particles on a distant polynomial, would create a gravity line from X zero to X−1 = radical. The first gravity force on "Z", when the integration of the variables on a displacement towards the point of 8 times the radius at the speed of light has surpassed the boundaries of the real numbers (manifolds). The numeric perception becomes not real, and 1/2 can be considered different from the original values (values of perception, Jean Piaget).
The quantum space is considered an extra dimension when the integration values (samples) are far away beyond the distance of the speed of light. The uncertainty of finding the same spot in the same time-space, in another dimension, has brought the idea of quantum bits to make terahertz-fast microchips for recording quantum bits. However, one bit-second in the fire might seem like a year, or one billion years with Ruth might feel like a second. These are the principles of the Aramaic encryption language; later on, the Hittite language shows some modification of time-space, from the present to the future. Any space that you can see and perceive with your eyes has an arch function inside your mind. The phrase "Victory is for her who wins with her sight and ankles" was used during the Roman Empire as a symbol of power. Who is she? The Tevatron is trying to generate this gravitational force to accelerate the hydrogen in the fuel cell, and use water as fuel. The water as fuel has to have this pressure; the fuel pressure sensor has to indicate the fission point where the hydrogen atom jumps from one dimension to another, generating friction and temperature within itself. It is called the Mikamoka antimatter bit. The Borgia Codex shows some codifications that talk about this empty space in motion, but it is similar to the old Babylonian cuneiform script of Mount Sinai. Shalom.

22. June 4, 2012 1:02 pm
One interesting issue that was raised by Nir Davidson at our quantum information center kick-off workshop is the "paradox" regarding chaotic behavior in quantum and in classical systems. In a classical chaotic system, two nearby initial states may be carried far apart by the dynamics. In fact, their locations may become statistically independent (in a certain well-defined sense). In "contrast", for two nearby states in a quantum evolution their distance (trace distance) remains fixed along the evolution. Nir described a formulation by Asher Peres of the "paradox", as well as how to resolve it, and some related experimental work. This issue is relevant also to classical and quantum computers. If we corrupt a single bit in a digital computer, then as the computation proceeds we can expect this error to infect more and more bits, so that the entire computer's memory will be corrupted. In "contrast", if we let quantum noise affect a single qubit and continue the quantum evolution without noise, the trace distance between the intended state and the noisy state remains the same. What can explain this difference? The answer (I think) is quite simple. It has to do with the distinction between measuring noise in terms of trace distance and measuring it in terms of qubit errors. When you corrupt one qubit and let the error propagate through a complicated noiseless quantum computation, the trace distance between the intended and noisy states stays fixed, but the number of qubit errors grows with the computation, so just as in the classical computer case the noise will affect the entire computer memory. This is related to the fact that the main harm in correlated errors is that the error rate itself scales up.
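[Both halves of this observation are easy to check numerically. The following minimal numpy sketch – an editorial illustration, the comment itself contains no code – corrupts one qubit of a three-qubit state and then applies the same random unitary "computation" to both the intended and the noisy state: the trace distance is unchanged by the unitary, even though the error has by then spread over all the qubits:]

    import numpy as np

    def trace_distance(rho, sigma):
        # D(rho, sigma) = (1/2) * sum of the singular values of rho - sigma
        return 0.5 * np.sum(np.linalg.svd(rho - sigma, compute_uv=False))

    n, d = 3, 2 ** 3
    psi = np.zeros(d); psi[0] = 1.0            # intended state |000>
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    phi = np.kron(X, np.eye(d // 2)) @ psi     # bit flip on the first qubit

    rho, sigma = np.outer(psi, psi), np.outer(phi, phi)

    # a random unitary standing in for a complicated noiseless computation
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

    print(trace_distance(rho, sigma))          # 1.0 (orthogonal pure states)
    print(trace_distance(U @ rho @ U.conj().T,
                         U @ sigma @ U.conj().T))  # still 1.0 after the unitary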
23. Serge permalink June 6, 2012 5:13 am
Had computer science been a business of engineers and physicists right from its beginnings, I think that greater emphasis would have been put on processes rather than on programs. Processes are physical objects whereas programs are just mathematical ones – and processes are everywhere in Nature. For example, the fact that it's much more difficult to factor a large composite number than it is to multiply two large primes is somewhat reminiscent of the nuclear force that glues the protons together inside atoms: breaking a nucleus apart requires a lot of energy as well. When the unsolved problems of complexity theory are considered more systematically with a physicist's eye, maybe new laws for the physics of computing will be discovered, instead of new axioms and proofs about algorithms.

• Serge permalink June 6, 2012 10:07 am
To put it differently: trying to guess the behavior of a process by means of its program is like trying to guess somebody's life by means of their DNA code. Processes are executed by physical devices which are themselves subject to the laws of physics. That doesn't answer the P vs NP question – which I believe is undecidable. But it might explain why the world seems to behave as though P≠NP.

• June 6, 2012 10:31 am
Hi Serge, regarding the P=NP problem and your beliefs about it: the possibility that the question is undecidable has been raised, and there is a related research agenda. Unfortunately, proving definite results in this direction appears to get "stuck" even a bit earlier than proving definite results about computational-complexity hardness. (If you want to check your reasoning regarding P=NP being undecidable, one standard thing to do is to try to see the distinction from problems like 2-SAT that are known to be feasible.) You mainly raise two other issues which seem interesting. The first is about our inability to predict the evolution of a computer program (described, say, by a DNA code) when the evolution depends on unpredictable stochastic inputs. The second is about our inability to predict the evolution of a computer program (again, a DNA code is an example) when we do not know precisely what the program is. (Also, the analogy between factoring and breaking a nucleus into parts is cute, but it is not clear how useful it can be.) The distinction between (physics and engineering) processes and (mathematical) programs is not clear.

• Serge permalink June 6, 2012 11:48 am
Hi Gil, thank you very much for your interesting answer. A clear distinction between programs and processes is useful in operating systems, a process being a specific execution of a program. One program leads to infinitely many possible executions of it. When mathematicians speak of a program, I think they also mean all its potential executions. Regarding P vs NP, there might exist a polynomial algorithm for SAT, but executing it would run up against physical limits – a program too big to fit into memory, for example. Or maybe our brains just couldn't understand it, and therefore couldn't even design it. In addition to the unpredictability of the behavior of programs due to unpredictable stochastic inputs or to unknown code, in some cases that behavior could be undecidable itself. I'm thinking of the algorithm that Ken commented on in "The Traveling Salesman's Power", saying there's an already-known algorithm A accepting TSP such that if P=NP then A runs in polynomial time.
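[The algorithm A that Serge mentions is presumably of the universal-search kind introduced by Levin: dovetail over all candidate programs with exponentially decaying time budgets, and accept the first verified answer. Here is a toy Python sketch of the idea – an editorial illustration with a contrived "find a divisor" problem; the real construction enumerates all programs, which this sketch replaces by an explicit list:]

    from itertools import count

    def universal_search(instance, verify, programs):
        # Round t gives program i a budget of 2**(t - i) steps. If some
        # listed program solves the problem in polynomial time, the whole
        # search does too, up to a constant factor depending on its index.
        for t in count():
            for i, prog in enumerate(programs[: t + 1]):
                candidate = prog(instance, budget=2 ** (t - i))
                if candidate is not None and verify(instance, candidate):
                    return candidate

    def trial_division(n, budget):
        # a budgeted "program": try at most `budget` candidate divisors
        for d in range(2, min(2 + budget, n)):
            if n % d == 0:
                return d
        return None

    print(universal_search(91, lambda n, d: 1 < d < n and n % d == 0,
                           [trial_division]))   # finds the divisor 7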
24. June 10, 2012 7:44 am
John Preskill's recent paper "Quantum computing and the entanglement frontier" touches on many issues raised in our debate. Very much recommended!

• John Sidles permalink June 10, 2012 9:30 am
Gil, please let me commend this same Preskill essay too. In it we read the following thought-provoking passage (p. 5): "A quantum computer simulating evolution … might not be easy to check with a classical computer; instead one quantum computer could be checked by another, or by doing an experiment (which is almost the same thing)." Adopting Preskill's language to express the intuition that motivates Kalai's Conjecture C (as I read it) leads us to the notion that classical computers suffice to verifiably reproduce any-and-all simulations of quantum computers, insofar as those simulations apply to feasible physical experiments. And here the notion of a feasible physical experiment is to be taken to mean, concretely, any-and-all physical systems whose Hamiltonian/Lindbladian generators are stationary. In the preceding, the stipulation "stationary" is chosen deliberately, with a view toward crafting a concrete presentation of Conjecture C that affords ample plausible scope for near-term advances in practical simulation, without definitively excluding a longer-term role for quantum computational simulation. As a colleague of mine from Brooklyn was fond of saying, such a conjecture would be "better than 'purrfect', it would be 'poifect'!" 🙂

• June 12, 2012 7:11 am
Dear John, I have similar sentiments regarding the role and scope of Conjecture C. The draft of my post had a long obituary of Conjecture C (in the form originally made), starting with: "Conjecture C, while rooted in quantum-computer skepticism, was a uniter and not a divider! It expressed our united aim to find a dividing line between the pre- and post-universal-quantum-computer eras." Following Ken's mathematical-formulations-as-cars'-engines metaphor, the following picture of me and Conjecture C was proposed.

• John Sidles permalink June 12, 2012 8:27 am
LOL … Gil, perhaps Conjecture C may yet be reborn as a phoenix arising from the ashes! 🙂

• June 12, 2012 9:03 am
Indeed, we have good reasons to give up on the parameter K(ρ), but we did raise some appealing alternative parameters. In particular, the conjecture that the depth of quantum processes is essentially bounded is interesting from both the conceptual and the technical points of view. (The idea that the emergence of iron is a counterexample is terrific, but I do not think that it is correct…)

25. June 10, 2012 11:03 am
As I am reading the Preskill paper, my thoughts are wandering to the question of examples of brute-force quantum computers. The idea is this: think of the LHC. What is it actually doing? It is trying to identify particles predicted by various models of particle physics, and it is also verifying production cross-sections of those particles. So in some sense we have models that can make predictions that are in some way computable using a classical computer, and we are building a machine that can verify that those models are accurate. So what is the LHC? Is it a machine or a brute-force quantum computer? No one is questioning that by accelerating particles and smashing them together we are generating new particles that follow some sort of function; but neither is anyone questioning that what the LHC is simulating is an earlier state of the universe (and that might be a good question to ask). Another, more accessible, potential example is found in the study of fluid dynamics. Although we have fairly good classical formulas for modeling fluid flow in several situations, the modeling of complicated turbulent systems is extraordinarily difficult, and in many cases scale models must be produced in order to measure the "real" fluid flow of the system.
Again, if we accept a quantum existence, what have we actually built with our model? We have resorted to a type of brute-force method in order to solve a real-world computational problem. As I think further about the question of QECC, I can't help but think of the similarity between the difficulty of developing QECC and the difficulty of building stable fusion reactors. In a fusion reactor the goal is to build a stable, long-lasting state of matter; invariably we can see that state as a quantum state, and the problem is similar: how do we keep the state stable so that "noise" from the environment doesn't collapse it? Once again, we are looking for a brute-force method of solving an otherwise computational problem. Freeman Dyson recently published a book review where he compared string cosmologists to natural philosophers and other "creative" thinkers. However, what he failed to recognize is that the questions being asked in those explorations do intersect with real questions in quantum computing, such as the relationship between axions and anyons, as highlighted by Wilczek [2]. This brings me to some of the more current questions in the debate surrounding SUSY and the theories that rely upon its existence. I look at the recent Straub paper [3] and see a graph with the SM as a point in a vastly larger parameter space. Although, by design, all the other potential models contain the SM as a shared common point, I can't help but think about the situation coming from the other direction, looking at all the potential models that have the SM as a common point. Although I am not a subscriber to any notion of a multiverse as envisioned by sci-fi and pop-sci writers, I am interested in this idea of other stable solutions, or perturbations of our particular stable solution. Preskill does an excellent job of highlighting the question of what can't be simulated on a quantum computer. We can't give mass to a simulation in a quantum computer; however, we know that there are several solutions out there that could be explored that do not require mass, and I think those are worth exploring.

26. June 12, 2012 2:14 am
The universe: is it noisy? Is it a quantum computer? Why not two non-interacting quantum computers? The idea of the entire universe as a huge quantum computer was mentioned in several comments (and is an item on our long agenda). Also, the universe being described by a pure quantum evolution was mentioned, and was related to Aram's second thought experiment. It feels rather uncomfortable to talk about the entire universe, or to draw conclusions from it, but let me try to make some comments.

1) The claim that the entire universe runs a pure evolution seems reasonable but not particularly useful. (There are theories suggesting otherwise which lie outside quantum mechanics.)

2) The claim that the entire universe is a huge (noiseless) quantum computer which computes its own evolution is also made quite often. Again, it is not clear how useful this point of view is, and I am not sufficiently familiar with the literature on this. The universe as a huge noiseless quantum computer can be regarded as an argument against the claim that quantum computers are inherently noisy.

3) As we noted already, quantum computers are based on local operations, and therefore the states that can be reached by quantum computers are a tiny part of all quantum states. For example, a state obtained by applying a generic unitary operator is unfeasible. (In our off-line discussions we raised the question of whether such non-local states appear in nature.)
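[A rough counting argument – an editorial addition, not from the comment – makes "tiny part" precise. A circuit built from $g$ gates, each drawn from a fixed universal set $G$ and acting on at most two of $n$ qubits, can be specified in at most $(|G|\,n^2)^g$ ways, while covering the unit sphere of the $2^n$-dimensional state space to accuracy $\varepsilon$ requires on the order of $(1/\varepsilon)^{2^{n+1}}$ balls:
\[ \#\{\text{poly-size circuits}\} \;\le\; (|G|\,n^{2})^{g} \;=\; 2^{O(\mathrm{poly}(n)\log n)} \;\ll\; (1/\varepsilon)^{2^{n+1}} \;\approx\; \#\{\varepsilon\text{-balls covering the state space}\}. \]
So for $g = \mathrm{poly}(n)$, almost all states – including the images of generic unitaries – are not even approximately reachable by feasible circuits.]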
4) An appealing possibility (in my view) for our universe is that of two (or several) non-interacting (or, more precisely, extremely weakly interacting) quantum computers. We can have on the same Hilbert space two different independent tensor-product structures, so that every state is a superposition of two states, each described by one of the two quantum computers. In this case, states achievable by one quantum computer will be nearly orthogonal to states achievable by the other. (This possibility does not rely on the hypothesis of no quantum error-correction, although it will be "easier" for the two quantum computers to be unable to interact when there is no quantum error-correction around.)

5) The idea of the universe as a quantum computer which runs quantum error-correction is used in the paper "Black holes as mirrors: quantum information in random subsystems" by Hayden and Preskill. From what I understand, in this paper certain quantum states in a black hole are required to behave like states obtained by generic unitary evolution, and since such states are infeasible, states with similar properties arising from quantum error-correction are proposed instead. It will be interesting to examine whether Hayden and Preskill's idea can work with quantum error-correction replaced by the two-non-interacting-quantum-computers alternative.

27. ramirez permalink June 12, 2012 10:51 am
Usually the Chinese people write "peoples" as a plural when "people" is already plural. The same mistake was made in Mao Tse-tung's biography. We are accustomed to take somebody else's mistakes as truthful. "Weylan" means labor-camp woman: Mao's mother, and the Bolsheviks' holy icon that represents the mother nation of the truthful patriots. Karl Marx was a German Jew who wrote "Das Kapital"; Einstein was a German Jew also. Both theories shocked the world with conjectures on human quality recognition and equal distribution of income, while the occidental countries constructed their kingdoms based on slavery and human degradation, arguing that they were doing good to humanity. Why can't two supercomputers be enabled to work together? Their programmers keep the security codes of the so-called Star Wars, where the code couldn't be cracked, hijacked, or erased, in order to deviate their commanding source and get rid of them in case of a confrontation. What is the problem with the quintic equation being solved by radicals? That we do not have an exact number for the square root of 2 or 1. All the operators are built on hertzian operations: how fast an electric current travels through a conductor of logic gates, flipping them to zeros or ones. The quantum bit here has been recorded in a different wavelength, not in a different code source; this wavelength is exclusive to the Pentagon or the Kremlin for operating their military satellites. It is something like the chess board: it does have two parts that interact with each other to find the ponderation of the code encrypted in each memory stack. However, the quantum bit presents a conflict, where the antimatter is present as an antiquark in a wavelength. Einstein's equation E=mc² caused international mockery and hysteria among the mathematicians and physicists. Why? Gödel's inability to solve the quintic equation was solved upon a logic aberration.
C = the constant of the speed of light; it is the maximum speed of light in an empty space, so how are you going to accelerate faster than the speed of light to get c²? Somehow the universe is noisy, because they found sounds of exploding stars, and these shock waves travel faster than the speed of light. It is what is called quanta, something like ether, or antimatter (the Micamocka chocolate chip); it is the same antimatter quantum bit that the Tevatron is looking for in the Large Hadron Collider, and it is obtained in a Higgs equation through a massive collision of particles, where the expansion wave has to be similar to a supernova star; however, they do not have the expected results. This event should create a time-space distortion where two or more atoms are trying to occupy the same dimensional time-space; this is called atomic fission and is found in radioactive materials that eat the surrounding material (Chernobyl). There is an angle deviation in the equations (Bishop) that acts as a counterweight to the atom's spin when it reaches c². That factor is what is called gravitational spin. The negro playing chess with the rabbi is symbolic of the Ark of the Alliance, but that does not make them geniuses like you say. Bobby Fischer, Karpov, Kasparov and many others work on an equilibrium equation where any move of a knight changes the whole equation algorithmically. Man's visual field is 20-20 while the horse's is greater, 30-30, so this difference gives you a linear regression. Check: once you are the king, your place is the "Ara", Aramaic. That becomes a black hole to the gravitational field when you are out of bounds. The quantum bit (antiquark) is present before the integration of the mass hertzian wave. Einstein used 8 times the radius of the speed of a light emitter to create the gravity field where you can encrypt any antimatter code. The Megabucks trick, the National Lotto, and other crap games. The tower J is the Joker's Club; whose club is the Tower B?

28. ramirez permalink June 18, 2012 7:15 pm
Heisenberg's uncertainty is about how sure you are of hitting the nucleus of an atom in a chain reaction if you cannot come back to the same place you left when you went up to a quintic polynomial, when it involves exponentials on c². The radicals are affected inversely.

29. June 11, 2013 12:59 am
I gave a talk at the HUJI CS theory seminar on matters related to my conjectures and the debate, and there were several interesting comments by Dorit Aharonov, Michael Ben-Or, Nadav Katz, and Steve Wiesner. Dorit suggested that experimental cat states with a huge number of qubits are counterexamples to the conjecture on bounded-depth computation. This is a good point!! I should certainly look at it.

30. January 23, 2014 9:13 am
One thing I never explained is why I considered Aram and Steve's example a counterexample to my Conjecture C. The setting of Conjecture C was to find limitations on states achieved by noisy quantum computers with realistic noise models. The prior assumption you need to make is that the noise on gates is of arbitrary nature. (And, in fact, for my full set of conjectures you need to assume that information leaks on gated qubits/qudits are positively correlated.) Aram and Steve had two examples. The first is based on qudits. This is an interesting example, and certainly my Conjecture C should extend to qudits. But in Aram and Steve's example the noise on gates is not of a general nature but rather of a very structured nature.
So this does not apply to the right extension of Conjecture C to qudits, although it does impose an interesting condition on "censorship conjectures." The second, qubit example is more convincing. (Ironically, it is quite similar to an example I proposed myself in 2007.) Aram and Steve proposed a pure state which seems easy to approximate, yet for which my entropic parameter is exponential. What happens for mixed states which represent realistic approximations of this state? If the parameter is exponential for them, this is a counterexample to my conjecture. If it is not, it shows that the entropic parameter I defined is seriously flawed. (It will be interesting to know which possibility is correct, but in both cases I regarded my original entropy-based parameter as inappropriate.)
Modern Physics 342

References:
1. Modern Physics by Kenneth S. Krane, 2nd Ed., John Wiley & Sons, Inc.
2. Concepts of Modern Physics by A. Beiser, 6th Ed. (2002), McGraw-Hill.
3. Modern Physics for Scientists and Engineers by J. Taylor, C. Zafiratos and M. Dubson, 2nd Ed., 2003.
Chapters: 5 (Revision), 6 (6.4), 7, 8, 10, 11 and 12.

Ch. 5 (Revision): The Schrödinger Equation

Schrödinger equation requirements:
1. Conservation of energy is necessary: kinetic energy + potential energy = total energy. The kinetic energy $K$ is conveniently given by $K = p^2/2m$, where $p = mv$ is the momentum.
2. Consistency with the de Broglie hypothesis: $p = h/\lambda = \hbar k$, where $\lambda$ and $k$ are, respectively, the wavelength and the wave number.
3. Validity of the equation: the solution must be valid everywhere, single-valued, and linear. By linear we mean that the equation must allow de Broglie waves to superimpose properly. To make sure that the solution is continuous, its derivative must have a value everywhere.

The time-independent Schrödinger equation:
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + U(x)\,\psi(x) = E\,\psi(x). \]

Probability, normalization and averages. The probability density is $P(x) = |\psi(x)|^2$, which gives the probability of finding the particle in the interval $dx$ as $|\psi|^2\,dx$; the probability of finding the particle between $x_1$ and $x_2$ is $\int_{x_1}^{x_2}|\psi|^2\,dx$. By normalization we mean that the total probability over all space is 1: $\int_{-\infty}^{\infty}|\psi|^2\,dx = 1$. For a normalized wave function, the mean (expectation) value of $x$ is $\langle x\rangle = \int x\,|\psi|^2\,dx$.

Applications. The free particle (a particle moving with no forces acting on it): $U(x)=$ constant $=0$ (arbitrarily) everywhere, and the solution of the differential equation has the form
\[ \psi(x) = A\sin(kx) + B\cos(kx). \tag{8} \]

Particle in a one-dimensional box. Finding $A$: $\psi(0) = A\sin(0) + B\cos(0) = 0$ requires $B = 0$, so
\[ \psi(x) = A\sin(kx). \tag{9} \]
$\psi(L) = 0$ then gives $A\sin(kL) = 0$; since $A \ne 0$, $\sin(kL) = 0$ and $kL = n\pi$. The normalization condition gives
\[ A = \sqrt{2/L}. \tag{10} \]
The wave function is therefore
\[ \psi_n(x) = \sqrt{\tfrac{2}{L}}\,\sin\!\Big(\tfrac{n\pi x}{L}\Big), \tag{11} \]
and, using (8) and (10) together, the energy is
\[ E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}. \tag{12} \]
The ground-state (lowest) energy $E_0$ is obtained with $n = 1$, and the allowed energies for this particle are $E_n = n^2E_0$. [Figure: the wave function $\psi(x)$ sketched for $n=1$ and $n=2$.]

Example 5.2 (p. 149). An electron is trapped in a one-dimensional region of length $1\times10^{-10}$ m. (a) How much energy must be supplied to excite the electron from the ground state to the first excited state? (b) In the ground state, what is the probability of finding the electron in the region from $0.09\times10^{-10}$ m to $0.11\times10^{-10}$ m? (c) In the first excited state, what is the probability of finding the electron between $x = 0$ and $x = 0.25\times10^{-10}$ m?
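[A quick numerical check of Example 5.2 – an editorial Python sketch using the box length from the example, not part of the original slides. The probability integral of $(2/L)\sin^2(n\pi x/L)$ is evaluated with its antiderivative:]

    import numpy as np

    hbar = 1.054571817e-34    # J s
    m_e  = 9.1093837015e-31   # kg
    eV   = 1.602176634e-19    # J
    L    = 1e-10              # m, box length from Example 5.2

    def E(n):                 # E_n = n^2 pi^2 hbar^2 / (2 m L^2), eq. (12)
        return (n * np.pi * hbar) ** 2 / (2 * m_e * L ** 2)

    print((E(2) - E(1)) / eV)           # ~113 eV to excite n=1 -> n=2

    def P(n, x1, x2):
        # probability of finding the particle in [x1, x2] in state psi_n
        k = n * np.pi / L
        F = lambda x: x / L - np.sin(2 * k * x) / (2 * k * L)
        return F(x2) - F(x1)

    print(P(1, 0.09e-10, 0.11e-10))     # ~0.0038 in the ground state
    print(P(2, 0.0, 0.25e-10))          # 0.25 in the first excited state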
Example 5.3 (p. 151). Show that the average value of $x$ is $L/2$ for a particle in a box of length $L$, independent of the quantum state. Since the wave function is $\psi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L)$ and the average value is defined by $\langle x\rangle = \int_0^L x\,|\psi_n(x)|^2\,dx$, carrying out the integral gives $\langle x\rangle = L/2$ for every $n$.

A particle in a two-dimensional box. The Schrödinger equation in two dimensions is
\[ -\frac{\hbar^2}{2m}\Big(\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2}\Big) + U(x,y)\,\psi = E\,\psi, \]
with $U(x,y) = 0$ inside the box ($0\le x\le L$, $0\le y\le L$) and $U(x,y) = \infty$ outside. The wave function $\psi(x,y)$ is written as a product of two functions in $x$ and $y$, $\psi(x,y) = f(x)g(y)$. Since $\psi(x,y)$ must be zero at the boundaries, $\psi(0,y) = \psi(L,y) = \psi(x,0) = \psi(x,L) = 0$. Therefore $A\sin(k_x\cdot0) + B\cos(k_x\cdot0) = 0$, which requires $B = 0$, and in the same way for $g(y)$. For $f(L) = 0$ and $g(L) = 0$ this requires $k_xL = n_x\pi$ and $k_yL = n_y\pi$, with $n_x, n_y = 1, 2, 3, \ldots$ To find the constant $A'$, the wave function is normalized,
\[ \int_0^L\!\!\int_0^L |\psi(x,y)|^2\,dx\,dy = 1, \]
and this integration gives $A' = 2/L$. The energy states of a particle in a two-dimensional box: substituting the wave function $\psi(x,y)$ into the Schrödinger equation, we find, after simplification,
\[ E = \frac{\pi^2\hbar^2}{2mL^2}\,(n_x^2 + n_y^2). \]

Chapter 7: The Hydrogen Atom Wave Functions

The Schrödinger equation in spherical coordinates. The Schrödinger equation in three dimensions involves the potential energy of the force between the nucleus and the electron,
\[ U(r) = -\frac{e^2}{4\pi\varepsilon_0 r}. \]
This form does not allow the wave function $\Psi$ to separate into functions of $x$, $y$ and $z$, so the whole Schrödinger equation has to be expressed in spherical coordinates $r$, $\theta$, $\varphi$: $x = r\sin\theta\cos\varphi$, $y = r\sin\theta\sin\varphi$, $z = r\cos\theta$. [Figure: Cartesian and spherical coordinates of the electron.]

Hydrogen wave functions in spherical coordinates: $\psi(r,\theta,\varphi) = R(r)\,\Theta(\theta)\,\Phi(\varphi)$, where $R(r)$ is the radial function, $\Theta(\theta)$ the polar function and $\Phi(\varphi)$ the azimuthal function. When the three differential equations in $R$, $\Theta$ and $\Phi$ are solved, the quantum numbers $l$ and $m_l$ are obtained, in addition to the principal quantum number $n$ obtained before:

n, the principal quantum number: 1, 2, 3, …
l, the angular momentum quantum number: 0, 1, 2, …, n−1
m_l, the magnetic quantum number: 0, ±1, ±2, …, ±l

The energy levels of the hydrogen atom. The allowed values of the radius around the nucleus are $r_n = n^2a_0$, where the Bohr radius $a_0$ (the radius at $n=1$) is $a_0 = 4\pi\varepsilon_0\hbar^2/(me^2) \approx 0.0529$ nm.

The radial probability density $P(r)$. The radial probability density of finding the electron at a given location is
\[ P(r) = r^2\,|R(r)|^2, \]
and the total probability of finding the electron anywhere around the nucleus is $\int P(r)\,dr$, with the limits of integration depending on the conditions of the problem.

Example 7.1. Prove that the most likely distance from the origin of an electron in the $n=2$, $l=1$ state is $4a_0$. At $n=2$, $l=1$,
\[ R_{2,1}(r) = \frac{1}{\sqrt{24}}\,a_0^{-3/2}\,\frac{r}{a_0}\,e^{-r/2a_0}, \]
so $P(r) \propto r^4e^{-r/a_0}$. The most likely distance means the most probable position; the maximum of $P(r)$ is found where the first derivative of $P(r)$ with respect to $r$ vanishes, and simplifying this condition gives $r = 4a_0$.

Example 7.2. An electron is in the $n=1$, $l=0$ state. What is the probability of finding the electron closer to the nucleus than the Bohr radius $a_0$? With $R_{1,0}(r) = 2a_0^{-3/2}e^{-r/a_0}$, the probability is
\[ P = \int_0^{a_0} r^2\,|R_{1,0}|^2\,dr = 1 - 5e^{-2} \approx 0.323, \]
so 32.3% of the time the electron is closer than one Bohr radius to the nucleus.
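[Example 7.2 can be verified symbolically. A short sympy sketch – an editorial addition, not from the slides – that reproduces the 32.3% figure:]

    import sympy as sp

    r, a0 = sp.symbols('r a0', positive=True)
    R10 = 2 * a0 ** sp.Rational(-3, 2) * sp.exp(-r / a0)   # hydrogen R_{1,0}
    P = sp.simplify(sp.integrate(R10 ** 2 * r ** 2, (r, 0, a0)))
    print(P)           # 1 - 5*exp(-2); the Bohr radius a0 cancels out
    print(float(P))    # ~0.323, i.e. 32.3%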
Angular momentum. We discussed the radial part $R(r)$ of the Schrödinger equation; in this section we discuss the angular parts. The classical angular momentum vector is given by $\vec L = \vec r\times\vec p$. During the separation of variables in the Schrödinger equation, the angular momentum quantum number $l$ was produced. The length of the angular momentum vector is
\[ |\vec L| = \sqrt{l(l+1)}\,\hbar, \]
and its z-components are
\[ L_z = m_l\hbar, \]
where $m_l$ is the magnetic quantum number $0, \pm1, \ldots, \pm l$. [Figure: the angular momentum vector components for $l=2$, $m_l = 0, \pm1, \pm2$.] The angle $\theta$ is given by $\cos\theta = L_z/|\vec L| = m_l/\sqrt{l(l+1)}$.

Intrinsic spin. The magnetic moment due to an electric current $i$ around a loop is $\mu = iA$; using $q = -e$, the charge of the electron, and $rp = L$, we get
\[ \vec\mu_L = -\frac{e}{2m}\,\vec L. \]
The negative sign indicates that $\vec\mu_L$ and $\vec L$ point in opposite directions. When the angular momentum vector $\vec L$ is inclined to the direction of the z-axis, the magnetic moment $\vec\mu_L$ has a z-component given by $\mu_z = -(e/2m)L_z = -m_l\mu_B$ (remember, $m_l = 0, \pm1, \ldots, \pm l$).

An electric dipole in a uniform and a non-uniform electric field: the electric dipole has its moment $\vec p$ rotate to align with the direction of the electric field. Two opposite dipoles in the same non-uniform electric field are affected by opposite net forces, which displace each dipole up or down according to its alignment. Magnetic dipoles in a non-uniform magnetic field are affected in the same way: when an electron has its angular momentum inclined to the magnetic field, it may move up or down according to the direction of rotation around the nucleus.

Stern-Gerlach experiment. A beam of hydrogen atoms is in the $n=2$, $l=1$ state. The beam contains equal numbers of atoms in the $m_l = -1, 0, +1$ states. When the beam passes through a region of non-uniform magnetic field, the atoms with $m_l = +1$ experience a net upward force and are deflected upward, the atoms with $m_l = -1$ are deflected downward, while the atoms with $m_l = 0$ are undeflected. After passing through the field, the beam strikes a screen where it makes a visible image:
1. When the field is off, we expect to see one image of the slit in the center of the screen.
2. When the field is on, three images of the slit were expected – one in the center, one above it ($m_l = +1$) and one below ($m_l = -1$). The number of images is the number of $m_l$ values, $2l+1 = 3$ in this example.
In the actual Stern-Gerlach experiment, a beam of silver atoms was used instead of hydrogen. With the field on, instead of observing a single image of the slit, they observed two separate images. A new quantum number is introduced to specify the electron: the spin quantum number $s$, whose projection may take the two values $m_s = \pm\tfrac12$. The magnitude of the spin vector is
\[ |\vec S| = \sqrt{s(s+1)}\,\hbar = \frac{\sqrt3}{2}\,\hbar, \]
and its component along the magnetic field direction is
\[ S_z = m_s\hbar = \pm\frac{\hbar}{2}. \]
[Figure: the experiment.]

Example 7.6. In a Stern-Gerlach type of experiment, the magnetic field varies with distance in the z direction according to a given gradient. The silver atoms travel a distance $x = 3.5$ cm through the magnet, and the most probable speed of the atoms emerging from the oven is $v_0 = 750$ m/s. Find the separation of the two beams as they leave the magnet. The mass of a silver atom is $1.8\times10^{-25}$ kg, and its magnetic moment is about 1 Bohr magneton. The force applied to the beam must first be obtained: the force is the change of the potential energy $U = -\mu_zB$ with distance z, i.e. $F_z = \mu_z\,dB/dz$. This is the vertical force due to the effect of the magnetic field on the electron's magnetic dipole. Using the law of motion at constant acceleration, $\Delta z = v_0t + \tfrac12at^2$, with zero initial vertical speed before the field acts, we can find the vertical deflection $\Delta z$ above the horizontal level of the beam: $\Delta z = \tfrac12at^2 \approx 8\times10^{-5}$ m, and the beam separation is $2\,\Delta z \approx 1.6\times10^{-4}$ m.
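[The numbers in Example 7.6 are easy to reproduce. In the following editorial Python sketch, the field gradient dBdz is an assumed value (the gradient formula did not survive in the transcript) chosen to match the quoted answer; all other inputs are from the example:]

    mu_B = 9.274e-24     # J/T, one Bohr magneton
    m    = 1.8e-25       # kg, mass of a silver atom
    v    = 750.0         # m/s, most probable speed
    x    = 3.5e-2        # m, length of the magnet

    dBdz = 1.4e3         # T/m (ASSUMED; not given in the transcript)

    F  = mu_B * dBdz     # vertical force on the magnetic dipole
    a  = F / m           # acceleration inside the magnet
    t  = x / v           # time of flight through the magnet
    dz = 0.5 * a * t**2
    print(dz, 2 * dz)    # ~8e-5 m deflection, ~1.6e-4 m beam separation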
Energy levels and spectroscopic notation. The quantum state of an electron is now described by the four quantum numbers $n, l, m_l, m_s$. For example, the ground state of hydrogen is labeled $(n, l, m_l, m_s) = (1, 0, 0, \pm\tfrac12)$, which means there are two quantum states that can be occupied by the electron: $(1,0,0,+\tfrac12)$ or $(1,0,0,-\tfrac12)$.

Degeneracy of the atomic levels. The ground-state energy level is degenerate, with two quantum states. The first excited state of the hydrogen atom has $(n,l,m_l,m_s) = (2,1,1,\pm\tfrac12), (2,1,0,\pm\tfrac12), (2,1,-1,\pm\tfrac12), (2,0,0,\pm\tfrac12)$: the degeneracy is 8, or $2n^2$, where $n$ is the principal quantum number. [Table: degenerate states after including spin – for $n=1$: $l=0$, $m_l=0$, $m_s=\pm\tfrac12$, giving 2 states; for $n=2$: $l=0$ ($m_l=0$) and $l=1$ ($m_l=0,\pm1$), each with $m_s=\pm\tfrac12$, giving 8 states.]

Spectroscopic notation. The magnetic quantum numbers $m_l$ and $m_s$ need not be mentioned unless a magnetic field is applied to the atom. In normal cases, a standard notation is used to specify the energy levels of the electron in the atom; it is also important to specify the number of electrons occupying each state. The notation depends on the value of $l$:

l:        0  1  2  3  4  5
notation: s  p  d  f  g  h

For an electron in the $n=1$ level, it is in the s state, and the level occupied by this electron is denoted $(1s^1)$, or generally $(ns^1)$.

Electronic configurations of some elements: H (1): 1s¹; He (2): 1s²; Li (3): 1s²2s¹; Be (4): 1s²2s²; B (5): 1s²2s²2p¹; C (6): 1s²2s²2p²; Na (11): 1s²2s²2p⁶3s¹; Ar (18): 1s²2s²2p⁶3s²3p⁶.

Selection rules. Transitions between energy states of the atom are governed by the condition $\Delta l = \pm1$. The transition from the 4s state to the 3p state is possible because $\Delta l = +1$, but transitions from 4s to 3s, 2s or 1s are not. In addition to this condition, there is a rule on the allowed $\Delta m_l$ differences. [Figure: energy-level diagram with columns s, p, d, f for $n = 1$–4.]

Zeeman effect. A hydrogen atom is prepared in a 2p ($l=1$) level and placed in an external uniform magnetic field B. The orbital angular-momentum magnetic moment $\vec\mu_L$ interacts with the field, and the potential energy of this interaction is
\[ U = -\vec\mu_L\cdot\vec B = m_l\,\mu_B B. \]
In addition to this there is the energy of the level itself, say $E_0$; so on turning the magnetic field on, the energy of this electron is
\[ E = E_0 + m_l\,\mu_B B. \]
Since $m_l$ in this example has the values $+1, 0, -1$, there will be 3 energy values and 3 different wavelengths emitted. [Figure: the wavelength pattern of the split lines.]

Problem 22 (p. 233). A hydrogen atom is in an excited 5g state, from which it makes a series of transitions, ending in the 1s state. Show on an energy-level diagram the sequence of transitions that can occur. Repeat the last steps if the atom begins in the 5d state. [Figures: two energy-level diagrams (1s … 5g) with the allowed cascades.]

Problem 23 (p. 233). Consider the normal Zeeman effect applied to the 3d to 2p transition. (a) Sketch an energy-level diagram that shows the splitting of the 3d and 2p levels in an external magnetic field; indicate all possible transitions from each $m_l$ state of the 3d level to each $m_l$ state of the 2p level. (b) Which transitions satisfy the $\Delta m_l = \pm1$ or 0 selection rule? (c) Show that there are only three different transition energies emitted. [Figure: 3d and 2p levels split into their $m_l$ sublevels.] Since there are three different values of $\Delta m_l$, namely $+1, 0$ and $-1$, there will be three different energies emitted. Let the energy of the 2p state at $m_l = 0$ be denoted by $E_{0p}$ and that of the 3d state at $m_l = 0$ by $E_{0d}$. The energy released in any allowed transition is
\[ \Delta E = (E_{0d} - E_{0p}) + \Delta m_l\,\mu_B B. \]
The difference $E_{0d} - E_{0p}$ is constant, while $\Delta m_l$ takes 3 different values; therefore $\Delta E$ takes only three different values.
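[A small numerical illustration of Problem 23 – an editorial Python sketch; the field strength B is an assumed illustrative value, and the unsplit photon energy is taken from the hydrogen level formula:]

    mu_B = 5.788e-5                    # eV/T, Bohr magneton
    B    = 2.0                         # T (ASSUMED illustrative field)
    E0   = 13.6 * (1/2**2 - 1/3**2)    # eV, unsplit 3d -> 2p photon energy

    for dm in (-1, 0, +1):             # the three allowed Delta(m_l) values
        print(dm, E0 + dm * mu_B * B)  # exactly three transition energies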
Many-electron atoms: the Pauli exclusion principle. It was once believed that different atoms in their ground states have all their electrons dropped down into the 1s state; this would mean they all must have the same physical properties. This is not the case, in fact. The conclusion drawn by Pauli states that no two electrons in a single atom can have the same set of quantum numbers $(n, l, m_l, m_s)$.
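[As a closing sanity check, the $2n^2$ degeneracy rule quoted earlier can be verified by brute-force enumeration of the quantum numbers – an editorial Python sketch, not from the slides:]

    # For each n, enumerate the states (l, m_l, m_s) with l = 0..n-1,
    # m_l = -l..l and m_s = +-1/2, and compare the count with 2*n^2.
    for n in range(1, 5):
        states = [(l, ml, ms)
                  for l in range(n)
                  for ml in range(-l, l + 1)
                  for ms in (+0.5, -0.5)]
        print(n, len(states), 2 * n ** 2)   # the two counts agree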
Top-quark pair production near threshold
Y. Sumino, K. Fujii, K. Hagiwara, H. Murayama, C. K. Ng
Research output: Article (peer-reviewed) · 85 citations (Scopus)

We present a novel formalism to calculate the total and the differential cross sections for heavy unstable top-quark pair production near threshold. Within the context of the nonrelativistic quark model, we introduce the running toponium width Γ(E,p) in the Schrödinger equation for the three-point Green's function that governs the tt̄ contribution to the e⁺e⁻ annihilation process. The effect of the running of the width is found to be significant in two aspects: (i) it takes account of the phase-space volume for the decay process tt̄ → bW⁺b̄W⁻ and provides a consistent framework for calculating the differential cross sections; and (ii) it reduces the widths of the low-lying resonances to considerably less than 2Γ_t(m_t²). Furthermore, the running of the width causes the total cross section to decrease significantly at c.m. energies below the first "resonance" enhancement, whereas it makes the "peak" cross section more distinct than is obtained in the fixed-toponium-width approximation. We use the two-loop-improved QCD potential in our calculation, and the α_s(m_Z) dependences of the total and differential cross sections are studied quantitatively, where α_s is defined in the MS̄ (modified minimal subtraction) scheme. We find that the correlations in the α_s and m_t measurements are opposite in the total and differential cross sections, and simultaneous measurements would lead to an accurate determination of both parameters.

Journal: Physical Review D
Publication status: Published - 1 January 1993
ASJC Scopus subject areas: Nuclear and High Energy Physics; Physics and Astronomy (miscellaneous)
TY - JOUR AB - Alignment of OCS, CS2, and I2 molecules embedded in helium nanodroplets is measured as a function of time following rotational excitation by a nonresonant, comparatively weak ps laser pulse. The distinct peaks in the power spectra, obtained by Fourier analysis, are used to determine the rotational, B, and centrifugal distortion, D, constants. For OCS, B and D match the values known from IR spectroscopy. For CS2 and I2, they are the first experimental results reported. The alignment dynamics calculated from the gas-phase rotational Schrödinger equation, using the experimental in-droplet B and D values, agree in detail with the measurement for all three molecules. The rotational spectroscopy technique for molecules in helium droplets introduced here should apply to a range of molecules and complexes. AU - Chatterley, Adam S. AU - Christiansen, Lars AU - Schouder, Constant A. AU - Jørgensen, Anders V. AU - Shepperson, Benjamin AU - Cherepanov, Igor AU - Bighin, Giacomo AU - Zillich, Robert E. AU - Lemeshko, Mikhail AU - Stapelfeldt, Henrik ID - 8170 IS - 1 JF - Physical Review Letters SN - 00319007 TI - Rotational coherence spectroscopy of molecules in Helium nanodroplets: Reconciling the time and the frequency domains VL - 125 ER - TY - COMP AU - Hauschild, Robert ID - 8181 TI - Amplified centrosomes in dendritic cells promote immune cell effector functions ER - TY - CONF AB - Numerous methods have been proposed for probabilistic generative modelling of 3D objects. However, none of these is able to produce textured objects, which renders them of limited use for practical tasks. In this work, we present the first generative model of textured 3D meshes. Training such a model would traditionally require a large dataset of textured meshes, but unfortunately, existing datasets of meshes lack detailed textures. We instead propose a new training methodology that allows learning from collections of 2D images without any 3D information. To do so, we train our model to explain a distribution of images by modelling each image as a 3D foreground object placed in front of a 2D background. Thus, it learns to generate meshes that when rendered, produce images similar to those in its training set. A well-known problem when generating meshes with deep networks is the emergence of self-intersections, which are problematic for many use-cases. As a second contribution we therefore introduce a new generation process for 3D meshes that guarantees no self-intersections arise, based on the physical intuition that faces should push one another out of the way as they move. We conduct extensive experiments on our approach, reporting quantitative and qualitative results on both synthetic data and natural images. These show our method successfully learns to generate plausible and diverse textured 3D samples for five challenging object classes. AU - Henderson, Paul M AU - Tsiminaki, Vagia AU - Lampert, Christoph ID - 8186 T2 - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition TI - Leveraging 2D data to learn textured 3D mesh generation ER - TY - CONF AB - Fixed-point arithmetic is a popular alternative to floating-point arithmetic on embedded systems. Existing work on the verification of fixed-point programs relies on custom formalizations of fixed-point arithmetic, which makes it hard to compare the described techniques or reuse the implementations. In this paper, we address this issue by proposing and formalizing an SMT theory of fixed-point arithmetic. 
We present an intuitive yet comprehensive syntax of the fixed-point theory, and provide formal semantics for it based on rational arithmetic. We also describe two decision procedures for this theory: one based on the theory of bit-vectors and the other on the theory of reals. We implement the two decision procedures, and evaluate our implementations using existing mature SMT solvers on a benchmark suite we created. Finally, we perform a case study of using the theory we propose to verify properties of quantized neural networks. AU - Baranowski, Marek AU - He, Shaobo AU - Lechner, Mathias AU - Nguyen, Thanh Son AU - Rakamarić, Zvonimir ID - 8194 SN - 03029743 T2 - Automated Reasoning TI - An SMT theory of fixed-point arithmetic VL - 12166 ER - TY - CONF AB - This paper presents a foundation for refining concurrent programs with structured control flow. The verification problem is decomposed into subproblems that aid interactive program development, proof reuse, and automation. The formalization in this paper is the basis of a new design and implementation of the Civl verifier. AU - Kragl, Bernhard AU - Qadeer, Shaz AU - Henzinger, Thomas A ID - 8195 SN - 0302-9743 T2 - Computer Aided Verification TI - Refinement for structured concurrent programs VL - 12224 ER - TY - JOUR AB - This paper aims to obtain a strong convergence result for a Douglas–Rachford splitting method with inertial extrapolation step for finding a zero of the sum of two set-valued maximal monotone operators without any further assumption of uniform monotonicity on any of the involved maximal monotone operators. Furthermore, our proposed method is easy to implement and the inertial factor in our proposed method is a natural choice. Our method of proof is of independent interest. Finally, some numerical implementations are given to confirm the theoretical analysis. AU - Shehu, Yekini AU - Dong, Qiao-Li AU - Liu, Lu-Lu AU - Yao, Jen-Chih ID - 8196 JF - Optimization and Engineering SN - 1389-4420 TI - New strong convergence method for the sum of two maximal monotone operators ER - TY - GEN AB - In this work, we investigate how the critical driving amplitude at the Floquet MBL-to-ergodic phase transition differs between smooth and non-smooth driving over a wide range of driving frequencies. To this end, we study numerically a disordered spin-1/2 chain which is periodically driven by a sine or a square-wave drive, respectively. In both cases, the critical driving amplitude increases monotonically with the frequency, and at large frequencies, it is identical for the two drives in the appropriate normalization. However, at low and intermediate frequencies the critical amplitude of the square-wave drive depends strongly on the frequency, while the one of the cosine drive is almost constant in a wide frequency range. By analyzing the density of drive-induced resonance in a Fourier space perspective, we conclude that this difference is due to resonances induced by the higher harmonics which are present (absent) in the Fourier spectrum of the square-wave (sine) drive. Furthermore, we suggest a numerically efficient method to estimate the frequency dependence of the critical driving amplitudes for different drives, based on measuring the density of drive-induced resonances. AU - Diringer, Asaf A. 
AU - Gulden, Tobias ID - 8198 T2 - arXiv TI - Robustness of the Floquet many-body localized phase in the presence of a smooth and a non-smooth drive ER - TY - JOUR AB - We investigate a mechanism to transiently stabilize topological phenomena in long-lived quasi-steady states of isolated quantum many-body systems driven at low frequencies. We obtain an analytical bound for the lifetime of the quasi-steady states which is exponentially large in the inverse driving frequency. Within this lifetime, the quasi-steady state is characterized by maximum entropy subject to the constraint of fixed number of particles in the system's Floquet-Bloch bands. In such a state, all the non-universal properties of these bands are washed out, hence only the topological properties persist. AU - Gulden, Tobias AU - Berg, Erez AU - Rudner, Mark Spencer AU - Lindner, Netanel ID - 8199 JF - SciPost Physics SN - 2542-4653 TI - Exponentially long lifetime of universal quasi-steady states in topological Floquet pumps VL - 9 ER - TY - JOUR AB - Using inelastic cotunneling spectroscopy we observe a zero field splitting within the spin triplet manifold of Ge hut wire quantum dots. The states with spin ±1 in the confinement direction are energetically favored by up to 55 μeV compared to the spin 0 triplet state because of the strong spin–orbit coupling. The reported effect should be observable in a broad class of strongly confined hole quantum-dot systems and might need to be considered when operating hole spin qubits. AU - Katsaros, Georgios AU - Kukucka, Josip AU - Vukušić, Lada AU - Watzinger, Hannes AU - Gao, Fei AU - Wang, Ting AU - Zhang, Jian-Jun AU - Held, Karsten ID - 8203 IS - 7 JF - Nano Letters SN - 1530-6984 TI - Zero field splitting of heavy-hole states in quantum dots VL - 20 ER - TY - JOUR AB - We consider the following setting: suppose that we are given a manifold M in Rd with positive reach. Moreover assume that we have an embedded simplical complex A without boundary, whose vertex set lies on the manifold, is sufficiently dense and such that all simplices in A have sufficient quality. We prove that if, locally, interiors of the projection of the simplices onto the tangent space do not intersect, then A is a triangulation of the manifold, that is, they are homeomorphic. AU - Boissonnat, Jean-Daniel AU - Dyer, Ramsay AU - Ghosh, Arijit AU - Lieutier, Andre AU - Wintraecken, Mathijs ID - 8248 JF - Discrete and Computational Geometry SN - 0179-5376 TI - Local conditions for triangulating submanifolds of Euclidean space ER - TY - JOUR AB - Antibiotics that interfere with translation, when combined, interact in diverse and difficult-to-predict ways. Here, we explain these interactions by “translation bottlenecks”: points in the translation cycle where antibiotics block ribosomal progression. To elucidate the underlying mechanisms of drug interactions between translation inhibitors, we generate translation bottlenecks genetically using inducible control of translation factors that regulate well-defined translation cycle steps. These perturbations accurately mimic antibiotic action and drug interactions, supporting that the interplay of different translation bottlenecks causes these interactions. We further show that growth laws, combined with drug uptake and binding kinetics, enable the direct prediction of a large fraction of observed interactions, yet fail to predict suppression. 
However, varying two translation bottlenecks simultaneously supports that dense traffic of ribosomes and competition for translation factors account for the previously unexplained suppression. These results highlight the importance of “continuous epistasis” in bacterial physiology. AU - Kavcic, Bor AU - Tkačik, Gašper AU - Bollenbach, Tobias ID - 8250 JF - Nature Communications SN - 2041-1723 TI - Mechanisms of drug interactions between translation-inhibiting antibiotics VL - 11 ER - TY - JOUR AB - Dentate gyrus granule cells (GCs) connect the entorhinal cortex to the hippocampal CA3 region, but how they process spatial information remains enigmatic. To examine the role of GCs in spatial coding, we measured excitatory postsynaptic potentials (EPSPs) and action potentials (APs) in head-fixed mice running on a linear belt. Intracellular recording from morphologically identified GCs revealed that most cells were active, but activity level varied over a wide range. Whereas only ∼5% of GCs showed spatially tuned spiking, ∼50% received spatially tuned input. Thus, the GC population broadly encodes spatial information, but only a subset relays this information to the CA3 network. Fourier analysis indicated that GCs received conjunctive place-grid-like synaptic input, suggesting code conversion in single neurons. GC firing was correlated with dendritic complexity and intrinsic excitability, but not extrinsic excitatory input or dendritic cable properties. Thus, functional maturation may control input-output transformation and spatial code conversion. AU - Zhang, Xiaomin AU - Schlögl, Alois AU - Jonas, Peter M ID - 8261 IS - 6 JF - Neuron SN - 0896-6273 TI - Selective routing of spatial information flow from input to output in hippocampal granule cells VL - 107 ER - TY - JOUR AB - Modern scientific instruments produce vast amounts of data, which can overwhelm the processing ability of computer systems. Lossy compression of data is an intriguing solution, but comes with its own drawbacks, such as potential signal loss, and the need for careful optimization of the compression ratio. In this work, we focus on a setting where this problem is especially acute: compressive sensing frameworks for interferometry and medical imaging. We ask the following question: can the precision of the data representation be lowered for all inputs, with recovery guarantees and practical performance Our first contribution is a theoretical analysis of the normalized Iterative Hard Thresholding (IHT) algorithm when all input data, meaning both the measurement matrix and the observation vector are quantized aggressively. We present a variant of low precision normalized IHT that, under mild conditions, can still provide recovery guarantees. The second contribution is the application of our quantization framework to radio astronomy and magnetic resonance imaging. We show that lowering the precision of the data can significantly accelerate image recovery. We evaluate our approach on telescope data and samples of brain images using CPU and FPGA implementations achieving up to a 9x speedup with negligible loss of recovery quality. 
AU - Gurel, Nezihe Merve AU - Kara, Kaan AU - Stojanov, Alen AU - Smith, Tyler AU - Lemmin, Thomas AU - Alistarh, Dan-Adrian AU - Puschel, Markus AU - Zhang, Ce ID - 8268 JF - IEEE Transactions on Signal Processing SN - 1053587X TI - Compressive sensing using iterative hard thresholding with low precision data representation: Theory and applications VL - 68 ER - TY - CONF AB - We study turn-based stochastic zero-sum games with lexicographic preferences over reachability and safety objectives. Stochastic games are standard models in control, verification, and synthesis of stochastic reactive systems that exhibit both randomness as well as angelic and demonic non-determinism. Lexicographic order allows to consider multiple objectives with a strict preference order over the satisfaction of the objectives. To the best of our knowledge, stochastic games with lexicographic objectives have not been studied before. We establish determinacy of such games and present strategy and computational complexity results. For strategy complexity, we show that lexicographically optimal strategies exist that are deterministic and memory is only required to remember the already satisfied and violated objectives. For a constant number of objectives, we show that the relevant decision problem is in NP∩coNP , matching the current known bound for single objectives; and in general the decision problem is PSPACE -hard and can be solved in NEXPTIME∩coNEXPTIME . We present an algorithm that computes the lexicographically optimal strategies via a reduction to computation of optimal strategies in a sequence of single-objectives games. We have implemented our algorithm and report experimental results on various case studies. AU - Chatterjee, Krishnendu AU - Katoen, Joost P AU - Weininger, Maximilian AU - Winkler, Tobias ID - 8272 SN - 03029743 T2 - International Conference on Computer Aided Verification TI - Stochastic games with lexicographic reachability-safety objectives VL - 12225 ER - TY - JOUR AB - Drought and salt stress are the main environmental cues affecting the survival, development, distribution, and yield of crops worldwide. MYB transcription factors play a crucial role in plants’ biological processes, but the function of pineapple MYB genes is still obscure. In this study, one of the pineapple MYB transcription factors, AcoMYB4, was isolated and characterized. The results showed that AcoMYB4 is localized in the cell nucleus, and its expression is induced by low temperature, drought, salt stress, and hormonal stimulation, especially by abscisic acid (ABA). Overexpression of AcoMYB4 in rice and Arabidopsis enhanced plant sensitivity to osmotic stress; it led to an increase in the number stomata on leaf surfaces and lower germination rate under salt and drought stress. Furthermore, in AcoMYB4 OE lines, the membrane oxidation index, free proline, and soluble sugar contents were decreased. In contrast, electrolyte leakage and malondialdehyde (MDA) content increased significantly due to membrane injury, indicating higher sensitivity to drought and salinity stresses. Besides the above, both the expression level and activities of several antioxidant enzymes were decreased, indicating lower antioxidant activity in AcoMYB4 transgenic plants. Moreover, under osmotic stress, overexpression of AcoMYB4 inhibited ABA biosynthesis through a decrease in the transcription of genes responsible for ABA synthesis (ABA1 and ABA2) and ABA signal transduction factor ABI5. 
These results suggest that AcoMYB4 negatively regulates the response to osmotic stress by attenuating cellular ABA biosynthesis and signal transduction pathways. AU - Chen, Huihuang AU - Lai, Linyi AU - Li, Lanxin AU - Liu, Liping AU - Jakada, Bello Hassan AU - Huang, Youmei AU - He, Qing AU - Chai, Mengnan AU - Niu, Xiaoping AU - Qin, Yuan ID - 8283 IS - 16 JF - International Journal of Molecular Sciences SN - 16616596 TI - AcoMYB4, an Ananas comosus L. MYB transcription factor, functions in osmotic stress through negative regulation of ABA signaling VL - 21 ER - TY - JOUR AB - Multiple resistance and pH adaptation (Mrp) antiporters are multi-subunit Na+ (or K+)/H+ exchangers representing an ancestor of many essential redox-driven proton pumps, such as respiratory complex I. The mechanism of coupling between ion or electron transfer and proton translocation in this large protein family is unknown. Here, we present the structure of the Mrp complex from Anoxybacillus flavithermus solved by cryo-EM at 3.0 Å resolution. It is a dimer of seven-subunit protomers with 50 trans-membrane helices each. Surface charge distribution within each monomer is remarkably asymmetric, revealing probable proton and sodium translocation pathways. On the basis of the structure, we propose a mechanism in which the coupling between sodium and proton translocation is facilitated by a series of electrostatic interactions between a cation and key charged residues. This mechanism is likely to be applicable to the entire family of redox-driven proton pumps, where electron transfer to substrates replaces cation movements. AU - Steiner, Julia AU - Sazanov, Leonid A ID - 8284 JF - eLife TI - Structure and mechanism of the Mrp complex, an ancient cation/proton antiporter VL - 9 ER - TY - JOUR AB - We demonstrate the utility of optical-cavity-generated spin-squeezed states in free-space atomic fountain clocks using ensembles of 390 000 87Rb atoms. Fluorescence imaging, correlated to an initial quantum nondemolition measurement, is used for population spectroscopy after the atoms are released from a confining lattice. For a free-fall time of 4 ms, we resolve a single-shot phase sensitivity of 814(61) microradians, which is 5.8(0.6) decibels (dB) below the quantum projection limit. We observe that this squeezing is preserved as the cloud expands to a roughly 200 μm radius and falls roughly 300 μm in free space. Ramsey spectroscopy with 240 000 atoms at a 3.6 ms Ramsey time results in a single-shot fractional frequency stability of 8.4(0.2)×10^−12, 3.8(0.2) dB below the quantum projection limit. The sensitivity and stability are limited by the technical noise in the fluorescence detection protocol and the microwave system, respectively. AU - Malia, Benjamin K. AU - Martínez-Rincón, Julián AU - Wu, Yunfan AU - Hosten, Onur AU - Kasevich, Mark A. ID - 8285 IS - 4 JF - Physical Review Letters SN - 00319007 TI - Free space Ramsey spectroscopy in rubidium with noise below the quantum projection limit VL - 125 ER - TY - CONF AB - We consider the following dynamic load-balancing process: given an underlying graph G with n nodes, in each step t ≥ 0, one unit of load is created and placed at a randomly chosen graph node. In the same step, the chosen node picks a random neighbor, and the two nodes balance their loads by averaging them. We are interested in the expected gap between the minimum and maximum loads at nodes as the process progresses, and in its dependence on n and on the graph structure.
Variants of the above graphical balanced allocation process have been studied previously by Peres, Talwar, and Wieder [Peres et al., 2015], and by Sauerwald and Sun [Sauerwald and Sun, 2015]. These authors left open the question of characterizing the gap in the case of cycle graphs in the dynamic case, where weights are created during the algorithm’s execution. For this case, the only known upper bound is 𝒪(n log n), following from a majorization argument due to [Peres et al., 2015], which analyzes a related graphical allocation process. In this paper, we provide an upper bound of 𝒪(√n log n) on the expected gap of the above process for cycles of length n. We introduce a new potential analysis technique, which enables us to bound the difference in load between k-hop neighbors on the cycle, for any k ≤ n/2. We complement this with a "gap covering" argument, which bounds the maximum value of the gap by bounding its value across all possible subsets of a certain structure, and recursively bounding the gaps within each subset. We provide analytical and experimental evidence that our upper bound on the gap is tight up to a logarithmic factor. AU - Alistarh, Dan-Adrian AU - Nadiradze, Giorgi AU - Sabour, Amirmojtaba ID - 8286 SN - 18688969 T2 - 47th International Colloquium on Automata, Languages, and Programming TI - Dynamic averaging load balancing on cycles VL - 168 ER - TY - CONF AB - Reachability analysis aims at identifying states reachable by a system within a given time horizon. This task is known to be computationally expensive for linear hybrid systems. Reachability analysis works by iteratively applying continuous and discrete post operators to compute states reachable according to continuous and discrete dynamics, respectively. In this paper, we enhance both of these operators and make sure that most of the involved computations are performed in low-dimensional state space. In particular, we improve the continuous-post operator by performing computations in high-dimensional state space only for time intervals relevant for the subsequent application of the discrete-post operator. Furthermore, the new discrete-post operator performs low-dimensional computations by leveraging the structure of the guard and assignment of a considered transition. We illustrate the potential of our approach on a number of challenging benchmarks. AU - Bogomolov, Sergiy AU - Forets, Marcelo AU - Frehse, Goran AU - Potomkin, Kostiantyn AU - Schilling, Christian ID - 8287 KW - Reachability KW - Hybrid systems KW - Decomposition T2 - Proceedings of the International Conference on Embedded Software TI - Reachability analysis of linear hybrid systems via block decomposition ER - TY - COMP AB - Automated root growth analysis and tracking of root tips. AU - Hauschild, Robert ID - 8294 TI - RGtracker ER -
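The averaging process in the load-balancing abstract above is simple enough to simulate directly. The following sketch (an illustration written for this compilation, not code from the paper; parameter choices are arbitrary) runs the process on a cycle and reports the final gap between the maximum and minimum loads:

import random

def simulate_gap(n: int, steps: int, seed: int = 0) -> float:
    """Dynamic averaging load balancing on a cycle of n nodes.

    Each step: one unit of load lands on a uniformly random node,
    then that node averages its load with a uniformly random neighbor.
    Returns the final gap max(load) - min(load).
    """
    rng = random.Random(seed)
    load = [0.0] * n
    for _ in range(steps):
        i = rng.randrange(n)
        load[i] += 1.0                      # one unit of load is created
        j = (i + rng.choice((-1, 1))) % n   # random neighbor on the cycle
        avg = (load[i] + load[j]) / 2.0     # the two nodes average their loads
        load[i] = load[j] = avg
    return max(load) - min(load)

if __name__ == "__main__":
    for n in (16, 64, 256):
        print(n, simulate_gap(n, steps=100 * n))

Plotting the resulting gaps against n is a quick way to eyeball how the growth compares with the 𝒪(√n log n) bound claimed in the abstract.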
DeepTFactor predicts transcription factors
A joint research team from KAIST and UCSD has developed a deep neural network named DeepTFactor that predicts transcription factors from protein sequences. DeepTFactor will serve as a useful tool for understanding the regulatory ...

Artificial intelligence solves Schrödinger's equation
A team of scientists at Freie Universität Berlin has developed an artificial intelligence (AI) method for calculating the ground state of the Schrödinger equation in quantum chemistry. The goal of quantum chemistry is to ...

Natural fluid injections triggered Cahuilla earthquake swarm
A naturally occurring injection of underground fluids drove a four-year-long earthquake swarm near Cahuilla, California, according to a new seismological study that utilizes advances in earthquake monitoring with a machine-learning ...

Listening for right whales in the ocean deeps
Scientists are using algorithms and machine learning to listen for the distinct calls of one of the world's most endangered animals in a bid to identify where they are and shield them from one of their greatest threats.

Putting artificial intelligence to work in the lab
An Australian-German collaboration has demonstrated fully autonomous SPM operation, applying artificial intelligence and deep learning to remove the need for constant human supervision.
PRL 113, 158902 (2014)

Khatua, Bansal, and Shahar Reply: The preceding Comment [1] on our Letter [2] does not discuss any technical or mathematical aspects of the experiment or analysis but remarks on interpretational issues. These remarks are in turn based on the critique of Feynman’s thought experiment itself [3]: “a thorough reappraisal of Feynman’s arguments” is deemed necessary [1]. The fundamental objection that Tiwari [1] raises is based on the incorrect assumption that, “In a strictly quantum domain of the double-slit experiment which-path information probed by magnetic field would destroy the interference phenomenon” [emphasis added]. A static magnetic field does not collapse the wave function. Secondly, a distinction must be made between the “Aharonov-Bohm [AB] effect” and the quantum-mechanical “Aharonov-Bohm phase.” We have consistently used only the latter phrase in our Letter [2]. Absence of the magnetic field (with nonzero vector potential) is a sufficient but not a necessary condition for an electron to pick up the Aharonov-Bohm phase [4]. Quantum mechanically, the only way the magnetic field can act on an electron (ignoring spin) is through the minimal coupling of the charge to the vector potential, and the consequence of this is that its wave function acquires an additional (Aharonov-Bohm) phase. Furthermore, for any physically observable phenomenon the vector potential comes within a line integral (as a necessary requirement of gauge invariance) [4]. Hence, what is important is the enclosed flux. That identical answers are obtained from the quantum-mechanical calculations based on the experimental situations in Figs. 15–7 and 15–8 in Ref. [3], as long as the enclosed flux is the same, is thus neither “fortuitous” nor “puzzling.” Figure 15–7 of Ref. [3] describes the Aharonov-Bohm effect that further reveals the nontrivial topological nature of the vector potential, but the electron acquires an Aharonov-Bohm phase for both Figs. 15–7 and 15–8 of Ref. [3]. The preceding Comment [1] specifically discusses three sentences from our Letter [2].
The sentences in the Abstract, (i) “He shows that the addition of an AB phase is equivalent to shifting the zero-field wave interference pattern by an angle expected from the Lorentz force calculation for classical particles,” and the introductory paragraph, (ii) “An interplay of these two distinct phenomena, beautiful in its simplicity and pedagogical richness, occurs when the electrons in the Young’s double-slit experiment are also subjected to weak magnetic field,” are claimed to be “confusing on the real import of Feynman’s thought experiment.” The sentence in the concluding paragraph, (iii) “In summary, we have experimentally illustrated the equivalence of the abstract quantum formulation of electron waves with an added topological phase and classical picture for free-space propagation of electrons under Lorentz force using the single slit diffraction experiment,” is contested as well. “The real import of Feynman’s thought experiment” is, of course, subjective. To us, as is explicitly stated in sentences (i) and (ii) and the discussion around Eq. 3 in Ref. [2], the essence of his thought experiment is the mapping of θ and B via the relationship k sin θ = (eBL/2ℏ) [2], and the fact that the same relationship is also inferred from the classical Lorentz force calculation. We agree that sentence (iii) cannot be rigorously defended, though perhaps not for the same reasons as mentioned in the preceding Comment. The stated quantum-to-classical correspondence in sentence (iii) is limited to the narrow qualitative sense of the abovementioned mapping between θ and B. In summary, we assert that a clean fully quantum-mechanical analysis based on the Schrödinger equation was sufficient to unambiguously model the results in our Letter. The experimental diffraction pattern survives the external magnetic field. As there is no inconsistency in Feynman’s argument or in the analysis or interpretation of our experiment, an appeal to speculative ideas needs to be argued for more concretely. The discussion on modular momentum [5] is thus out of context and beyond the scope of our work.

P. Khatua,¹ B. Bansal,¹ and D. Shahar²
¹Indian Institute of Science Education and Research Kolkata, Mohanpur Campus, Nadia 741252, West Bengal, India
²Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 76100, Israel

Received 12 August 2014; published 10 October 2014
DOI: 10.1103/PhysRevLett.113.158902
PACS numbers: 03.65.Vf, 42.25.Fx, 73.23.Ad

[1] S. C. Tiwari, preceding Comment, Phys. Rev. Lett. 113, 158901 (2014).
[2] P. Khatua, B. Bansal, and D. Shahar, Phys. Rev. Lett. 112, 010403 (2014).
[3] R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics (Addison-Wesley, Reading, MA, 1963), Vol. 2, Chap. 15.
[4] Y. Aharonov and D. Bohm, Phys. Rev. 115, 485 (1959).
[5] Y. Aharonov and T. Kaufherr, Phys. Rev. Lett. 92, 070404 (2004).

© 2014 American Physical Society
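As a small numerical companion to the relation quoted in the Reply, the sketch below evaluates the pattern-shift angle implied by k sin θ = eBL/2ℏ for an electron of a given kinetic energy. The constants are standard CODATA values; the field strength, length scale, and energy are illustrative stand-ins, not the parameters of the Letter:

import math

# Approximate physical constants (SI units)
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg

def ab_shift_angle(B: float, L: float, E_eV: float) -> float:
    """Deflection angle theta from k*sin(theta) = e*B*L/(2*hbar),
    with k the de Broglie wavevector of an electron of kinetic
    energy E_eV (nonrelativistic approximation)."""
    k = math.sqrt(2.0 * m_e * E_eV * e) / hbar
    return math.asin(e * B * L / (2.0 * hbar * k))

# Illustrative numbers only (not taken from the Letter):
theta = ab_shift_angle(B=1e-4, L=1e-6, E_eV=100.0)
print(f"pattern shift: {theta:.3e} rad")

For weak fields the angle is tiny and essentially linear in B, which is the qualitative content of the θ–B mapping the authors emphasize.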
Path integral formulation

The path integral formulation of quantum mechanics is a description of quantum theory which generalizes the action principle of classical mechanics. It replaces the classical notion of a single, unique trajectory for a system with a sum, or functional integral, over an infinity of possible trajectories to compute a quantum amplitude. The basic idea of the path integral formulation can be traced back to Norbert Wiener, who introduced the Wiener integral for solving problems in diffusion and Brownian motion.[1] This idea was extended to the use of the Lagrangian in quantum mechanics by P. A. M. Dirac in his 1933 paper.[2] The complete method was developed in 1948 by Richard Feynman. Some preliminaries were worked out earlier, in the course of his doctoral work under John Archibald Wheeler. The original motivation stemmed from the desire to obtain a quantum-mechanical formulation for the Wheeler–Feynman absorber theory using a Lagrangian (rather than a Hamiltonian) as a starting point. This formulation has proven crucial to the subsequent development of theoretical physics, because it is manifestly symmetric between time and space. Unlike previous methods, the path integral allows a physicist to easily change coordinates between very different canonical descriptions of the same quantum system. The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all possible random walks. For this reason path integrals were used in the study of Brownian motion and diffusion a while before they were introduced in quantum mechanics.[3]

Figure: three of the paths that contribute to the quantum amplitude for a particle moving from point A at some time t0 to point B at some other time t1.

Quantum action principle

In quantum mechanics, as in classical mechanics, the Hamiltonian is the generator of time translations. This means that the state at a slightly later time differs from the state at the current time by the result of acting with the Hamiltonian operator (multiplied by the negative imaginary unit, −i). For states with a definite energy, this is a statement of the De Broglie relation between frequency and energy, and the general relation is consistent with that plus the superposition principle.
But the Hamiltonian in classical mechanics is derived from a Lagrangian, which is a more fundamental quantity from the standpoint of special relativity. The Hamiltonian tells you how to march forward in time, but time is different in different reference frames. So the Hamiltonian is different in different frames, and this type of symmetry is not apparent in the original formulation of quantum mechanics. The Hamiltonian is a function of the position and momentum at one time, and it tells you the position and momentum a little later. The Lagrangian is a function of the position now and the position a little later (or, equivalently for infinitesimal time separations, it is a function of the position and velocity). The relation between the two is by a Legendre transform, and the condition that determines the classical equations of motion (the Euler–Lagrange equations) is that the action is an extremum. In quantum mechanics, the Legendre transform is hard to interpret, because the motion is not over a definite trajectory. So what does the Legendre transform mean? In classical mechanics, with discretization in time, \epsilon H = p(t)(q(t+\epsilon) - q(t)) - \epsilon L where p = {\partial L \over \partial \dot{q}} and the partial derivative with respect to \dot{q} holds q(t + ε) fixed. The inverse Legendre transform is \epsilon L = \epsilon p \dot{q} - \epsilon H where \dot q = {\partial H \over \partial p} and the partial derivative now is with respect to p at fixed q. In quantum mechanics, the state is a superposition of different states with different values of q, or different values of p, and the quantities p and q can be interpreted as noncommuting operators. The operator p is only definite on states that are indefinite with respect to q. So consider two states separated in time and act with the operator corresponding to the Lagrangian: e^{i( p (q(t+\epsilon) - q(t)) - \epsilon H(p,q) )} If the multiplications implicit in this formula are reinterpreted as matrix multiplications, what does this mean? It can be given a meaning as follows: The first factor is e^{-ip q(t)} If this is interpreted as doing a matrix multiplication, the sum over all states integrates over all q(t), and so it takes the Fourier transform in q(t), to change basis to p(t). That is the action on the Hilbert space – change basis to p at time t. Next comes e^{-i\epsilon H(p,q)} or evolve an infinitesimal time into the future. Finally, the last factor in this interpretation is e^{i p q(t+\epsilon)} which means change basis back to q at a later time. This is not very different from just ordinary time evolution: the H factor contains all the dynamical information – it pushes the state forward in time. The first part and the last part are just doing Fourier transforms to change to a pure q basis from an intermediate p basis. Another way of saying this is that since the Hamiltonian is naturally a function of p and q, exponentiating this quantity and changing basis from p to q at each step allows the matrix element of H to be expressed as a simple function along each path. This function is the quantum analog of the classical action. This observation is due to Paul Dirac. "...we see that the integrand in (11) must be of the form e^{iF/h} where F is a function of q_T, q_1, q_2, ..., q_m, q_t, which remains finite as h tends to zero. Let us now picture one of the intermediate qs, say q_k, as varying continuously while the other ones are fixed. Owing to the smallness of h, we shall then in general have F/h varying extremely rapidly.
This means that e^{iF/h} will vary periodically with a very high frequency about the value zero, as a result of which its integral will be practically zero. The only important part in the domain of integration of q_k is thus that for which a comparatively large variation in q_k produces only a very small variation in F. This part is the neighbourhood of a point for which F is stationary with respect to small variations in q_k. We can apply this argument to each of the variables of integration ... and obtain the result that the only important part in the domain of integration is that for which F is stationary for small variations in all intermediate qs. ... We see that F has for its classical analogue \int L\, dt, which is just the action function which classical mechanics requires to be stationary for small variations in all the intermediate qs. This shows the way in which equation (11) goes over into classical results when h becomes extremely small." Dirac (1932) op. cit., p. 69 Dirac further noted that one could square the time-evolution operator in the S representation e^{i\epsilon S} and this gives the time-evolution operator between time t and time t + 2ε. While in the H representation the quantity that is being summed over the intermediate states is an obscure matrix element, in the S representation it is reinterpreted as a quantity associated to the path. In the limit that one takes a large power of this operator, one reconstructs the full quantum evolution between two states, the early one with a fixed value of q(0) and the later one with a fixed value of q(t). The result is a sum over paths with a phase which is the quantum action. Crucially, Dirac identified in this paper the deep quantum-mechanical reason for the principle of least action controlling the classical limit (see the quotation above).

Feynman's interpretation

Dirac's work did not provide a precise prescription to calculate the sum over paths, and he did not show that one could recover the Schrödinger equation or the canonical commutation relations from this rule. This was done by Feynman.[4] Feynman showed that Dirac's quantum action was, for most cases of interest, simply equal to the classical action, appropriately discretized. This means that the classical action is the phase acquired by quantum evolution between two fixed endpoints. He proposed to recover all of quantum mechanics from the following postulates:

1. The probability for an event is given by the squared modulus of a complex number called the "probability amplitude".
2. The probability amplitude is given by adding together the contributions of all paths in configuration space.
3. The contribution of a path is proportional to e^{i S/\hbar}, where S is the action given by the time integral of the Lagrangian along the path.

In order to find the overall probability amplitude for a given process, then, one adds up, or integrates, the amplitude of postulate 3 over the space of all possible paths of the system in between the initial and final states, including those that are absurd by classical standards. In calculating the amplitude for a single particle to go from one place to another in a given time, it is correct to include paths in which the particle describes elaborate curlicues, curves in which the particle shoots off into outer space and flies back again, and so forth. The path integral assigns to all these amplitudes equal weight but varying phase, or argument of the complex number.
Contributions from paths wildly different from the classical trajectory may be suppressed by interference (see below). Feynman showed that this formulation of quantum mechanics is equivalent to the canonical approach to quantum mechanics when the Hamiltonian is at most quadratic in the momentum. An amplitude computed according to Feynman's principles will also obey the Schrödinger equation for the Hamiltonian corresponding to the given action. The path integral formulation of quantum field theory represents the transition amplitude (corresponding to the classical correlation function) as a weighted sum of all possible histories of the system from the initial to the final state. A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude.

Concrete formulation

Feynman's postulates can be interpreted as follows:

Time-slicing definition

For a particle in a smooth potential, the path integral is approximated by zig-zag paths, which in one dimension reduces it to a product of ordinary integrals. For the motion of the particle from position x_a at time t_a to x_b at time t_b, the time sequence can be divided up into n + 1 little segments [t_{j-1}, t_j], where j = 1, ..., n + 1, of fixed duration \epsilon = \Delta t = \tfrac{t_b - t_a}{n+1}. This process is called time-slicing. An approximation for the path integral can be computed as proportional to \int\limits_{-\infty}^{+\infty} \ldots \int\limits_{-\infty}^{+\infty} \exp\left(\frac{i}{\hbar}\int\limits_{t_a}^{t_b} L(x(t),v(t),t)\,\mathrm{d}t\right) dx_0 \ldots dx_n where L(x,v,t) is the Lagrangian of the 1d system with position variable x(t) and velocity v = \dot x(t) considered (see below), and dx_j corresponds to the position at the jth time step, if the time integral is approximated by a sum of n terms.[note 1] In the limit n → ∞, this becomes a functional integral, which, apart from a nonessential factor, is directly the product of the probability amplitudes \langle x_a,t_a|x_b,t_b\rangle (more precisely, since one must work with a continuous spectrum, the respective densities) to find the quantum-mechanical particle at t_a in the initial state x_a and at t_b in the final state x_b. Actually L is the classical Lagrangian of the one-dimensional system considered, L(x,\dot x, t) = p \cdot \dot x - H(x,p,t), where H is the Hamiltonian and p = \frac{\partial L}{\partial \dot x}, and the above-mentioned "zigzagging" corresponds to the appearance of the terms \exp\left(\frac{i}{\hbar}\epsilon \sum_{j=1}^{n+1} L\left(\tilde x_j, \frac{x_j - x_{j-1}}{\epsilon}, j\right)\right) in the Riemann sum approximating the time integral, which are finally integrated over x_1 to x_n with the integration measure dx_1 ... dx_n; here \tilde x_j is an arbitrary value of the jth interval, e.g. its center, (x_j + x_{j-1})/2. Thus, in contrast to classical mechanics, not only does the stationary path contribute, but actually all virtual paths between the initial and the final point also contribute.

Figure: the contribution to the path integral of a free particle for a set of paths.

Feynman's time-sliced approximation does not, however, exist for the most important quantum-mechanical path integrals of atoms, due to the singularity of the Coulomb potential e²/r at the origin.
Only after replacing the time t by another path-dependent pseudo-time parameter s = \int \frac{dt}{r(t)} is the singularity removed and a time-sliced approximation obtained that is exactly integrable, since it can be made harmonic by a simple coordinate transformation, as discovered in 1979 by İsmail Hakkı Duru and Hagen Kleinert.[5][6] The combination of a path-dependent time transformation and a coordinate transformation is an important tool to solve many path integrals and is called generically the Duru–Kleinert transformation.

Free particle

The path integral representation gives the quantum amplitude to go from point x to point y as an integral over all paths. For a free-particle action (m = 1, ħ = 1): S = \int {\dot{x}^2 \over 2} dt the integral can be evaluated explicitly. To do this, it is convenient to start without the factor i in the exponential, so that large deviations are suppressed by small numbers, not by cancelling oscillatory contributions. K(x-y;T) = \int_{x(0)=x}^{x(T)=y} \exp\left\{-\int_0^T {\dot{x}^2\over 2} dt\right\} Dx Splitting the integral into time slices: K(x,y;T) = \int_{x(0)=x}^{x(T)=y} \prod_t \exp\left\{-{1\over 2} \left({x(t+\epsilon) - x(t) \over \epsilon}\right)^2 \epsilon \right\} Dx where the Dx is interpreted as a finite collection of integrations at each integer multiple of ε. Each factor in the product is a Gaussian as a function of x(t + ε) centered at x(t) with variance ε. The multiple integrals are a repeated convolution of this Gaussian G_\epsilon with copies of itself at adjacent times: K(x-y;T) = G_\epsilon * G_\epsilon * \ldots * G_\epsilon where the number of convolutions is T/ε. The result is easy to evaluate by taking the Fourier transform of both sides, so that the convolutions become multiplications: \tilde{K}(p;T) = \tilde{G}_\epsilon(p)^{T/\epsilon} The Fourier transform of the Gaussian G is another Gaussian of reciprocal variance: \tilde{G}_\epsilon(p) = e^{-\epsilon p^2/2} and the result is: \tilde{K}(p;T) = e^{-T p^2/2} The Fourier transform gives K, and it is a Gaussian again with reciprocal variance: K(x-y;T) \propto e^{-(x-y)^2/(2T)} The proportionality constant is not really determined by the time-slicing approach, only the ratio of values for different endpoint choices is determined. The proportionality constant should be chosen to ensure that between each two time slices the time evolution is quantum-mechanically unitary, but a more illuminating way to fix the normalization is to consider the path integral as a description of a stochastic process. The result has a probability interpretation. The sum over all paths of the exponential factor can be seen as the sum over each path of the probability of selecting that path. The probability is the product over each segment of the probability of selecting that segment, so that each segment is probabilistically independently chosen. The fact that the answer is a Gaussian spreading linearly in time is the central limit theorem, which can be interpreted as the first historical evaluation of a statistical path integral. The probability interpretation gives a natural normalization choice. The path integral should be defined so that: \int K(x-y;T) dy = 1 This condition normalizes the Gaussian and produces a kernel which obeys the diffusion equation: {d\over dt} K(x;T) = {\nabla^2 \over 2} K For oscillatory path integrals, ones with an i in the numerator, the time-slicing produces convolved Gaussians, just as before.
Now, however, the convolution product is marginally singular, since it requires careful limits to evaluate the oscillating integrals. To make the factors well defined, the easiest way is to add a small imaginary part to the time increment \epsilon. This is closely related to Wick rotation. Then the same convolution argument as before gives the propagation kernel: K(x-y;T) \propto e^{i(x-y)^2 / (2T)} which, with the same normalization as before (not the sum-squares normalization – this function has a divergent norm), obeys a free Schrödinger equation: {d\over dt} K(x;T) = {\rm i} {\nabla^2 \over 2} K This means that any superposition of K's will also obey the same equation, by linearity. Defining \psi_t(y) = \int \psi_0(x) K(x-y;t) dx = \int \psi_0(x) \int_{x(0)=x}^{x(t)=y} e^{iS} Dx\, dx then ψ_t obeys the free Schrödinger equation just as K does: {\rm i}{\partial \over \partial t} \psi_t = - {\nabla^2\over 2} \psi_t

The Schrödinger equation

The path integral reproduces the Schrödinger equation for the initial and final state even when a potential is present. This is easiest to see by taking a path integral over infinitesimally separated times: \psi(y;t+\epsilon) = \int_{-\infty}^\infty \psi(x;t) \int_{x(t)=x}^{x(t+\epsilon)=y} e^{iS} Dx\, dx Since the time separation is infinitesimal, only the kinetic term and the potential at the intermediate point contribute, and expanding the result to first order in ε reproduces the Schrödinger equation with the corresponding Hamiltonian.

Equations of motion

Since the value of the path integral is unchanged under an infinitesimal shift of the integration variable x(t) → x(t) + ε(t), the first-order variation of the integrand must vanish on average: \int \left(\int {\delta S \over \delta x(t)}\, \epsilon(t)\, dt\right) e^{iS} Du = 0 But this was just a shift of integration variables, which doesn't change the value of the integral for any choice of ε(t). The conclusion is that this first-order variation is zero for an arbitrary initial state and at any arbitrary point in time: \langle \psi_0| {\delta S \over \delta x}(t) |\psi_0 \rangle = 0 These are the Heisenberg equations of motion. If the action contains terms which multiply \dot x and x at the same moment in time, the manipulations above are only heuristic, because the multiplication rules for these quantities are just as noncommuting in the path integral as in the operator formalism.

Stationary phase approximation

If the variation in the action exceeds ħ by many orders of magnitude, we typically have destructive phase interference other than in the vicinity of those trajectories satisfying the Euler–Lagrange equation, which is now reinterpreted as the condition for constructive phase interference.

Canonical commutation relations

The formulation of the path integral does not make it clear at first sight that the quantities x and p do not commute. In the path integral, these are just integration variables and they have no obvious ordering. Feynman discovered that the non-commutativity is still present.[7] To see this, consider the simplest path integral, the Brownian walk. This is not yet quantum mechanics, so in the path integral the action is not multiplied by i: S = \int \left({dx \over dt}\right)^2 dt The quantity x(t) is fluctuating, and the derivative is defined as the limit of a discrete difference: {dx \over dt} = {x(t+\epsilon) - x(t) \over \epsilon} Note that the distance that a random walk moves is proportional to √t, so that: x(t+\epsilon) - x(t) \approx \sqrt{\epsilon} This shows that the random walk is not differentiable, since the ratio that defines the derivative diverges with probability one. The quantity x\dot x is ambiguous, with two possible meanings: [1] = x {dx\over dt} = x(t) {(x(t+\epsilon) - x(t)) \over \epsilon} [2] = x {dx \over dt} = x(t+\epsilon) {(x(t+\epsilon) - x(t)) \over \epsilon} In elementary calculus, the two are only different by an amount which goes to zero as ε goes to zero.
But in this case, the difference between the two is not zero: [2] - [1] = {(x(t+\epsilon) - x(t))^2 \over \epsilon} \approx {\epsilon \over \epsilon} Give a name to the value of this difference for any one random walk: {(x(t+\epsilon) - x(t))^2 \over \epsilon} = f(t) and note that f(t) is a rapidly fluctuating statistical quantity whose average value is 1, i.e. a normalized "Gaussian process". The fluctuations of such a quantity can be described by a statistical Lagrangian \mathcal L = (f(t)-1)^2, and the equations of motion for f derived from extremizing the action S corresponding to \mathcal L just set it equal to 1. In physics, such a quantity is "equal to 1 as an operator identity". In mathematics, it "weakly converges to 1". In either case, it is 1 in any expectation value, or when averaged over any interval, or for all practical purposes. Defining the time order to be the operator order: [x, \dot x] = x {dx\over dt} - {dx \over dt} x = 1 This is called the Itō lemma in stochastic calculus, and the (euclideanized) canonical commutation relations in physics. For a general statistical action, a similar argument shows that \left[x, {\partial S \over \partial \dot x}\right] = 1 and in quantum mechanics, the extra imaginary unit in the action converts this to the canonical commutation relation, [x,p] = {\rm i}

Particle in curved space

For a particle in curved space the kinetic term depends on the position, and the above time slicing cannot be applied; this is a manifestation of the notorious operator-ordering problem in Schrödinger quantum mechanics. One may, however, solve this problem by transforming the time-sliced flat-space path integral to curved space using a multivalued coordinate transformation (nonholonomic mapping explained here).

The path integral and the partition function

The path integral is just the generalization of the integral above to all quantum-mechanical problems: Z = \int e^{iS[x]/\hbar}\, Dx Upon Wick rotation t → −iτ, this becomes Z = \int e^{-S_E[x]/\hbar}\, Dx with Euclidean action S_E, which is precisely the partition function of statistical mechanics for the same system, at the temperature set by the total Euclidean time. One aspect of this equivalence was also known to Schrödinger, who remarked that the equation named after him looked like the diffusion equation after Wick rotation.

Measure-theoretic factors

Sometimes (e.g. a particle moving in curved space) we also have measure-theoretic factors in the functional integral: \int \mu[x] e^{iS[x]} \mathcal{D}x This factor is needed to restore unitarity. For instance, if S=\int \left[\frac{m}{2}g_{ij}\dot{x}^i\dot{x}^j - V(x)\right] dt, then it means that each spatial slice is multiplied by the measure √g. This measure can't be expressed as a functional multiplying the \mathcal{D}x measure because they belong to entirely different classes.

Quantum field theory

The path integral formulation was very important for the development of quantum field theory. Both the Schrödinger and Heisenberg approaches to quantum mechanics single out time and are not in the spirit of relativity. For example, the Heisenberg approach requires that scalar field operators obey the commutation relation [\phi(x),\partial_t \phi(y)] = {\rm i} \delta^3(x-y) for x and y two simultaneous spatial positions, and this is not a relativistically invariant concept. The results of a calculation are covariant, but the symmetry is not apparent in intermediate stages. If naive field theory calculations did not produce infinite answers in the continuum limit, this would not have been such a big problem – it would just have been a bad choice of coordinates.
But the lack of symmetry means that the infinite quantities must be cut off, and the bad coordinates make it nearly impossible to cut off the theory without spoiling the symmetry. This makes it difficult to extract the physical predictions, which require a careful limiting procedure. The problem of lost symmetry also appears in classical mechanics, where the Hamiltonian formulation also superficially singles out time. The Lagrangian formulation makes the relativistic invariance apparent. In the same way, the path integral is manifestly relativistic. It reproduces the Schrödinger equation, the Heisenberg equations of motion, and the canonical commutation relations and shows that they are compatible with relativity. It extends the Heisenberg type operator algebra to operator product rules which are new relations difficult to see in the old formalism. Further, different choices of canonical variables lead to very different seeming formulations of the same theory. The transformations between the variables can be very complicated, but the path integral makes them into reasonably straightforward changes of integration variables. For these reasons, the Feynman path integral has made earlier formalisms largely obsolete. The price of a path integral representation is that the unitarity of a theory is no longer self-evident, but it can be proven by changing variables to some canonical representation. The path integral itself also deals with larger mathematical spaces than is usual, which requires more careful mathematics not all of which has been fully worked out. The path integral historically was not immediately accepted, partly because it took many years to incorporate fermions properly. This required physicists to invent an entirely new mathematical object – the Grassmann variable – which also allowed changes of variables to be done naturally, as well as allowing constrained quantization. The integration variables in the path integral are subtly non-commuting. The value of the product of two field operators at what looks like the same point depends on how the two points are ordered in space and time. This makes some naive identities fail. The propagator In relativistic theories, there is both a particle and field representation for every theory. The field representation is a sum over all field configurations, and the particle representation is a sum over different particle paths. The nonrelativistic formulation is traditionally given in terms of particle paths, not fields. There, the path integral in the usual variables, with fixed boundary conditions, gives the probability amplitude for a particle to go from point x to point y in time T. K(x,y;T) = \langle y;T|x;0 \rangle = \int_{x(0)=x}^{x(T)=y} e^{i S[x]} Dx \, This is called the propagator. Superposing different values of the initial position x with an arbitrary initial state \psi_0(x) constructs the final state. \psi_T(y) = \int_{x} \psi_0(x) K(x,y;T) dx = \int^{x(T)=y} \psi_0(x(0)) e^{i S[x]} Dx \, For a spatially homogeneous system, where K(x, y) is only a function of (x − y), the integral is a convolution, the final state is the initial state convolved with the propagator. 
\psi_T = \psi_0 * K(\cdot;T) For a free particle of mass m, the propagator can be evaluated either explicitly from the path integral or by noting that the Schrödinger equation is a diffusion equation in imaginary time and the solution must be a normalized Gaussian: K(x,y;T) \propto e^{i m(x-y)^2 \over 2T} Taking the Fourier transform in (x − y) produces another Gaussian: K(p;T) = e^{i T p^2 \over 2m} and in p-space the proportionality factor here is constant in time, as will be verified in a moment. The Fourier transform in time, extending K(p; T) to be zero for negative times, gives the Green's function, or the frequency-space propagator: G_F(p,E) = {-i \over E - {\vec{p}^2\over 2m} + i\epsilon} which is the reciprocal of the operator which annihilates the wavefunction in the Schrödinger equation, which wouldn't have come out right if the proportionality factor weren't constant in the p-space representation. The infinitesimal term in the denominator is a small positive number which guarantees that the inverse Fourier transform in E will be nonzero only for future times. For past times, the inverse Fourier transform contour closes toward values of E where there is no singularity. This guarantees that K propagates the particle into the future and is the reason for the subscript F on G. The infinitesimal term can be interpreted as an infinitesimal rotation toward imaginary time. It is also possible to reexpress the nonrelativistic time evolution in terms of propagators which go toward the past, since the Schrödinger equation is time-reversible. The past propagator is the same as the future propagator except for the obvious difference that it vanishes in the future, and in the Gaussian t is replaced by (−t). In this case, the interpretation is that these are the quantities to convolve the final wavefunction with so as to get the initial wavefunction: G_B(p,E) = {-i \over -E - {\vec{p}^2\over 2m} + i\epsilon} Given the nearly identical form, the only change is the sign of E and ε. The parameter E in the Green's function can either be the energy if the paths are going toward the future, or the negative of the energy if the paths are going toward the past. For a nonrelativistic theory, the time as measured along the path of a moving particle and the time as measured by an outside observer are the same. In relativity, this is no longer true. For a relativistic theory the propagator should be defined as the sum over all paths which travel between two points in a fixed proper time, as measured along the path. These paths describe the trajectory of a particle in space and in time: K(x-y;\Tau) = \int_{x(0)=x}^{x(\Tau)=y} e^{i \int_0^\Tau \sqrt{\dot x_\mu \dot x^\mu}\, d\tau}\, Dx Summing over all proper times \Tau then gives, in momentum space, a propagator with two pole terms: K(p) = {i \over p_0 - \sqrt{\vec{p}^2 + m^2}} + {i \over p_0 + \sqrt{\vec{p}^2 + m^2}} For states where one nonrelativistic particle is present, the initial wavefunction has a frequency distribution concentrated near p_0 = m. When convolving with the propagator, which in p space just means multiplying by the propagator, the second term is suppressed and the first term is enhanced. For frequencies near p_0 = m, the dominant first term has the form: 2m K_\mathrm{NR}(p) = {i \over (p_0-m) - {\vec{p}^2\over 2m}} This is the expression for the nonrelativistic Green's function of a free Schrödinger particle. The second term has a nonrelativistic limit also, but this limit is concentrated on frequencies which are negative. The second pole is dominated by contributions from paths where the proper time and the coordinate time are ticking in an opposite sense, which means that the second term is to be interpreted as the antiparticle.
The nonrelativistic analysis shows that with this form the antiparticle still has positive energy. The proper way to express this mathematically is that, adding a small suppression factor in proper time, the limit where t → −∞ of the first term must vanish, while the t → +∞ limit of the second term must vanish. In the Fourier transform, this means shifting the pole in p_0 slightly, so that the inverse Fourier transform will pick up a small decay factor in one of the time directions: K(p) = {i \over p_0 - \sqrt{\vec{p}^2 + m^2} + i\epsilon} + {i \over p_0 + \sqrt{\vec{p}^2 + m^2} - i\epsilon} Without these terms, the pole contribution could not be unambiguously evaluated when taking the inverse Fourier transform of p_0. The terms can be recombined: K(p) = {i \over p^2 - m^2 + i\epsilon} which, when factored, produces opposite-sign infinitesimal terms in each factor. This is the mathematically precise form of the relativistic particle propagator, free of any ambiguities. The ε term introduces a small imaginary part to m^2, which in the Minkowski version is a small exponential suppression of long paths. So in the relativistic case, the Feynman path-integral representation of the propagator includes paths which go backwards in time, which describe antiparticles. The paths which contribute to the relativistic propagator go forward and backwards in time, and the interpretation of this is that the amplitude for a free particle to travel between two points includes amplitudes for the particle to fluctuate into an antiparticle, travel back in time, then forward again. Unlike the nonrelativistic case, it is impossible to produce a relativistic theory of local particle propagation without including antiparticles. All local differential operators have inverses which are nonzero outside the lightcone, meaning that it is impossible to keep a particle from travelling faster than light. Such a particle cannot have a Green's function which is only nonzero in the future in a relativistically invariant theory.

Functionals of fields

However, the path integral formulation is also extremely important in direct application to quantum field theory, in which the "paths" or histories being considered are not the motions of a single particle, but the possible time evolutions of a field over all space. The action is referred to technically as a functional of the field: S[ϕ], where the field ϕ(x^μ) is itself a function of space and time, and the square brackets are a reminder that the action depends on all the field's values everywhere, not just some particular value. In principle, one integrates Feynman's amplitude over the class of all possible combinations of values that the field could have anywhere in space-time. Much of the formal study of QFT is devoted to the properties of the resulting functional integral, and much effort (not yet entirely successful) has been made toward making these functional integrals mathematically precise. Such a functional integral is extremely similar to the partition function in statistical mechanics. Indeed, it is sometimes called a partition function, and the two are essentially mathematically identical except for the factor of i in the exponent in Feynman's postulate 3. Analytically continuing the integral to an imaginary time variable (called a Wick rotation) makes the functional integral even more like a statistical partition function, and also tames some of the mathematical difficulties of working with these integrals.
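The Wick-rotated integral can in fact be sampled exactly like a statistical partition function. The following sketch (an illustration under stated conventions, not part of the original article) runs a simple Metropolis simulation of the time-sliced Euclidean action for a one-dimensional harmonic oscillator, in units m = ω = ħ = 1, and estimates ⟨x²⟩, which should approach the ground-state value 1/2 as the slice width shrinks:

import math
import random

def metropolis_x2(n_slices=64, dt=0.25, sweeps=20000, step=0.5, seed=1):
    """Sample paths with weight exp(-S_E) for the harmonic oscillator
    (m = omega = hbar = 1) and estimate <x^2> over the second half of the run.

    S_E = sum_j [ (x_{j+1} - x_j)^2 / (2 dt) + dt * x_j^2 / 2 ]
    with periodic boundary conditions in Euclidean time.
    """
    rng = random.Random(seed)
    x = [0.0] * n_slices
    x2_sum, samples = 0.0, 0
    for sweep in range(sweeps):
        for j in range(n_slices):
            left = x[(j - 1) % n_slices]
            right = x[(j + 1) % n_slices]
            old, new = x[j], x[j] + rng.uniform(-step, step)
            # change in the part of the Euclidean action touching site j
            def s_local(v):
                return ((right - v) ** 2 + (v - left) ** 2) / (2.0 * dt) \
                       + 0.5 * dt * v * v
            if rng.random() < math.exp(s_local(old) - s_local(new)):
                x[j] = new
        if sweep >= sweeps // 2:  # discard the first half as thermalization
            x2_sum += sum(v * v for v in x) / n_slices
            samples += 1
    return x2_sum / samples

print("<x^2> ~", metropolis_x2())  # exact continuum value: 0.5

At finite dt the estimate carries a small discretization bias, which is exactly the kind of limiting procedure the text alludes to.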
Expectation values

In quantum field theory, if the action is given by the functional \mathcal{S} of field configurations (which only depends locally on the fields), then the time-ordered vacuum expectation value of a polynomially bounded functional F, \langle F \rangle, is given by \left\langle F\right\rangle=\frac{\int \mathcal{D}\phi\, F[\phi]e^{i\mathcal{S}[\phi]}}{\int\mathcal{D}\phi\, e^{i\mathcal{S}[\phi]}} The symbol \int \mathcal{D}\phi here is a concise way to represent the infinite-dimensional integral over all possible field configurations on all of space-time. As stated above, the unadorned path integral in the denominator ensures proper normalization.

As a probability

Strictly speaking, the only question that can be asked in physics is: "What fraction of states satisfying condition A also satisfy condition B?" The answer to this is a number between 0 and 1, which can be interpreted as a probability, written as P(B|A). In terms of path integration, since P(B|A) = \frac{P(A \cap B)}{P(A)}, this means: P(B|A) = \frac{\sum_{F\subset A \cap B}\left| \int \mathcal{D}\phi\, O_{in}[\phi]e^{i\mathcal{S}[\phi]} F[\phi]\right|^2}{\sum_{F\subset A} \left|\int\mathcal{D}\phi\, O_{in}[\phi] e^{i\mathcal{S}[\phi]} F[\phi]\right|^2} where the functional O_{in}[ϕ] is the superposition of all incoming states that could lead to the states we are interested in. In particular, this could be a state corresponding to the state of the Universe just after the big bang, although for actual calculation this can be simplified using heuristic methods. Since this expression is a quotient of path integrals, it is naturally normalized.

Schwinger–Dyson equations

Since this formulation of quantum mechanics is analogous to classical action principles, one might expect that identities concerning the action in classical mechanics would have quantum counterparts derivable from a functional integral. This is often the case. In the language of functional analysis, we can write the Euler–Lagrange equations as \frac{\delta \mathcal{S}[\phi]}{\delta \phi}=0 (the left-hand side is a functional derivative; the equation means that the action is stationary under small changes in the field configuration). The quantum analogues of these equations are called the Schwinger–Dyson equations. If the functional measure \mathcal{D}\phi turns out to be translationally invariant (we'll assume this for the rest of this article, although it does not hold for, say, nonlinear sigma models) and if we assume that e^{i\mathcal{S}[\phi]}, which after a Wick rotation becomes e^{-H[\phi]} for some H, goes to zero faster than the reciprocal of any polynomial for large values of φ, then we can integrate by parts (after a Wick rotation, followed by a Wick rotation back) to get the following Schwinger–Dyson equations for the expectation: \left\langle \frac{\delta F[\phi]}{\delta \phi} \right\rangle = -i \left\langle F[\phi]\frac{\delta \mathcal{S}[\phi]}{\delta\phi} \right\rangle for any polynomially bounded functional F, or \left\langle F_{,i} \right\rangle = -i \left\langle F \mathcal{S}_{,i} \right\rangle in the deWitt notation. These equations are the analog of the on-shell EL equations. If J (called the source field) is an element of the dual space of the field configurations (which has at least an affine structure because of the assumption of the translational invariance for the functional measure), then the generating functional Z of the source fields is defined to be: Z[J]=\int \mathcal{D}\phi\, e^{i(\mathcal{S}[\phi] + \left\langle J,\phi \right\rangle)}
Note that \frac{\delta^n Z}{\delta J(x_1) \cdots \delta J(x_n)}[J] = i^n \, Z[J] \, {\left\langle \phi(x_1)\cdots \phi(x_n)\right\rangle}_J or, in the deWitt notation, Z^{,i_1\dots i_n}[J]=i^n Z[J] {\left \langle \phi^{i_1}\cdots \phi^{i_n}\right\rangle}_J where {\left\langle F \right\rangle}_J=\frac{\int \mathcal{D}\phi\, F[\phi]e^{i(\mathcal{S}[\phi] + \left\langle J,\phi \right\rangle)}}{\int\mathcal{D}\phi\, e^{i(\mathcal{S}[\phi] + \left\langle J,\phi \right\rangle)}}. Basically, if \mathcal{D}\phi\, e^{i\mathcal{S}[\phi]} is viewed as a functional distribution (this shouldn't be taken too literally as an interpretation of QFT, unlike its Wick-rotated statistical mechanics analogue, because we have time-ordering complications here!), then \left\langle\phi(x_1)\cdots \phi(x_n)\right\rangle are its moments and Z is its Fourier transform. If F is a functional of φ, then for an operator K, F[K] is defined to be the operator which substitutes K for φ. For example, if F[\phi]=\frac{\partial^{k_1}}{\partial x_1^{k_1}}\phi(x_1)\cdots \frac{\partial^{k_n}}{\partial x_n^{k_n}}\phi(x_n) and G is a functional of J, then F\left[-i\frac{\delta}{\delta J}\right] G[J] = (-i)^n \frac{\partial^{k_1}}{\partial x_1^{k_1}}\frac{\delta}{\delta J(x_1)} \cdots \frac{\partial^{k_n}}{\partial x_n^{k_n}}\frac{\delta}{\delta J(x_n)} G[J]. Then, from the properties of the functional integrals {\left \langle \frac{\delta \mathcal{S}}{\delta \phi(x)}\left[\phi \right]+J(x)\right\rangle}_J=0 we get the "master" Schwinger–Dyson equation: \frac{\delta \mathcal{S}}{\delta \phi(x)}\left[-i \frac{\delta}{\delta J}\right]Z[J]+J(x)Z[J]=0 or, in the deWitt notation, \mathcal{S}_{,i}[-i\partial]Z+J_i Z=0. If the functional measure is not translationally invariant, it might be possible to express it as the product M\left[\phi\right]\,\mathcal{D}\phi where M is a functional and \mathcal{D}\phi is a translationally invariant measure. This is true, for example, for nonlinear sigma models where the target space is diffeomorphic to R^n. However, if the target manifold is some topologically nontrivial space, the concept of a translation does not even make any sense. In that case, we would have to replace the \mathcal{S} in this equation by another functional \hat{\mathcal{S}}=\mathcal{S}-i\ln(M) If we expand this equation as a Taylor series about J = 0, we get the entire set of Schwinger–Dyson equations. The path integrals are usually thought of as being the sum of all paths through an infinite space-time. However, in local quantum field theory we would restrict everything to lie within a finite causally complete region, for example inside a double light-cone. This gives a more mathematically precise and physically rigorous definition of quantum field theory.

Ward–Takahashi identities

See main article Ward–Takahashi identity. Now how about the on-shell Noether's theorem for the classical case? Does it have a quantum analog as well? Yes, but with a caveat. The functional measure would have to be invariant under the one-parameter group of symmetry transformations as well. Let's just assume for simplicity here that the symmetry in question is local (not local in the sense of a gauge symmetry, but in the sense that the transformed value of the field at any given point under an infinitesimal transformation would only depend on the field configuration over an arbitrarily small neighborhood of the point in question).
Let's also assume that the action is local in the sense that it is the integral over spacetime of a Lagrangian, and that Q[\mathcal{L}(x)]=\partial_\mu f^\mu (x) for some function f, where f only depends locally on φ (and possibly the spacetime position). If we don't assume any special boundary conditions, this would not be a "true" symmetry in the true sense of the term in general unless f = 0 or something. Here, Q is a derivation which generates the one-parameter group in question. We could have antiderivations as well, such as BRST and supersymmetry. Let's also assume \int \mathcal{D}\phi\, Q[F][\phi]=0 for any polynomially bounded functional F. This property is called the invariance of the measure, and it does not hold in general. See anomaly (physics) for more details. Then \int \mathcal{D}\phi\, Q\left[F e^{iS}\right][\phi]=0, which implies \left\langle Q[F]\right\rangle +i\left\langle F\int_{\partial V} f^\mu ds_\mu\right\rangle=0 where the integral is over the boundary. This is the quantum analog of Noether's theorem. Now, let's assume even further that Q is a local integral Q=\int d^dx\, q(x) where q(x)[\phi(y)] = \delta^{(d)}(x-y)Q[\phi(y)] so that q(x)[S]=\partial_\mu j^\mu (x) with j^{\mu}(x)=f^\mu(x)-\frac{\partial}{\partial (\partial_\mu \phi)}\mathcal{L}(x) Q[\phi] (this is assuming the Lagrangian only depends on φ and its first partial derivatives! More general Lagrangians would require a modification to this definition!). Note that we are NOT insisting that q(x) is the generator of a symmetry (i.e. we are not insisting upon the gauge principle), but just that Q is. We also make the even stronger assumption that the functional measure is locally invariant: \int \mathcal{D}\phi\, q(x)[F][\phi]=0. Then we would have \left\langle q(x)[F] \right\rangle +i\left\langle F q(x)[S]\right\rangle=\left\langle q(x)[F]\right\rangle +i\left\langle F\partial_\mu j^\mu(x)\right\rangle=0 and q(x)[S]\left[-i \frac{\delta}{\delta J}\right]Z[J]+J(x)Q[\phi(x)]\left[-i \frac{\delta}{\delta J}\right]Z[J]=\partial_\mu j^\mu(x)\left[-i \frac{\delta}{\delta J}\right]Z[J]+J(x)Q[\phi(x)]\left[-i \frac{\delta}{\delta J}\right]Z[J]=0. The above two equations are the Ward–Takahashi identities. Now for the case where f = 0, we can forget about all the boundary conditions and locality assumptions. We'd simply have \left\langle Q[F]\right\rangle =0 and \int d^dx\, J(x)Q[\phi(x)]\left[-i \frac{\delta}{\delta J}\right]Z[J]=0.

The need for regulators and renormalization

Path integrals as they are defined here require the introduction of regulators. Changing the scale of the regulator leads to the renormalization group. In fact, renormalization is the major obstruction to making path integrals well-defined.

The path integral in quantum-mechanical interpretation

In one philosophical interpretation of quantum mechanics, the "sum over histories" interpretation, the path integral is taken to be fundamental and reality is viewed as a single indistinguishable "class" of paths which all share the same events. For this interpretation, it is crucial to understand what exactly an event is. The sum-over-histories method gives identical results to canonical quantum mechanics, and Sinha and Sorkin[9] claim the interpretation explains the Einstein–Podolsky–Rosen paradox without resorting to nonlocality. (Note that the Copenhagen/pragmatist interpretation claims there is no paradox, only a sloppily posed, materialism-motivated question on the part of EPR, a point attributed to Joseph Weinberg in a lecture.
The need for regulators and renormalization

Path integrals as they are defined here require the introduction of regulators. Changing the scale of the regulator leads to the renormalization group. In fact, renormalization is the major obstruction to making path integrals well-defined.

The path integral in quantum-mechanical interpretation

In one philosophical interpretation of quantum mechanics, the "sum over histories" interpretation, the path integral is taken to be fundamental, and reality is viewed as a single indistinguishable "class" of paths which all share the same events. For this interpretation, it is crucial to understand what exactly an event is. The sum-over-histories method gives identical results to canonical quantum mechanics, and Sinha and Sorkin [9] claim the interpretation explains the Einstein–Podolsky–Rosen paradox without resorting to nonlocality. (Note that the Copenhagen/pragmatist interpretation claims there is no paradox—only a sloppy, materialism-motivated question on the part of EPR—Joseph Weinberg, in a lecture. On the other hand, the fact that the EPR thought experiment (and its result) does represent the results of a QM experiment says that (despite the path dependence of parallelness/anti-parallelness in curved space) all contributions of paths close to black holes cancel in the action for an EPR-style experiment here on Earth.)

Some advocates of interpretations of quantum mechanics emphasizing decoherence have attempted to make more rigorous the notion of extracting a classical-like "coarse-grained" history from the space of all possible histories.

Quantum gravity

Whereas in quantum mechanics the path integral formulation is fully equivalent to other formulations, it may be that it can be extended to quantum gravity, which would make it different from the Hilbert space model. Feynman had some success in this direction, and his work has been extended by Hawking and others. [10] Approaches that use this method include causal dynamical triangulations and spinfoam models.

Quantum tunneling

Quantum tunneling can be modeled by using the path integral formulation to determine the action of the trajectory through a potential barrier. Using the WKB approximation, the tunneling rate ($\Gamma$) can be determined to be of the form

$$\Gamma = A_0 \exp(-S_{\mathrm{eff}}/\hbar),$$

with the effective action $S_{\mathrm{eff}}$ and pre-exponential factor $A_0$. This form is especially useful in a dissipative system, in which the system and its surroundings must be modeled together. Using the Langevin equation to model Brownian motion, the path integral formulation can be used to determine an effective action and pre-exponential model to see the effect of dissipation on tunneling. [11] From this model, tunneling rates of macroscopic systems (at finite temperatures) can be predicted.
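The WKB rate above is easy to evaluate numerically. The sketch below is an editor's illustration, not part of the original article: the inverted-parabola barrier, the electron mass, and the 1 eV / 1 nm scales are all arbitrary assumptions, and the pre-exponential factor A_0 is simply set to 1.

import numpy as np
from scipy.integrate import quad

# Constants (SI)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 9.1093837015e-31     # electron mass, kg (assumed)
eV = 1.602176634e-19     # J per eV

# Hypothetical barrier (an assumption for illustration):
# inverted parabola V(x) = V0 * (1 - (x/a)^2) for |x| <= a
V0 = 1.0 * eV            # barrier height (assumed)
a = 1.0e-9               # barrier half-width, 1 nm (assumed)
E = 0.5 * eV             # particle energy below the barrier top

def V(x):
    return V0 * (1.0 - (x / a) ** 2)

# Classical turning points, where V(x) = E
xt = a * np.sqrt(1.0 - E / V0)

# WKB effective action over the classically forbidden region:
# S_eff = 2 * Integral_{-xt}^{xt} sqrt(2 m (V(x) - E)) dx
integrand = lambda x: np.sqrt(max(2.0 * m * (V(x) - E), 0.0))
integral, _ = quad(integrand, -xt, xt)
S_eff = 2.0 * integral

# Tunneling rate up to the pre-exponential factor A0 (set to 1 here)
Gamma = np.exp(-S_eff / hbar)
print(f"S_eff/hbar = {S_eff / hbar:.2f}, Gamma/A0 = {Gamma:.3e}")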
References

1. ^
2. ^ ; also see
3. ^
4. ^ Both noted that, in the limit of action that is large compared to the reduced Planck constant ħ (using natural units, ħ = 1), the path integral is dominated by solutions which are in the neighbourhood of stationary points of the action.
5. ^
6. ^ For details see Chapter 13 in Kleinert's book cited above.
7. ^
8. ^
9. ^
10. ^ "Most of the Good Stuff", Memories of Richard Feynman, edited by Laurie M. Brown and John S. Rigden, American Institute of Physics; the chapter by Murray Gell-Mann.
11. ^
1. ^ For a simplified, step-by-step derivation of the above relation, see Path Integrals in Quantum Theories: A Pedagogic 1st Step.

Suggested reading

• The historical reference, written by the inventor of the path integral formulation himself and one of his students.
• A highly readable introduction to the subject.
• A modern reference on the subject.
• Discusses the definition of path integrals for systems whose kinematical variables are the generators of a real separable, connected Lie group with irreducible, square integrable representations.
• Highly readable textbook; introduction to relativistic QFT for particle physics.
• A mathematically rigorous introduction to functional integration.
• This course, designed for mathematicians, is a rigorous introduction to perturbative quantum field theory, using the language of functional integrals.
• A great introduction to path integrals (Chapter 1) and QFT in general.

External links

• Path integral on Scholarpedia
• Path Integrals in Quantum Theories: A Pedagogic 1st Step
Examination of a Particle in an Infinite and Finite Potential Well

Solutions of time-independent Schrödinger equations provide many interesting predictions concerning quantum mechanical phenomena. The following discussion is meant to provide insight into solving such equations for potential energy wells of the infinite and finite form. The steps taken for finding eigenfunctions, eigenvalues, and wave functions will be examined mathematically. Their predictions will allow for interpretation of the physical significance of these quantities.

1-D Potentials

In this section, we will focus on two types of potential wells: the infinite square well potential and the finite square well potential. The infinite potential well will be examined first in detail, and will be used as a basis for solving the finite square well. In each case, the importance of boundary conditions will be examined. Each problem is restricted to one dimension. By imposing this restriction, the math is simplified, while still allowing demonstration of interesting quantum aspects. A brief overview of other types of 1-D potentials is also presented.

1. The Infinite Square Well Potential: Particle-in-a-box

The particle-in-a-box problem is the simplest example of a confined particle. Examination of this problem enables us to understand the origin of many features of such systems, such as the appearance of discrete energy levels and the important concept of boundary conditions [3]. By using the time-independent Schrödinger equation (TISE), we are able to find the probability distribution describing the whereabouts of the particle in question, and are also able to obtain information about the permitted energies which the system can have [5].

An overview of the analytical approach used to solve this problem is as follows [5]:
1. Write the time-independent Schrödinger equation for the desired system, including in it the correct potential functions.
2. Solve the resulting TISE for the appropriate forms of the wave function, $\psi (x)$.
3. Normalize the wave function to ensure the probability interpretation is valid.

Figure 1.1 Square well with infinite potential at walls

Consider a quantum mechanical particle, described by the wavefunction $\psi (x)$, in one dimension. In this example, the particle is confined to a square well with impenetrable walls, $0 < x < L$, as in Figure 1.1. This is represented by a potential which is zero inside the box and infinite outside. The potential, $V(x)$, is given as

$$V(x) = \begin{cases} 0, & 0 < x < L \\ \infty, & \text{otherwise.} \end{cases}$$

Here, we have boundary conditions requiring that $\psi (x)$ vanish at $x=0$ and $x=L$, so that stable standing waves can form [5]. The particle is free to move within the well, but has no possibility of moving into the region outside of the well. Therefore, we need only consider the solution of the Schrödinger equation within the space interval where the potential is zero. The TISE can be applied to the particle-in-a-box problem to find expectation values for particle positions, velocities, and energy levels [11]. With the Hamiltonian

$$\hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x)$$

(for an explanation of how this equation was derived, see http://electrons.wikidot.com/schrodinger-equation), the Schrödinger equation within the space interval of concern, where the potential is zero, becomes

$$-\frac{\hbar^2}{2m}\frac{d^2\psi_n(x)}{dx^2} = E_n\,\psi_n(x),$$

where several solutions are anticipated, indexed by $n$ [11].
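Before walking through the analytic solution, note that the TISE just written down can also be solved numerically. The following sketch is an editor's illustration (not part of the original text), using natural units ħ = m = 1 and L = 1, with a finite-difference discretization whose lowest eigenvalues should converge to the analytic energies derived below.

import numpy as np

# Particle in a box, natural units hbar = m = 1, box length L = 1.
# psi = 0 at the walls is enforced by keeping only interior grid points.
N = 500
L = 1.0
x = np.linspace(0.0, L, N + 2)[1:-1]
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2, discretized with the standard 3-point stencil
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E_num = np.linalg.eigvalsh(H)[:4]
E_exact = np.array([(n * np.pi / L)**2 / 2.0 for n in range(1, 5)])
print("numerical:", np.round(E_num, 4))
print("analytic :", np.round(E_exact, 4))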
Returning to the analytic route: the general solutions of this second-order differential equation are [6]

$$\psi(x) = A\sin(kx) + B\cos(kx), \qquad k = \frac{\sqrt{2mE}}{\hbar},$$

where $A$ and $B$ are integration constants that are defined by the boundary conditions

$$\psi(0) = 0, \qquad \psi(L) = 0.$$

These boundary conditions imply that

$$B = 0 \qquad \text{and} \qquad \sin(kL) = 0.$$

If we require that $A$ is not zero (if it were, we would have the trivial case of no particle in the well), the values of $k$ must be of the form

$$k_n = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots,$$

and the wavefunction becomes

$$\psi_n(x) = A\sin\left(\frac{n\pi x}{L}\right).$$

In order to complete the solution, the wavefunction must be normalized and the value of the constant $A$ determined. The actual probability of finding the particle is given by the product of the wavefunction, $\psi (x)$, with its complex conjugate, $\psi^* (x)$. Since the probability of finding the particle in the box must be equal to one, the sum of the probabilities over all of the space must be equal to one (normalization) [9]. The condition for normalization is then

$$\int_0^L \psi^*(x)\,\psi(x)\,dx = \int_0^L A^2\sin^2\left(\frac{n\pi x}{L}\right)dx = 1.$$

Evaluation of this integral yields the value of $A$,

$$A = \sqrt{\frac{2}{L}},$$

and the complete wavefunction becomes

$$\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\left(\frac{n\pi x}{L}\right).$$

Plugging $\psi_n(x)$ back into the TISE gives the quantized set of energies as

$$E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}.$$

These allowed energy values, which depend on the quantum number, $n$, are called eigenvalues. This dependence shows that the higher the quantum number, the higher the energy will be, and the higher the number of nodes will be (points where the wavefunction crosses through zero) [8]. They help to describe the particular probability amplitudes, or wavefunctions, $\psi_n (x)$, which are allowed [5]. A graphical representation of these concepts is seen in Figure 1.2.

Figure 1.2 The wavefunctions and probability density functions for several states.

Within this simple particle-in-a-box problem, some important principles are revealed:
• The energy comes out quantized, which is a natural outcome of the boundary conditions.
• The highest probability of finding the particle in the box is where the antinodes of the sine function appear [8].
• The concept of zero-point energy: the energy of the system would approach $E_1$, not zero, if it were cooled to absolute zero. This implies that the particle would still be able to move throughout the box (a contradiction classically) [8].
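For a sense of scale, the quantized energies above can be evaluated directly. The sketch below is an editor's illustration; an electron in a 1 nm box is an arbitrary choice, not an example from the original text.

import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 9.1093837015e-31     # electron mass, kg (assumed)
eV = 1.602176634e-19     # J per eV
L = 1.0e-9               # box width: 1 nm (assumed for illustration)

# E_n = n^2 pi^2 hbar^2 / (2 m L^2), from the result above
for n in range(1, 4):
    E_n = (n * np.pi * hbar)**2 / (2.0 * m * L**2)
    print(f"E_{n} = {E_n / eV:.3f} eV")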
2. The Finite Square Well Potential

The finite potential well is an extension of the infinite potential well from the previous section. The main difference between these two systems is that now the particle has a non-zero probability of finding itself outside the well, although its kinetic energy is less than that required, according to classical mechanics, for scaling the potential barrier [4]. This type of problem is more realistic, but more difficult to solve due to the transcendental equations it yields. The particle is again confined to a box, but one which has finite, not infinite, potential walls. We consider a potential well of depth $V_0$.

Figure 2.1 Square well with finite potential. Here, a=L.

We have the following potential, $V(x)$, given by the boundary conditions shown in Figure 2.1:

$$V(x) = \begin{cases} -V_0, & |x| < L \\ 0, & |x| > L. \end{cases}$$

In this example, the origin of the x-axis was chosen at the center of the well. By doing so, the potential is symmetric about $x=0$, giving rise to parity (note: this could also be applied to a symmetric infinite well). For the finite well, two cases must be distinguished, corresponding to positive or negative values of the energy $E$ [1]. It is possible for the particle to be bound, or unbound. The case $E<0$ corresponds to a particle which is confined (and whose energy is less than the well depth) and hence is in a bound state [1]. When $E>0$, the particle is unconfined and corresponds to a scattering problem. The latter case will only briefly be discussed.

Figure 2.2 [7] provides a prelude to what the wavefunctions and probability distributions for several states will look like in a finite well. Comparing Figure 2.2 with Figure 1.2 for the infinite case, we see that in the finite case, the wavefunctions do not have to be zero at the walls of the well. In the finite case, the wavelengths are slightly longer, implying that the allowed energies will be somewhat smaller.

Figure 2.2 In this figure, the points O and L represent the walls of the well.

2.1 Case 1. Bound State

In the bound state we have $-V_0 \le E < 0$, since $E$ cannot be lower than the absolute minimum of the potential [1]. The Schrödinger equation for the two regions is given by [6]

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} - V_0\,\psi = E\,\psi \quad (|x| < L), \qquad -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\,\psi \quad (|x| > L).$$

Here, the binding energy, $|E|$, of the particle is introduced ($|E| = -E$). To simplify the TISE equations, let

$$k = \frac{\sqrt{2m(V_0 - |E|)}}{\hbar}, \qquad \kappa = \frac{\sqrt{2m|E|}}{\hbar}.$$

The solutions of the TISE separate into even and odd parity states, and we need only consider positive values of $x$ (which could be inferred from the potential). With the application of the boundary conditions, the even solutions are given as [6]

$$\psi^E(x) = \begin{cases} A\cos(kx), & 0 \le x \le L \\ C e^{-\kappa x}, & x > L, \end{cases}$$

and the odd solutions are given as [1]

$$\psi^O(x) = \begin{cases} B\sin(kx), & 0 \le x \le L \\ C e^{-\kappa x}, & x > L. \end{cases}$$

Despite the discontinuous nature of the potential at $x=L$, the wavefunction and its derivative are still continuous, and these conditions provide the required boundary conditions to determine the quantized energies [6]. The requirements that $\psi$ and $\psi'$ be continuous at $x=L$ yield, for the even states, the two equations

$$A\cos(kL) = C e^{-\kappa L}, \qquad -Ak\sin(kL) = -C\kappa\, e^{-\kappa L}.$$

These can be combined to give the even eigenvalue condition (which depends on the energies but not on the constants $A$ and $C$) [6]:

$$k\tan(kL) = \kappa.$$

The odd eigenvalue condition is found to be

$$k\cot(kL) = -\kappa.$$

The energy levels of the bound states are found by solving these transcendental equations, either graphically or numerically [1]. In doing so, it is helpful to change to dimensionless variables. We introduce the dimensionless quantities [1]

$$\xi = kL, \qquad \eta = \kappa L.$$

Upon substituting these quantities, the transcendental equations become

$$\eta = \xi\tan\xi \quad \text{(even)}, \qquad \eta = -\xi\cot\xi \quad \text{(odd)}.$$

The graphical determination of the energy levels is obtained by finding the points of intersection of these curves with the circle (see Graph 2.1) [1]

$$\xi^2 + \eta^2 = \gamma^2,$$

whose radius $\gamma$ is known (found by simple substitution):

$$\gamma = \frac{L\sqrt{2mV_0}}{\hbar}.$$

Graph 2.1.1 The left graph shows energy levels for even states, while the right graph represents odd states.

What can be concluded from this figure?
• The bound-state energy levels are non-degenerate. [1]
• The number of bound-state energy levels is finite and depends on the parameter $\gamma$. Thus, deeper and wider potentials have a larger number of bound states. [6]
• The bound state spectrum consists of alternating even and odd states, with the ground state always being even.
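These transcendental equations are straightforward to solve numerically. The sketch below is an editor's illustration, not part of the original text; γ = 5 is an arbitrary choice, and each root ξ maps back to a binding energy via |E| = V0 (1 − ξ²/γ²).

import numpy as np
from scipy.optimize import brentq

gamma = 5.0  # dimensionless well-strength parameter (arbitrary choice)

def f_even(xi):
    # intersection of eta = xi*tan(xi) with the circle xi^2 + eta^2 = gamma^2
    return xi * np.tan(xi) - np.sqrt(gamma**2 - xi**2)

def f_odd(xi):
    # intersection of eta = -xi*cot(xi) with the same circle
    return -xi / np.tan(xi) - np.sqrt(gamma**2 - xi**2)

def roots(f, n=20000):
    xs = np.linspace(1e-6, gamma - 1e-6, n)
    vals = np.array([f(x) for x in xs])
    found = []
    for a, b, fa, fb in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
        # keep genuine sign changes; skip the jumps at the poles of tan/cot
        if fa * fb < 0 and abs(fa) + abs(fb) < 100.0:
            found.append(brentq(f, a, b))
    return np.array(found)

print("even-state roots xi:", np.round(roots(f_even), 4))
print("odd-state  roots xi:", np.round(roots(f_odd), 4))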
2.2 Case 2. Unbound State

In the unbound case, the wavefunctions are not localized in the vicinity of the well [10], and the energy eigenvalues form a continuum. In this problem, a particle is incident upon the well from the left, interacts with the well, and gets transmitted or reflected. The solution of the Schrödinger equation is outlined briefly. Note that in one dimension there is a lack of symmetry between the external regions to the left and right of the potential, since the particle is assumed to be incident on the potential in a given direction [6]. Therefore, there is no need to exercise parity. For the external regions, the solution of the TISE is given by [1]

$$\psi(x) = \begin{cases} A e^{ikx} + B e^{-ikx}, & x < -L \\ C e^{ikx}, & x > L, \end{cases} \qquad k = \frac{\sqrt{2mE}}{\hbar}.$$

In the region $x < -L$, the wavefunction is seen to consist of an incident wave of amplitude $A$ and a reflected wave of amplitude $B$. In the region $x > L$, the wavefunction is seen as a pure transmitted wave of amplitude $C$ [1]. For the internal region, the solution of the TISE is given by [1]

$$\psi(x) = F e^{ik'x} + G e^{-ik'x}, \qquad k' = \frac{\sqrt{2m(E + V_0)}}{\hbar}, \qquad |x| < L.$$

By applying continuity at $x=L$ and $x=-L$, $F$ and $G$ are eliminated and the ratios $B/A$ and $C/A$ can be solved for to obtain the reflection coefficient, $R=|B/A|^2$, and the transmission coefficient, $T=|C/A|^2$. An important aspect to briefly point out is that the transmission coefficient is generally less than unity, in contradiction to the classical prediction that the particle should always be transmitted [1] (a numerical sketch of $T(E)$ follows below).

To summarize, the major differences between a particle in a finite box and an infinite well are:
• Only a finite number of energy levels exist (bound state)
• Tunneling into the barrier (wall) is possible
• Higher energy states are less tightly bound than lower ones
• A particle provided with enough energy can escape the well (unbound state)

Figure 2.2.1 Comparison of infinite and finite well.
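As flagged above, here is a small numerical sketch of the transmission coefficient. This is an editor's illustration using the standard textbook closed form for scattering across a square well of depth V0 and width 2L (consistent with the conventions of this section); the electron mass and the particular V0 and L values are arbitrary assumptions.

import numpy as np

hbar = 1.054571817e-34   # J*s
m = 9.1093837015e-31     # electron mass, kg (assumed)
eV = 1.602176634e-19     # J per eV

V0 = 10.0 * eV           # well depth (assumed)
L = 0.5e-9               # half-width: the well extends from -L to L (assumed)

def T(E):
    # Standard closed form for a square well of width 2L:
    # 1/T = 1 + V0^2 sin^2(2 k' L) / (4 E (E + V0)), k' = sqrt(2m(E + V0))/hbar
    kp = np.sqrt(2.0 * m * (E + V0)) / hbar
    return 1.0 / (1.0 + V0**2 * np.sin(2.0 * kp * L)**2 / (4.0 * E * (E + V0)))

for E_eV in (0.5, 1.0, 2.0, 5.0):
    print(f"E = {E_eV:4.1f} eV -> T = {T(E_eV * eV):.4f}")

# T = 1 exactly at the transmission resonances sin(2 k' L) = 0
# (the Ramsauer-Townsend effect); otherwise T < 1, unlike the classical case.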
3. Variations of 1-D Potentials

The infinite and finite square wells presented here are amongst the most fundamental of 1-D potentials. Table 3.1 [2] summarizes popular systems that are often studied, and provides a physical example of each, as well as significant features pertaining to each system.

Table 3.1

4. Relevance of Potential Well Problems

Potential well problems serve as simple, yet instructive, models for quantum phenomena. Tunneling, or the penetration of barriers, as portrayed in wells of finite potential, is one of the most interesting aspects of quantum mechanics. It is seen in real-world applications such as field emission, scanning tunneling microscopy, alpha particle decay of nuclei, and nuclear fusion reactions [2].

1. Bransden, B.H.; Joachain, C.J. Introduction to Quantum Mechanics. London: Longman Scientific and Technical, pp. 158-164, 1995.
2. Eisberg, R.; Resnick, R. Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles. New York: John Wiley & Sons, Inc., pp. 232-244, 1974.
3. Green, N.J.B. Quantum Mechanics 1: Foundations. Oxford: Oxford Science Publications, p. 40, 1997.
4. Lindsay, P.A. Introduction to Quantum Mechanics for Electrical Engineers. Berkshire: McGraw-Hill Publishing, pp. 51-59, 1967.
5. Pohl, H.A. Quantum Mechanics for Science and Engineering. New Jersey: Prentice-Hall, Inc., pp. 38-41, 1967.
6. Robinett, R.W. Quantum Mechanics. New York: Oxford University Press, Inc., pp. 178-184, 1997.
7. Tipler, P.A. Elementary Modern Physics. New York: MacMillan, pp. 96-98, 1992.
Topological Matter in Artificial Gauge Fields

Sonic Landau levels and synthetic gauge fields in mechanical metamaterials
Abbaszadeh, Hamed
Mechanical strain can lead to a synthetic gauge field that controls the dynamics of electrons in graphene sheets as well as light in photonic crystals. Here, we show how to engineer an analogous synthetic gauge field for lattice vibrations. Our approach relies on one of two strategies: shearing a honeycomb lattice of masses and springs or patterning its local material stiffness. As a result, vibrational spectra with discrete Landau levels are generated. Upon tuning the strength of the gauge field, we can control the density of states and transverse spatial confinement of sound in the metamaterial. We also show how this gauge field can be used to design waveguides in which sound propagates with robustness against disorder as a consequence of the change in topological polarization that occurs along a domain wall. By introducing dissipation, we can selectively enhance the domain-wall-bound topological sound mode, a feature that may potentially be exploited for the design of sound amplification by stimulated emission of radiation (SASERs, the mechanical analogs of lasers).

Probing quantum turbulence in He II by quantum evaporation measurements
Amelio, Ivan
In superfluid 4He, due to strong interactions, the density profile of a vortex line, as computed with Quantum Monte Carlo, deviates from what is predicted by Gross–Pitaevskii (GP) mean-field theory. We find that the basic features of this density modulation are recovered in wave packets of a single rotonic excitation. This suggests correcting the current GP-based view of a vortex reconnection event as a source of phonon waves by including the emission of rotons. Low-temperature quantum evaporation experiments should be able to detect these non-thermal rotons.

Topology and dynamics in driven hexagonal lattices
Asteria, Luca
Ultracold atoms are a versatile system for studying the fascinating phenomena of gauge fields and topological band structures. By Floquet driving of optical lattices, the topology of the Bloch bands can be engineered. In this poster, we present experimental schemes for momentum-resolved Bloch state tomography, which allow mapping out the Berry curvature and obtaining the Chern number. Furthermore, we discuss the dynamics of the wave function after a quench into the Floquet system. We observe the appearance of dynamical vortices, which trace out a closed contour, the topology of which can be directly mapped to the Chern number. Our measurements provide a new perspective on topology and dynamics and a unique starting point for studying interacting topological phases.

Spin-orbit coupling in a Bose-Einstein condensate: Triple-well in momentum space
Cabedo Bru, Josep
Spin-orbit (SO) coupling, which links a particle's spin to its motion, has a crucial role in the electronic properties of many condensed matter systems, and it is at the basis of phenomena such as the spin Hall effect and topological insulators. The high level of control of ultracold atoms makes them ideal candidates for engineering spin-orbit coupling in neutral systems [1]. Here we show that by dressing three atomic internal states of a Bose-Einstein condensate (BEC) with two pairs of lasers in a double Raman configuration, the three atomic spin states of the BEC become coupled and a triple well in the 2D lowest band of the single-atom dispersion relation is obtained.
The distance between the centres, the heights of the barriers, and the energy bias of the triple well in momentum space can be engineered by an appropriate manipulation of the laser intensities and detunings, while tunneling in momentum space is induced by the external trapping potential. Interaction-dependent quantum phase transitions of the BEC ground state in such a triple-well potential in momentum space are predicted.
[1] Y. Zhang, M. E. Mossman, Th. Busch, P. Engels, and C. Zhang, Frontiers of Physics 11, 118103 (2016).

Tailoring the Fermi velocity in 2D Dirac Materials
Díaz Fernández, Álvaro
Motivation: Previous works aiming to modify the Fermi velocity in Dirac materials require cumbersome setups [1-3]. It is thus desirable to find new ways to tune this fundamental parameter. Our proposal is to embed different Dirac materials in a uniform electric field, something readily achievable in experiments.
Systems: Topological crystalline insulator/semiconductor interface, armchair graphene nanoribbons and carbon nanotubes.
Main result: The Fermi velocity is significantly reduced with increasing transverse electric field in Dirac materials. This result has been tested via continuum (Dirac equation), tight-binding and ab initio approaches [4-5].
[1] G. Li et al., Nat. Phys. 6, 109 (2010). [2] C. Hwang et al., Sci. Rep. 2, 590 (2012). [3] D. C. Elias et al., Nat. Phys. 7, 701 (2011). [4] A. D. F. et al., Scientific Reports 7, 8058 (2017). [5] A. D. F. et al., Physica E 93, 230 (2017).

A new machine for a dysprosium experiment
Du, Li
We introduce the scientific goals, the engineering progress and the new technology being developed at the new dysprosium lab being built at MIT.

Exact Edge and Bulk States of Topological Models and their Robustness Against an Impurity
Duncan, Callum
When considering topological states we usually use the bulk-edge correspondence to look for their existence. In this work we will not use the bulk-edge correspondence; instead, we will construct both the edge and bulk eigenfunctions analytically. It is known that the bulk states of certain topological models can be constructed via Bloch's theorem. We will discuss a general approach to constructing the bulk states of a finite system in one dimension. Then, by extending Bloch's theorem, we construct the exact edge state eigenfunctions. We fully prescribe a method of obtaining the form and properties of the edge and bulk eigenfunctions for a given one-dimensional periodic model. We extend the method to two dimensions by considering the dimensionally separable Hofstadter model. We also show that this method can be utilized to consider the robustness of a model to a static impurity localized at the edge or, in the two-dimensional case, a line defect across the edge. We observe that the presence of a single edge impurity can have a drastic effect on the edge state of a system. On increasing the impurity strength, for certain models, the topological edge state can be replaced (or joined) by a trivial bound state of the impurity, with an energy of the order of the impurity strength.

Enhanced chiral anomaly in the Floquet Schwinger model
Ebihara, Shu
Controlling quantum states by temporally periodic driving is actively studied with the use of Floquet theory. In such driven systems we can realize exotic properties which cannot be exhibited in undriven equilibrium states. In this study we analyze the Schwinger model, (1+1)-dimensional quantum electrodynamics (QED), under a temporally periodic electric field.
Since the Schwinger model is a relativistic theory with fermions, we can expect a chiral anomaly. We show that the periodic external field plays a role in shifting the energy dispersions oppositely for right- and left-handed fermions, which is nothing but the spectral flow nature of the chiral anomaly, and that this leads to a temporally oscillating chiral condensate.

Synthetic dimensions and chiral currents with spin-orbit-coupled two-electron ultracold fermions
Franchi, Lorenzo
We report on two different approaches to the quantum simulation of Hall-like systems subjected to an artificial gauge field. Adopting an innovative scheme, we engineer a hybrid two-dimensional lattice characterized by a "real" dimension, provided by a 1D optical lattice, and a "synthetic" dimension encoded in the internal degrees of freedom of 173Yb. In the first experiment [1] the synthetic dimension is mapped out by performing a Raman coupling between the hyperfine states (F = 5/2) of the ground state of 173Yb. In this kind of experimental setup we observed chiral edge states, as well as their "skipping" trajectories. In the second major experiment [2] we demonstrate a new method to synthesize spin-orbit interaction which exploits the ultranarrow clock transition between the 1S0 and the long-lived 3P0 state in degenerate 173Yb atoms. For the first time we characterize the dependence of the amplitude of the chiral current on the magnetic flux, providing direct evidence of the inversion of the chiral current sign when the magnetic flux increases above π. In the second experiment the presence of spin-orbit coupling has been detected by means of clock-transition spectroscopy, as proposed in [3], also taking advantage of a 642 km-long optical fiber link infrastructure connecting LENS to the Italian National Metrology Institute (INRiM).

Realizing and detecting a topological insulator in the AIII symmetry class
García Velasco, Carlos
Topological insulators in the AIII symmetry class lack experimental realization. Moreover, fractionalization in one-dimensional topological insulators has not yet been directly observed. Our work might open possibilities for both challenges. We propose a one-dimensional model realizing the AIII symmetry class which can be realized in current experiments with ultracold atomic gases. We further report on a distinctive property of topological edge modes in the AIII class: in contrast to those in the well-studied BDI class, they have non-zero momentum. Exploiting this feature we propose a path for the detection of fractionalization. A fermion added to an AIII system splits into two halves localized at opposite momenta, which can be detected by imaging the momentum distribution.

Real and imaginary part of the conductivity of strongly interacting bosons in optical lattices
Grygiel, Barbara
Optical lattices filled with ultra-cold atomic gases can be thought of as a counterpart of solid state systems, where the optical lattice plays the role of the ionic potential, while the ultra-cold atoms act as the charge carriers. Recent developments in experimental techniques have allowed the investigation of correlation functions and transport phenomena in such systems. We study the Bose-Hubbard model in the quantum rotor approach, which allows us to take into account spatial dependencies, such as dimensionality, lattice geometry, and the influence of gauge potentials. We calculate the conductivity of bosons in a two-dimensional lattice in a synthetic magnetic field. In such a scenario, two types of conductivity can be distinguished: intra- and inter-band.
The interband contribution, usually omitted in analyses of multiband systems, appears to play a crucial role in the transport properties, as its values are a few orders of magnitude greater than the intraband one.

Topological Phases in Ultracold Fermionic Ladders
Haller, Andreas
Inspired by the recent experimental advances in the study of ultracold atoms trapped in optical lattices, we consider models of fermions hopping in ladder geometries and subject to artificial magnetic fluxes, such as [1, 2, 3]. By applying the concept of resonances in chiral currents [2], we find a parameter (the momentum component of the current in Fourier space) distinguishing between trivial and quantum Hall (QH) phases in non-interacting cases. We aim for evidence of fractional QH phases: in the case of nearest-neighbor Hubbard interactions, we identify a gap in the spin sector of the corresponding Luttinger liquid, leading to a resonant state at fractional filling factor ν = 1/2. We support our analytic results with matrix product state (MPS) simulations [3].
References: [1] L. Mazza, M. Burrello et al., New J. Phys. Volume 17, 105001 (2015) [2] E. Cornfeld and E. Sela, Phys. Rev. B Volume 92, 115446 (2015) [3] A. Haller, M. Rizzi and M. Burrello, arXiv:1707.05715 (2017)

Characterizing interacting topological states of matter via charge pumps and single-particle topological invariants
Hayward, Andrew
Charge pumps in 1D systems can be used to probe the topology of 2D systems by associating a cyclic Hamiltonian parameter with an artificial quasi-momentum. We use this mapping to investigate topological phase transitions in the presence of interactions.

Transport in optical lattices with flux
Hudomal, Ana
Recent cold atom experiments have realized artificial gauge fields in periodically modulated optical lattices [1,2]. We study the dynamics of atomic clouds in these systems by performing numerical simulations using the full time-dependent Hamiltonian and comparing these results to the semiclassical approximation. Under a constant external force, atoms in optical lattices with flux exhibit an anomalous velocity in the transverse direction. We investigate in detail how this transverse drift is related to the Berry curvature and Chern number, taking into account realistic experimental conditions.
[1] G. Jotzu et al., Nature 515, 237 (2014). [2] M. Aidelsburger et al., Nature Phys. 11, 162 (2015).

Time-periodic driving of spinor condensates in a hexagonal optical lattice
Ilin, Alexander

Local topological invariant of the Interacting Hofstadter Interface
Irsigler, Bernhard

Analogue black hole in coupled pseudo-spin-1/2 bosons
Kaur, Inderpreet
Quantum fluids such as ultracold condensates of bosonic atoms have long been suggested as important candidates for a sonic black hole. The existence of such analogue black holes, their event horizons, and the related Hawking radiation was recently confirmed experimentally. In this work we report a study of such a sonic black hole in pseudo-spin-1/2 bosons, the related modification of the sonic horizon, as well as the analogue space-time metric.

Exactly Solvable Topological Edge, Surface, Corner and Hinge States from Destructive Interference
Kunst, Flore
The main feature of topological phases is the presence of robust boundary states, which appear, for example, in the form of chiral edge states in Chern insulators and open Fermi arcs on the surfaces of Weyl semimetals. Recently, new higher-order topological phases were proposed in the form of corner and hinge states.
Even though noninteracting topological systems can be straightforwardly described by fully periodic systems, the understanding of the corresponding boundary states has almost exclusively relied on numerical studies. We devised a generic recipe for constructing D-dimensional lattice models whose d-dimensional boundary states, located on edges, surfaces, corners, hinges and so forth, can be obtained exactly. The solvability of these states is rooted in the underlying lattice structure and does not as such depend on fine-tuning, which allows us to track their evolution throughout various phases and across phase transitions. On my poster, I present the generic method with which to find these exact solutions and provide explicit examples of chiral edge states, Fermi arcs, corner states and topologically protected hinge states. This is based on Phys. Rev. B 96, 085443 (2017) and arXiv:1712.07911.

Observation of the Higgs mode in a strongly interacting fermionic superfluid
Link, Martin
Higgs and Goldstone modes are possible collective modes of an order parameter upon spontaneously breaking a continuous symmetry. Whereas the low-energy Goldstone (phase) mode is always stable, additional symmetries are required to prevent the Higgs (amplitude) mode from rapidly decaying into low-energy excitations. In high-energy physics, where the Higgs boson has been found after a decades-long search, the stability is ensured by Lorentz invariance. In the realm of condensed-matter physics, particle-hole symmetry can play this role, and a Higgs mode has been observed in weakly interacting superconductors. However, whether the Higgs mode is also stable for strongly correlated superconductors, in which particle-hole symmetry is not precisely fulfilled, or whether this mode becomes overdamped, has been the subject of numerous discussions. Experimental evidence is still lacking, in particular owing to the difficulty of exciting the Higgs mode directly. Here, we observe the Higgs mode in a strongly interacting superfluid Fermi gas. By inducing a periodic modulation of the amplitude of the superconducting order parameter $\Delta$, we observe an excitation resonance at frequency $2\Delta/h$. For strong coupling, the peak width broadens and eventually the mode disappears when the Cooper pairs turn into tightly bound dimers, signalling the eventual instability of the Higgs mode.

Exploring exotic orders by simple degrees of freedom coupled to a gauge theory
Liu, Ke
In condensed matter physics, gauge theories are often considered an "emergent" phenomenon. They appear as an effective description of the collective behavior of some interacting systems at low energy. However, we could also reverse this methodology: taking gauge theories and some other degrees of freedom as initial inputs may give rise to exotic orders that correspond to the collective behavior of some underlying model. That is, instead of the gauge theory, the order is emergent. In this presentation, I will attempt to give examples of this scenario by considering ordinary $O(n)$ rotors coupled with, typically discrete, gauge theories. This produces various "emergent" orders. I will discuss the meaning of these orders from the perspective of statistical physics, in the hope that they can be made realistic given the rapid development of engineered artificial gauge fields.

Cesium solitons
Mežnaršič, Tadej
When a non-interacting Bose-Einstein condensate is confined to a quasi-one-dimensional channel, it will spread due to dispersion, as dictated by the Schrödinger equation.
The spreading rate can be affected by changing the interaction between the atoms via a Feshbach resonance. If the interaction is set to just the right value, the attraction between atoms exactly compensates the dispersion. In this case the BEC doesn't spread, and we get a bright matter-wave soliton. The maximum number of atoms in a soliton is limited by the frequency of the channel and the interaction between atoms. By setting the inter-atom interaction to different attractive values, we are able to create soliton trains with different numbers of solitons from elongated BECs.

Fractional quantum Hall physics in lattice systems
Nielsen, Anne Ersbak Bang
The fractional quantum Hall effect, which can be realized in certain two-dimensional systems at low temperature and high magnetic field, leads to many interesting properties, such as the possibility of having anyonic quasiparticles that are neither bosons nor fermions. There is currently much interest in investigating the possibilities for having fractional quantum Hall physics in lattice systems, both because it may lead to new ways to realize the effect, and because the lattice gives rise to new features and opportunities. Here, we propose a quite general approach based on conformal field theory to obtain lattice fractional quantum Hall models. The models have analytical ground states, and we use Monte Carlo simulations to compute, e.g., topological entanglement entropies and the shape and statistics of anyons. We also discuss how one can interpolate between lattice and continuum fractional quantum Hall models, and propose a scheme to implement a related model with ultracold atoms in optical lattices.

High-frequency analysis of periodically driven quantum systems with slowly varying amplitude
Novičenko, Viktor
We consider a quantum system periodically driven with a strength which varies slowly on the scale of the driving period. The analysis is based on a general formulation of Floquet theory relying on the extended Hilbert space. It is shown that the dynamics of the system can be described in terms of a slowly varying effective Floquet Hamiltonian that captures the long-term evolution, as well as rapidly oscillating micromotion operators. We obtain a systematic high-frequency expansion of all these operators. Generalizing previous studies, the expanded effective Hamiltonian is now time-dependent and contains extra terms appearing due to changes in the periodic driving. The same applies to the micromotion operators, which exhibit a slow temporal dependence in addition to the rapid oscillations. As an illustration, we consider a quantum-mechanical spin in an oscillating magnetic field with a slowly changing direction. The effective evolution of the spin is then associated with non-Abelian geometric phases reflecting the geometry of the extended Floquet space. The developed formalism is general and also applies to other periodically driven systems, such as shaken optical lattices with a time-dependent shaking strength, a situation relevant to cold atom experiments.

Versatile detection scheme for topological Bloch-state defects
Nuske, Marlon
The dynamics in solid state systems is governed not only by the band structure but also by topological defects of the eigenstates. A paradigmatic example is provided by the Dirac points in graphene. For this system, with its two-atom basis, the linear dispersion relation at the Dirac points is accompanied by a vortex of the azimuthal phase of the eigenstates.
In a time-of-flight (ToF) expansion the eigenstates interfere, and the resulting signal contains information about the azimuthal phase. We present a versatile detection scheme that uses off-resonant lattice modulation to extract the azimuthal phase from the ToF signal. This detection scheme is applicable to a variety of two-band systems and can be extended to general multi-band systems.

Competing quantum phases in the disordered Bose-Hubbard model
Pal, Sukla
The effect of disorder on the zero-temperature phase diagram of the two-dimensional Bose-Hubbard model has been studied in the presence of an artificial gauge field. Employing single-site Gutzwiller mean-field theory, we incorporate the effect of disorder, which reveals a Bose glass phase that impedes the direct transition from the Mott insulator to the superfluid phase. Incorporating nearest-neighbour interactions, at nearest-neighbour strength $V_N = 0.02$ the density wave states first start to appear. Applying disorder in this regime shows the coexistence of Bose glass and disordered solid phases, depending on the nature and distribution of the disorder. Furthermore, we report the effect of a synthetic magnetic field on the Bose glass phase.

Edge states in bosonic honeycomb lattices
Pantaleon Peralta, Pierre Anthony
We investigate the properties of magnon edge states in a ferromagnetic honeycomb lattice with zig-zag, bearded and armchair boundaries. In contrast with fermionic graphene, we find novel edge states due to the missing bonds along the boundary sites. After introducing an external on-site potential at the outermost sites, we find that the energy spectra of the edge states are tunable. Additionally, when a non-trivial gap is induced, we find that some of the edge states are topologically protected and also tunable. Our results may explain the origin of the novel edge states recently observed in photonic lattices.

Optical Hall conductivity of the Haldane-Bose-Hubbard model
Patucha, Konrad
We study ultra-cold bosonic atoms in optical lattices with gauge potentials. In order to describe these systems, we use the Bose-Hubbard model in the quantum rotor approximation. This allows us to include the influence of spatial correlations, which is necessary for a correct description of lattices with non-zero Chern number, such as the Haldane model. We calculate the optical Hall conductivity and present its dependence on the temperature and model parameters. We identify two main transport channels and the excitations related to them. The results show that the spectral properties of the Berry curvature influence the transverse transport.

Transition in traps of different shapes in a system of a few ultra-cold fermions
Pęcak, Daniel
The ground-state properties of a few spin-1/2 fermions with different masses, interacting via short-range contact forces, are studied within an exact diagonalization approach. It is shown that, depending on the shape of the external confinement, different scenarios of spatial separation between the components, manifested by specific shapes of the density profiles, can be obtained in the strong interaction limit. We find that the ground state of the system undergoes a specific transition between orderings when the confinement is changed adiabatically from a uniform box to a harmonic oscillator shape. We study the properties of this transition in the framework of the finite-size scaling method adapted to few-body systems.
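Several of the abstracts above (Hudomal; Patucha) lean on the Berry curvature and Chern number of lattice bands. As a self-contained illustration (an editor's sketch, not taken from any of the posters; the Haldane-model parameters below are arbitrary choices inside its topological phase), the Fukui-Hatsugai-Suzuki link-variable method computes the Chern number from eigenstates on a discretized Brillouin zone. For these parameters it should return a Chern number of magnitude 1 (the sign depends on orientation conventions).

import numpy as np

# Haldane model on the torus, parametrized by angles th = (k.a1, k.a2).
# Topological phase requires |M| < 3*sqrt(3)*t2*|sin(phi)|.
t1, t2, phi, M = 1.0, 0.1, np.pi / 2, 0.2

def hamiltonian(th1, th2):
    f = t1 * (1.0 + np.exp(1j * th1) + np.exp(1j * th2))   # NN term, periodic gauge
    kv = np.array([th1, th2 - th1, -th2])                  # k . (NNN vectors)
    haa = 2.0 * t2 * np.sum(np.cos(kv + phi)) + M
    hbb = 2.0 * t2 * np.sum(np.cos(kv - phi)) - M
    return np.array([[haa, f], [np.conj(f), hbb]])

def lower_band_state(th1, th2):
    _, v = np.linalg.eigh(hamiltonian(th1, th2))
    return v[:, 0]                                         # lower-band eigenvector

N = 40
ths = 2.0 * np.pi * np.arange(N) / N
u = np.array([[lower_band_state(a, b) for b in ths] for a in ths])

def link(u1, u2):
    z = np.vdot(u1, u2)
    return z / abs(z)       # U(1) link variable; gauge choices cancel per plaquette

C = 0.0
for i in range(N):
    for j in range(N):
        ip, jp = (i + 1) % N, (j + 1) % N
        F = np.angle(link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                     / (link(u[i, jp], u[ip, jp]) * link(u[i, j], u[i, jp])))
        C += F
print("Chern number of the lower band:", round(C / (2.0 * np.pi)))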
Pelegrí, Gerard
Recent theoretical and experimental studies have shown that it is possible to simulate artificial magnetic fields with ultracold atoms in optical lattices [1]. In particular, the possibility of implementing chiral, topologically protected edge states analogous to those found in the context of quantum Hall physics has been demonstrated both for fermionic and bosonic atoms [2,3]. In this work, we propose an alternative strategy to implement robust edge-like states (ELS) with an ultracold atom carrying orbital angular momentum (OAM) in a diamond-chain optical lattice. The existence of these states is due to quantum interference effects, and they can be intuitively constructed as combinations of three-site spatial dark states (SDS). These states are very robust against different types of defects [4] and form a zero-energy flat band. For states with one unit of OAM, the l=1 case, the tunneling amplitudes depend both on the spatial localization and the winding number of the local states, and they may become complex depending on the relative position of the sites [5]. The ELS implemented in this manifold can display global chirality. In addition, the angular momentum degree of freedom opens a gap in the band structure that is not present in the absence of OAM, resembling the effect of a net flux through the plaquettes [6]. Finally, in the limit of unit filling and strong interactions, we study the mapping of the system onto a spin-1/2 model with two-body nearest-neighbour interactions [7].
References [1] M. Aidelsburger, S. Nascimbene, N. Goldman, arXiv 1710.00851. [2] M. Mancini, G. Pagano, G. Cappellini, L. Livi, M. Rider, J. Catani, C. Sias, P. Zoller, M. Inguscio, M. Dalmonte, and L. Fallani, Science 349, 1510-1513 (2015). [3] B. K. Stuhl, H.I. Lu, L.M. Aycock, D. Genkina, and I.B. Spielman, Science 349, 1514-1518 (2015). [4] G. Pelegrí, J. Polo, A. Turpin, M. Lewenstein, J. Mompart, and V. Ahufinger, Phys. Rev. A 95, 013614 (2017). [5] J. Polo, J. Mompart, and V. Ahufinger, Phys. Rev. A 93, 033613 (2016). [6] A. A. Lopes and R. G. Dias, Phys. Rev. B 84, 085124 (2011). [7] G. Pelegrí et al., in preparation.

A Versatile Strontium Quantum Gas Machine with a Microscope
Piatchenkov, Sergei
Strontium opens new perspectives for Hamiltonian engineering because it is an alkaline-earth element with narrow intercombination lines, metastable excited electronic states, and ten collisionally stable SU(N)-symmetric nuclear spin states. We have built a new versatile Sr machine with quantum gas microscope capability. After precooling on a broad blue transition, we collect 10^7 atoms at 2 µK in a narrow-line red MOT, load them into a 1064 nm dipole trap, and evaporatively cool them to obtain either a BEC or a degenerate Fermi gas of ~10^5 atoms. We have now also observed for the first time the doubly forbidden 1S0 - 3P2 transition in 87Sr by direct laser excitation, which opens up possibilities for quantum computation and gauge field engineering.

Non-monotonic response and Klein-Gordon physics in gapless-to-gapped quantum quenches of one-dimensional free fermionic systems
Porta, Sergio
The properties of prototypical examples of one-dimensional free fermionic systems undergoing a sudden quantum quench between a gapless state, characterized by a linear crossing of the energy bands, and a gapped state are analyzed.
By means of a generalized Gibbs ensemble analysis, we observe an anomalous non-monotonic response of steady-state correlation functions as a function of the strength of the mechanism opening the gap. In order to interpret this result, we calculate the full dynamical evolution of these correlation functions. We show that the latter is governed by a Klein-Gordon equation with a mass related to the gap-opening mechanism and an additional source term, which depends on the gap as well. The competition between the two terms explains the presence of the non-monotonic behavior. We conclude by arguing for the stability of the phenomenon in the cases of non-sudden quenches and higher dimensionality.

Charge fractionalization in small fractional-Hall samples
Račiūnas, Mantas
The discovery of the fractional quantum Hall effect (FQHE) in the 2D electron gas gave rise to immense interest in topological phases of matter. One of the most intriguing features of the FQH state is its fractionally charged excitations, which embody anyonic statistics. Nowadays, experiments in optical lattices allow a much more controllable study of many-body systems, giving access to regimes that are impossible to realise in semiconductor-based experiments. Historically, the FQHE comes from condensed matter systems, which are characterized by a very large number of particles; as a consequence, theoretical studies focused only on infinite or periodic Hamiltonians. However, a few unanswered questions remain: can FQHE states be realised in minuscule lattices, containing only several sites in diameter, and what additional effects would open boundaries produce? These questions are interesting not only from the fundamental point of view, but are also crucial for the design of an experiment in optical lattices. Using numerical diagonalization of the interacting Harper-Hofstadter Hamiltonian, we were able to observe localisation of fractional charge excitations in a square lattice using two different techniques.

Driven-dissipative phase transitions of open quantum systems from a Floquet-Liouville perspective
Reimer, Viktor
The study of driven-dissipative open quantum systems has prompted the emergence of a plethora of interesting new physics inaccessible to their equilibrium counterparts [S. Diehl et al., Nat. Phys. 4, 878 (2008)]. Combining Floquet's theorem with the general Liouvillian approach to open quantum systems [M. Grifoni and P. Hänggi, Phys. Rep. 304, 229 (1998)] provides powerful tools to investigate such systems beyond the adiabatic limit. Here, we present a general method to calculate the quasistationary state of a driven-dissipative system coupled to a transmission line (and, more generally, to a reservoir) with arbitrary coherent driving strength and modulation frequency of the system parameters. Applying this method, we extend our previous results based on Floquet scattering theory [M. Pletyukhov et al., Phys. Rev. A 95, 043814 (2017)] for a two-level system with time-dependent parameters, which show the breakdown of the adiabaticity condition even for a slow time modulation. Secondly, we apply our method to a driven Lambda-system exhibiting electromagnetically induced transparency (EIT) and observe how the time modulation modifies the latter phenomenon. Our focus, however, lies on the third application - the single-mode Kerr nonlinearity model - where driving is considered across the point of the dissipative phase transition [A. Le Boite et al., Phys. Rev. A 95, 023829 (2017)].
The poster discusses the behaviour of observables in the quasistationary regime, going beyond the range of driving parameters studied previously.

Attractive fermions in a 2D optical lattice with spin-orbit coupling: Charge order, superfluidity, and topological signatures
Rosenberg, Peter
Exotic states of matter, including high-Tc superconductors and topological phases, have long been a focus of condensed matter physics. With the recent advent of artificial spin-orbit coupling in ultracold gases, and the remarkable experimental control and enhanced interactions provided by optical lattices, a broad range of novel strongly correlated systems is quickly becoming experimentally accessible. One system of particular interest, given its potential impact on spintronics and quantum computation, is the attractive Fermi gas with spin-orbit coupling in a 2D optical lattice. Here we examine the combined effects of Rashba spin-orbit coupling and interaction in this system, with particular focus on the unique pairing, charge, and spin properties of the ground state, which is computed using the numerically exact auxiliary-field quantum Monte Carlo technique. We also study the behavior of edge currents, which are a potential precursor of various topological phenomena, such as Majorana fermions. In addition to illuminating the behavior of this exotic charge-ordered superfluid state, our results serve as high-accuracy benchmarks for the coming generation of precision experiments with ultra-cold gases. Finally, we provide an outlook on future directions, including the addition of a Zeeman field to induce a spin polarization, in order to investigate finite-momentum pairing states and topological superconductivity.

Design and characterization of a quantum heat pump in a driven quantum gas
Roy, Arko
We propose a novel scheme for a quantum heat pump powered by rapid time-periodic driving. We focus our investigation on a system consisting of two coupled driven quantum dots in contact with fermionic reservoirs at different temperatures. Such a configuration can be realized in a quantum-gas microscope. Theoretically, we characterize the device by describing the coupling to the reservoirs using the Floquet-Born-Markov approximation.

A time-dependent variational analysis of lattice gauge theories
Sala, Pablo
Fermionic Gaussian states are completely characterized by their two-point correlation functions. These are collected in the so-called covariance matrix, which then becomes the main object in their description. We derive a time-dependent variational description of (1+1)-dimensional gauge theories using the framework of lattice gauge theories as well as fermionic Gaussian states. We compare our results to those previously obtained via matrix product states for ground-state properties and real-time dynamics. Specifically, we investigate the phase transition between the string and string-breaking phases, among other properties, in the massive Schwinger model and other non-Abelian generalizations.

Manipulating spin correlations in a periodically driven many-body system
Sandholzer, Kilian
Periodic driving can be used to coherently control the properties of a many-body state and to realize new phases which are not accessible in static systems. In this context, cold fermions in optical lattices provide a highly tunable platform to investigate driven many-body systems and additionally offer the prospect of quantitative comparisons to theoretical predictions.
We implement a driven Fermi-Hubbard model by periodically modulating a 3D hexagonal lattice. In the regime where the drive frequency is much higher than all other relevant energy scales, we verify that the interacting system can be described by a renormalized tunneling. Furthermore, we achieve independent control over the single-particle tunneling and the magnetic exchange energy by driving near-resonantly with the interaction. As a consequence, we are able to show that anti-ferromagnetic correlations in a fermionic many-body system can be enhanced or even switched to ferromagnetic correlations. The implementation of more complex modulation schemes opens the possibility of combining the physics of artificial gauge fields and strongly correlated systems.

High-temperature nonequilibrium Bose condensation induced by a hot needle
Schnell, Alexander

Interacting Topological Insulators in 1D Superlattices
Stenzel, Leo
Without interactions, 1D charge pumps can be mapped onto 2D topological systems. The 1D superlattice then corresponds to the transversal kinetic energy. 1D charge pumps are readily realized experimentally. We add repulsive 1D interactions of fermions and find topologically non-trivial Mott insulators and band insulators. The latter exhibit a topological phase transition which can be understood with an effective 1D model for strong superlattices and interactions.

Topological properties and many-body phases of synthetic Hofstadter strips
Tirrito, Emanuele

Creating local topological excitations in quantum gas microscopes
Ünal, F. Nur
The idea of inserting a local magnetic flux, representing the field of a thin solenoid, plays an important role in various condensed matter models, especially in the understanding of topological systems. One example is the creation and manipulation of quasiparticle or hole excitations in these systems, which are essential for fault-tolerant quantum information processing. Implementing such local fluxes in cold atom experiments promises great potential. Here, we propose an experimental scheme to realize a local flux in a cold atom setting which takes advantage of the recent developments in synthetic gauge fields and quantum gas microscopes. To demonstrate the feasibility of our method, we consider quantum-Hall-type lattice systems and study the dynamical creation of topological excitations. We analyze the adiabatic charge pumping obtained by tuning the strength of the local flux.

Periodically driven quantum system realization in photonic lattices
Upreti, Lavi Kumar
Different topological properties have been found in static systems, differing according to the dimension and the symmetry. Here, we explore them for periodically driven systems. It has been seen that a system that is trivial in the static case can be made topological by the application of periodic driving. Not only that: we can also have phases where the topological invariant of the bands vanishes even though the system is topological, a.k.a. anomalous phases or Floquet topological insulators. From there, we try to realize such systems in photonic systems, more precisely waveguide arrays, and we calculate phase diagrams using the bulk-boundary correspondence.

Probing topological excitations via engineering of an optical solenoid
Wang, Botao
The realization of artificial gauge fields in optical lattice systems paves a route to the experimental investigation of various topological quantum effects.
Here we propose a realistic scheme to locally control artificial gauge fields and to directly probe topological transport effects in a Hofstadter optical lattice. In that case the system can be effectively described by a modified Hofstadter Hamiltonian with an additional flux in an individual plaquette. By treating this additional flux as a pump parameter, a different paradigm for quantum charge pumping can be created. Considering that a gauge field varying in time gives rise to a synthetic electric field, which in turn affects the particle distribution, gauge-dependent dynamics arise here. In addition, topological edge currents in a two-dimensional optical lattice can also be generated. Since all these effects are manifested in the spatial density distribution, with the recent advances in microscopic manipulation of optical lattices a direct detection of such topological properties could be achieved in the near future.

Topological order in finite-temperature and driven dissipative systems
Wawer, Lukas

Majorana Box Engineering: Quantum Spin Liquids and Sachdev-Ye-Kitaev Model
Yang, Fan

Optical Ladder Lattices With Tunable Flux
Žlabys, Giedrius
Ultracold atoms in optical lattices provide clean and tunable systems in which to realize many-body quantum physics. They can be used to simulate a variety of effects, ranging from superconductivity and superfluidity to novel phases of matter. Particles trapped in an optical lattice are neutral, so the Lorentz force does not affect them. A workaround resolving this issue is the introduction of an artificial gauge field that generates magnetic flux. It can be created by using laser-assisted tunneling and periodic driving schemes. This also allows one to realize a stronger magnetic flux per lattice plaquette than is typically available in solid-state experiments. In this work, we propose a driving scheme for a quasi-one-dimensional ladder lattice that induces a tunable artificial magnetic flux through the lattice plaquettes. By manipulating the shaking phase for each individual site, this flux can be made inhomogeneous in space. This allows us to explore the dynamics and control capabilities of an atomic wave-packet propagating in such a lattice.
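The flux-ladder physics invoked in the last abstract can be illustrated with a minimal band-structure sketch (an editor's illustration, not the authors' driving scheme; J, K, and the flux values are arbitrary choices). Increasing the flux per plaquette splits the single minimum of the lower Bloch band into two, the standard Meissner-to-vortex transition of two-leg ladders.

import numpy as np

# Two-leg flux ladder in momentum space. Flux phi per plaquette enters
# as Peierls phases -/+ phi/2 on the two legs; K couples the legs.
J, K = 1.0, 0.5
ks = np.linspace(-np.pi, np.pi, 1001)

def lower_band(phi):
    eps1 = -2.0 * J * np.cos(ks - phi / 2.0)  # dispersion of leg 1
    eps2 = -2.0 * J * np.cos(ks + phi / 2.0)  # dispersion of leg 2
    avg, diff = (eps1 + eps2) / 2.0, (eps1 - eps2) / 2.0
    return avg - np.sqrt(diff**2 + K**2)      # lower band of the 2x2 Bloch matrix

def count_minima(band):
    # strict local minima in the interior of the Brillouin zone
    return int(np.sum((band[1:-1] < band[:-2]) & (band[1:-1] < band[2:])))

for phi in (0.2 * np.pi, 0.5 * np.pi, 0.9 * np.pi):
    n = count_minima(lower_band(phi))
    label = "Meissner-like (single minimum)" if n == 1 else "vortex-like (two minima)"
    print(f"flux = {phi / np.pi:.1f} pi -> {label}")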
3A12 Abstracts 2014 Below is a list of abstracts by third-year students in the Integrated Science Program: Harrison Martin: An Investigation of Possible Scaling Relationships Between Dune- And Bar-Scale Features In A Modern Fluvial Setting The interpretation of ancient environments through their preserved rock records can be aided by the study of modern environments as analogues. One common problem in the interpretation of fluvial paleoenvironments is the estimation of river scales and properties using stratigraphic features. While it is already known that there exist scaling relationships between unit bars and rivers, the full dimensions of bars are not generally preserved in the rock record. Dunes, however, can sometimes have their dimensions preserved in plan-view. For this reason, a scaling relationship between either the wavelength or sinuosity of duneforms in planview, and the unit bars upon which they form, would allow for the approximation of ancient unit bar scales and thus river properties using preserved dunes. To this end, modern, high-resolution remote sensing satellite data will be used to view modern river systems with exposed dunes and bars. Measurements will be taken and analysed using statistical techniques in order to investigate the possibility of a reliable scaling relationship between any of various dune or bar measurements. If successful, applications of this research could aid in the field of hydrocarbon resource exploration. Most of the world’s oil & gas reserves are located in sedimentary rocks, with many of those (including most of the Alberta oilsands) located in the modern products of ancient fluvial-deltaic systems. The discovery of new scaling relationships between dunes and bars could help in developing new interpretation methods for these paleoenvironments for both academic and industry-related purposes. Christina Spinelli – Examining the influence of movement on neural entrainment to meter in young infants Listening to music is a multi-sensory experience during which not only our auditory system, but also our motor system, plays a major role. While we hear by listening to the pitch of the notes, our motor system entrains to the underlying rhythmic beat, encouraging us to move with it in synchrony. Furthermore, the metrical structure of a song influences how we move to the beat. Larger movements are often made on the first beat of a musical bar, every second beat for duple meter (a march) and every third for triple meter (a waltz). While what we hear influences how we move, the reverse is also true. Behavioural research has shown that bouncing an infant to every second or third beat biases their interpretation of an ambiguous rhythmic stimulus (a pattern that can be interpreted as either duple or triple meter). Similar effects of movement on metrical perception can be measured behaviourally in adults. This effect can also be measured at the neural level by analyzing the steady-state evoked potentials in an adult’s electroencephalography (EEG) in response to rhythms. The objective of the current project is to take advantage of this new approach to determine how the brain of infants encodes meter. Seven-month-old infants will be bounced either on every second or on every third beat as they listen to an ambiguous rhythmic pattern. Then, while sitting quietly on their mother’s lap, EEG will be recorded while they listen to the ambiguous rhythm repeated for 18 minutes. 
We hypothesize that the way the infant is bounced will influence how the ambiguous pattern is interpreted, and that these different metrical interpretations will be related to corresponding changes in the neural entrainment to the rhythm as captured with the EEG.

Douglas Chan

No abstract provided.

Trystan Nault – GPCR Interacting Proteins (GIPs) and Their Roles in Synaptic Plasticity

G protein coupled receptors (GPCRs) are generally activated by a ligand binding to a binding pocket. Following this initiation on the extracellular motif(s), conformational changes occur which allow signalling to intracellular G proteins and the activation of downstream pathways, leading to any number of changes within a cell or the broader tissue. GPCR interacting proteins (GIPs) have functional domains which allow these downstream pathways within a cell to occur based on the initial activation and conformational change of the GPCR. GIPs also function by trafficking GPCRs to subcellular destinations such as the cell membrane, where the GPCRs are able to function as metabotropic receptors for neurotransmitters. GIPs play a role in synaptic plasticity, in that they can be signalled to potentiate or depress a synapse by adding or removing GPCRs from a post-synaptic membrane. On a larger scale, GPCRs can mediate cell signalling by acting as scaffolds for the recruitment of GIPs, which modulate GPCR function and signal transduction. GIPs also function by regulating the specificity of GPCR binding pockets, receptor endocytosis, expression in the cell membrane (post-synapse), and receptor recycling. Collectively, these functions allow GPCRs to mediate cell signalling through their recruitment of GIPs. A literature review including historical and current research regarding the specific roles of GIPs in classical long-term potentiation (LTP) mechanisms within CA3-CA1 hippocampal synapses was conducted. In addition, the roles that different GIPs play in the more general function of the neuron are discussed in order to add breadth to the topic. Along with the identification of potential future research questions to further the current understanding of classical LTP, possible methods through which these questions could be answered are discussed. Hypotheses and different possible outcomes are outlined, along with the implications of each scenario for the present knowledge of the subject and for possible applications in drug discovery and health care.

Josanne White – The Effect of Language and Structure on Mathematical Word Problem Solving

In the 1980s and 90s, several researchers looked into the role of wording in elementary school children's ability to solve single-step addition and subtraction problems. It was found that certain ways of structuring a word problem would cause the children to form a specific mental representation of the problem. Some of these mental representations are more useful than others in selecting the correct problem-solving strategy, so the problems which elicit the best mental representation are much easier to interpret and solve. Research has since tried to determine what the optimal structure or wording for this type of problem would be. Vicente et al. more recently found that conceptual rewording, or the addition of statements that clarify mathematical relationships, was the most effective approach.
Caroline van Every – The impacts of Bythotrephes longimanus on the food web structure of Canadian inland lakes

Bythotrephes longimanus, commonly known as the spiny waterflea, is a predatory zooplankton species that is native to Northern Europe and Asia. It was first introduced to the Great Lakes in the 1980s, and has since spread to many inland lakes in the surrounding region. While the impacts of Bythotrephes on zooplankton abundance and community structure have been widely investigated, less is known about the effects of Bythotrephes on species occupying higher trophic levels. Additionally, research has shown that Bythotrephes is a potential competitor with small and juvenile fish for zooplankton prey. This study investigated the impacts of Bythotrephes on the food web structure of Canadian inland lakes. Carbon and nitrogen stable isotope ratios were analyzed to compare the trophic positions of zooplankton and fish populations in lakes that either have or have not been invaded (see the short calculation after the next abstract for how such trophic positions are commonly estimated). Past research has indicated that lakes containing Bythotrephes exhibit reduced herbivorous zooplankton biomass, as well as increased proportions of omnivorous and predatory zooplankton. Therefore, it is predicted that invaded lakes will exhibit zooplankton and fish communities with elevated trophic positions. The investigation of the effects of Bythotrephes on food web structure will allow for a better understanding of its invasive impacts, which is vital in regard to food webs, especially if Bythotrephes is causing significant modifications or outcompeting small native fish species.

Daniella Pryke – Comparing the efficacy of SSRIs and cognitive behavioural therapy alone and in combination on anxiety disorders

Approximately one in six people living in North America will experience an anxiety disorder during their lifetime. In these individuals, anxiety is persistent and severe, and causes distress in their lives. Two of the most common methods of treating anxiety disorders are selective serotonin reuptake inhibitors (SSRIs) and cognitive behavioural therapy (CBT). It is therefore important to assess how effective these treatments are, and whether they are more effective independently or when used in conjunction. In order to answer this question, I am conducting a systematic literature review on PubMed using the following keywords: "cognitive therapy" AND "anxiety disorders/therapy" AND "serotonin uptake inhibitors/therapeutic use". From the papers that this search generated, only those in English which directly compared SSRIs and CBT were used, creating a set of 22 papers. These papers were then grouped by anxiety disorder, age, and specific SSRI to determine which area would be best to focus on. I chose to look at nine papers which compared the efficacy of CBT with and without SSRIs in children or youth (ages 7-17). The first of these papers was published in 1997, and the most recent in 2013. I expect that CBT and SSRIs will be more effective when used in conjunction. I also expect that when the only comparison is between CBT alone and SSRIs alone, SSRIs will be more effective. This research will provide insight into whether SSRIs or CBT is more effective, and whether they are more effective in conjunction. I am also hoping to clarify whether one treatment option is better than the other when treating children with anxiety disorders.
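As a brief aside to the stable-isotope design above: trophic position is commonly estimated from nitrogen isotope ratios relative to a primary-consumer baseline, following Post (2002). A minimal sketch in Python, with made-up δ15N values for illustration (this is not the abstract's own analysis):

# Illustrative only: estimate trophic position from delta-15N, assuming the
# standard ~3.4 per-mil enrichment per trophic level and a primary-consumer
# baseline at trophic position 2. The input values below are invented.

def trophic_position(d15n_consumer, d15n_baseline, enrichment=3.4, base_tp=2.0):
    return base_tp + (d15n_consumer - d15n_baseline) / enrichment

print(trophic_position(d15n_consumer=12.1, d15n_baseline=5.3))  # prints 4.0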
Melissa Ling – Transcriptional and Metabolic Changes in the Inflammatory Response

The medical advances of the past century have given first-world populations greater access to health care and medicine, leading to an ever-increasing elderly population. The elderly are more vulnerable to infectious diseases, however, and this barrier must be tackled if the boundaries of age are truly to be pushed back. Unlike a young person's, an aged individual's immune system is less successful in mounting an effective immune response following pathogen infection. Since macrophages are important first-line defenders of the innate immune system against pathogens, their performance in response to different stimuli acts as an indication of the immune response being generated. Macrophages are polarized into M2 ("repair") macrophages or M1 (microbicidal) macrophages. M1-activated macrophages mount a microbicidal defense, and benefit from using glycolytic pathways to provide the building blocks necessary for their mechanisms of pathogen destruction. Previous studies have demonstrated that metabolic changes that occur with age result in an inability to switch to the glycolytic pathway, and are subsequently responsible for impaired macrophage function in older individuals. As such, altering metabolic cycles to restore glycolytic activity is an attractive target for slowing down the effects of aging. In order to know which part of the cycles to target, however, it is first imperative to understand how and when young murine macrophages react upon activation through LPS stimulation. A targeted metabolic analysis using gas chromatography–mass spectrometry was completed, and RNA sequencing results were analyzed to investigate these metabolic changes. Our findings indicate that the greatest metabolic changes occur 16 hours following LPS stimulation, whereupon young macrophages demonstrate a shift of their metabolism to aerobic glycolysis. At that time, genes involved in glucose metabolism were upregulated. Due to the importance of aerobic glycolysis for activated macrophages, future research is needed to investigate whether altering the metabolite concentration in the macrophage growth medium restores the ability of aged macrophages to perform the switch to aerobic glycolysis upon infection.

Phil Lauman – Elucidating the site of TRAF-6 sequestration on SR-AI in macrophages

On macrophages and several other types of immune and immunity-related cells, Toll-like receptors (TLRs) are responsible for pathogen recognition and the subsequent activation of cellular pathways which mediate various aspects of the immune response. Previous studies have shown that cell-surface activation of TLR-2 in macrophages triggers the MyD88 signalling pathway, while endosomal activation of TLR-2 in the same cells triggers the TRIF-TRAM pathway. Since the activation of these pathways is inappropriate in the presence of certain stimuli, cellular mechanisms must exist to downregulate the signalling pathways under those conditions. Indeed, it has recently been determined that SR-AI, a scavenger receptor involved in macrophage-driven phagocytosis, may contain a motif which binds a downstream adapter of the MyD88 pathway known as TRAF-6 and sequesters it, thus preventing a response. SR-AI normally contains two putative binding motifs, known as the TRAF-2 and TRAF-6 binding motifs (T2BM and T6BM), located on the cytoplasmic and extracellular domains, respectively.
Although one might naturally expect the latter motif to be involved in TRAF-6 sequestration, recent studies indicate that the TRAF-6–SR-AI interaction occurs in the cytoplasm, and the T2BM may therefore be a possible candidate. In this study, we attempt to identify the SR-AI motif involved in the sequestration of TRAF-6, and thus in the downregulation of the MyD88 signalling cascade. To achieve this, we use a SEAP assay to measure the level of TRIF-TRAM activation in HEK-293T cells transfected with either wild-type (WT) SR-AI or SR-AI mutants with T2BM, T6BM, or T2/6BM deletions. Since TRIF-TRAM activation and the concentration of free TRAF-6 are positively correlated, analysis of the SEAP assay allows us to determine which mutants are associated with lower levels of TRIF-TRAM activation, and thus to identify the binding motif(s) involved in the sequestration of TRAF-6. Negative controls are used to determine baseline levels of TRIF-TRAM activation in the absence of endosomal internalization by SR-AI. Pam3Csk4, a synthetic analogue of bacterial lipopeptides, is used as the primary ligand to activate TLR-2. We expect to find significantly higher levels of TRIF-TRAM activation in the T6BM and T2/6BM transfectants when challenged with both Pam3Csk4 and SP P1121, demonstrating that the T6BM is involved in TRAF-6 sequestration. These results will clarify the localization of the TRAF-6–SR-AI interaction, and may eventually be used to produce drugs which promote activation of the MyD88 and TRIF-TRAM pathways by blocking TRAF-6 sequestration.

Jared Valdron – Home Sweet (Materialistic) Home: The Contextual Malleability of the Implicit Association between Wealth and Happiness

Promotions and overtime are a part of professional life and often present an important trade-off: more work in exchange for more pay. Classic economic theory assumes that people make these kinds of decisions completely rationally and independently of irrelevant factors, but a growing body of research suggests that this is not the case. The present study examined whether one's likelihood of taking more work for more pay changes with the context in which the decision is made, through malleability in one's implicit association between wealth and happiness. Specifically, it was investigated whether "Work", "Lab", and "Home" settings differentially affect decisions to take more work for more pay. In an experiment administered online, participants were first primed with either a "Work", "Lab", or "Home" setting through a writing task. Second, participants took an Implicit Association Test (IAT) assessing their association between "Wealth" and "Happiness". Third, participants indicated their willingness to take an increase in work for an increase in pay. Finally, participants were asked how much they associated wealth and happiness on an explicit level, and subsequently completed the short Money Ethic Scale (MES). Contrary to initial predictions, participants implicitly associated "Wealth" and "Happiness" more strongly when primed with "Home" than with "Work" or "Lab". Consistent with initial predictions, participants' IAT scores (but not their explicit attitudes or MES scores) were positively correlated with their willingness to take more work for more pay. These findings can be applied in industry, where employers could encourage employees to make the final decision about taking a promotion or doing overtime while at home.
These results also have theoretical implications for the nature of implicit attitudes, lending support to the view that implicit attitudes are malleable and constructed on the spot.

Jesse Bettencourt – The Arduino Platform and Science Education

Arduino is an open-source electronics prototyping platform which utilizes well-documented hardware and software to provide a rich, open, and accessible interface for user-created electronics. The platform is employed across many areas of interest, ranging from creative installations by artists to mechatronic projects by engineers. Arduino has great potential as a learning tool in the sciences. This presentation will introduce the platform, highlight examples of introductory Arduino projects, and discuss the relevance of Arduino to science curricula. It will feature an overview of the hardware, the software, and the resources available to students interested in pursuing an Arduino project. Further, it will showcase an example of open hardware in undergraduate lab design.

Aaron Goldberg – Numerical Approximations of Partial Differential Equations using Finite-Difference Methods

Disciplines such as physics, chemistry, and economics are governed by descriptions of how certain properties change relative to others. These descriptions are often codified mathematically as partial differential equations (PDEs), which relate multivariable functions to one or more of their partial derivatives. The solution of a PDE quantifies how a system will behave in time, space, and/or a mixture of other variables; however, most PDEs are not solvable analytically. By approximating solutions of PDEs, scientific computing can be harnessed to model the evolution of otherwise unsolvable complex systems. These approximations have inherent imprecisions, and special care must be taken to ensure an appropriate approximation is used. This project characterizes the use of the forward-time central-space (FTCS) finite-difference method to approximate solutions of two common PDEs: the heat equation and the wave equation. MATLAB codes were written to model the evolution of these two PDEs. Partial derivatives were approximated by finite-difference equations, yielding equations for the systems' states one time step in the future of the current states. Matrix manipulation was used to iteratively evaluate these states for arbitrary lengths of time, to analyze the behaviour of various initial conditions. The known, exact solutions of the heat and wave equations were used to characterize the error, stability, and convergence of the approximations. For the heat equation, it was found that the FTCS method was stable when νΔt/Δx² ≤ 0.5, where ν is a type of diffusion constant, Δt is the size of each temporal division, and Δx is the size of each spatial division. The truncation error was found to be of order Δt + Δx². For the wave equation, it was found that the FTCS method was stable when c²Δt²/Δx² ≤ 1, where c is a type of wave speed. The wave equation's truncation error was of order Δt² + Δx². The above results were used to verify the calculated theoretical values for convergence and error. The wave equation is further hypothesized to be subject to diffusive errors, whereby waves with sharp corners become rounded upon analysis with this iterative time scheme, and dispersive errors, whereby well-defined waves spread out over time. Both PDEs will also be tested to see whether they are well-posed, i.e. whether their evolution can be retraced backwards in time.
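To illustrate the FTCS scheme and its stability bound for the heat equation, here is a minimal sketch in Python (the project itself used MATLAB; the grid sizes and initial condition are illustrative choices, not the project's):

import numpy as np

# FTCS for the 1D heat equation u_t = nu * u_xx on [0, 1] with u = 0 at the ends.
nu, nx, nt = 1.0, 51, 500
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / nu            # satisfies the stability bound nu*dt/dx^2 <= 0.5

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)            # initial condition with a known exact solution

for _ in range(nt):
    # forward difference in time, central difference in space
    u[1:-1] += nu * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

exact = np.exp(-nu * np.pi**2 * nt * dt) * np.sin(np.pi * x)
print("max error:", np.max(np.abs(u - exact)))

Raising the coefficient in dt above 0.5 makes the same loop blow up, which is exactly the instability threshold the abstract characterizes.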
These results are important in understanding the extent to which finite-difference approximations can be used to model everyday phenomena.

Rebecca Dipucchio – Transforming the Development of Inquiry Skills: To What Extent Does Participation in an Inquiry Course Enhance the Development of Inquiry Skills as Compared to Other Inquiry-Based Opportunities?

Inquiry-based learning (IBL) within chemistry has been well characterized in previous literature, but that literature focuses entirely on inquiry-based labs or inquiry in a general first-year course. As well, there is no documented IBL material specific to Chemical Biology courses, and no literature exists documenting student perceptions of any upper-year inquiry-based course. McMaster continues its history of innovation in IBL with ChemBio 2Q03, Inquiry for Chemical Biology. This second-year course is required for all Honours Chemical Biology students and exposes students to a set of IBL skills, as defined in the literature. A key goal of ChemBio 2Q03 is for students to develop these skills and to be prepared for future inquiry-based experiences. One such experience is the inquiry project in Chem 3AA3, Instrumental Analysis, which is completed by Chemistry and Chemical Biology students together. This study investigated the perceptions of level three, four, and five Honours Chemistry and Honours Chemical Biology students regarding the development of their IBL skills for application in the Chem 3AA3 inquiry project. Within this, the role of ChemBio 2Q03 in IBL skill development as compared to other IBL skill development opportunities was determined. All study data were obtained through a combination of an online survey filled out by level three, four, and five Chemistry and Chemical Biology students and in-person interviews conducted with instructors and teaching assistants from ChemBio 2Q03 and Chem 3AA3. The information was analysed both qualitatively and quantitatively, by looking for quantitative survey trends and by combining survey data with interviews to identify themes. The survey results will indicate whether students perceive that ChemBio 2Q03 improves their IBL skills, and whether this course stands out among other possible IBL experiences for Honours Chemistry and Honours Chemical Biology students. In addition to discussing any qualitative or quantitative trends, possible contrasts between responses presented by students in the online survey and by instructors in the in-person interviews will be explored. These results will be given context within the broader body of literature surrounding inquiry in chemistry, and they will be compared to any existing published information on student perceptions of IBL.

Kerri Kosziwka – Case Studies as a Pedagogical Tool

Students in the Integrated Science (iSci) Program at McMaster University in Hamilton, Ontario, Canada learn concepts through methods that are not common in Canadian universities. With a limited enrolment of 60 students per year, a variety of techniques involving problem-based learning are used, and they are effective. This project explores the benefits of one of the learning techniques commonly used in this program: case studies. Case studies are often used as a way to promote critical thinking skills. In university, these skills are often not developed until later in a student's education, as in large classes success is usually determined by the ability to memorize facts.
Case studies, however, are an effective teaching method that helps to promote active learning, aids problem solving, and encourages the development of critical reasoning and analysis skills. Two case studies to teach concepts from the Life Sciences component of iSci 1A24 are being made in differing formats: a handout with questions and a PowerPoint presentation. Each of the case studies revolves around the same model organism: the crown-of-thorns starfish (COTS). By using the same organism in each case study, students benefit from familiarity and consistency. The first case study aims to teach evolutionary concepts consistent with those learned in iSci 1A24; it looks into paleobiology and the fossil record's connection to evolution, as well as problems with fossilization that can arise in marine systems. Next, ecology concepts will be taught through a PowerPoint presentation and iClicker questions. As ecology is the science of interactions, students will explore how COTS interact with their environment. This is discussed through a presentation that outlines their predators, prey, feeding habits, abundance, and conservation efforts. The use of these case studies works on two levels: the implications for the students and the implications for the broader use of case studies. In terms of iSci specifically, students become very comfortable with group settings through class discussions and group projects. By adding to the amount of collaborative learning, students will have enhanced satisfaction with the entire learning process. In addition, by discussing the topics with a group or the whole class, students are exposed to a variety of ideas, which will extend their engagement further.

Nicholas Goncharenko – Using the Arduino Platform to Teach Neuroscience Students Information Processing and Scientific Models in the Context of the Visual System

Many undergraduate students receive limited exposure to the analysis of scientific models and complex systems. Specifically, only an estimated 7% of students in North America have taken a course in computer science before attending university. This is problematic, as modern science often requires translating scientific models and complex systems into forms that can be simulated by a computer. In an effort to make computer science more accessible and hands-on for these students, a proposal has been developed in the form of an undergraduate lab that could be carried out at McMaster University, aimed at teaching students information processing and scientific models in the context of the visual system. This lab will provide students with the opportunity to model visual systems using Arduino microcontrollers: small computers designed to do one task at a time, whose hardware and software are open source. Arduino microcontrollers were chosen as they can be used to model neurons as computers. Students will be shown a model of the visual system that uses Arduino microcontrollers to sense light and correctly identify colours. Students will then be challenged to construct their own alternate model of the visual system. At their disposal will be modified Arduino microcontrollers and software programs, which can be used to test their model. At the end of this lab, students should understand the process behind building a scientific model and gain an understanding of important concepts such as emergent properties.
A significant part of the presentation will focus on the benefits of using Arduino to teach scientific concepts, mainly its cost effectiveness, adaptability, and multidisciplinary use in teaching many important concepts in science outside of neuroscience.

Mackenzie Richardson – Improving the McMaster Outdoor Orientation Student Experience (MOOSE)

The McMaster Outdoor Orientation Student Experience (MOOSE) program is a first-year experience (FYE) that aims to help incoming students transition into life at McMaster University. Currently preparing for its third session, the program has hosted a total of 120 students from Arts and Science 1, Integrated Sciences 1, Kinesiology 1, and Social Sciences 1. The program uses camping and canoe tripping to form an outdoor education setting, where students form relationships with peers and faculty, learn about McMaster and its programs, and better understand other aspects of university life. MOOSE is looking to evolve so that it can provide the best possible experience for the greatest number of students. With this in mind, feedback from past participants and an examination of outdoor FYE programs at other universities are necessary; this research project aimed to accomplish both tasks. Past MOOSE participants were invited to take part in an online survey, which focused on how effective and important they perceived MOOSE to be for their transition into university life. A literature review of available research on other outdoor FYE programs was conducted, drawing inspiration for how the program can improve. The results of the literature review and the online survey were analyzed and compiled into a manuscript and a set of training documents for future MOOSE student leaders. The manuscript summarizes the literature review and makes suggestions for changes to the MOOSE program. The research performed is very important for the future of the MOOSE program. Research has shown that effective FYE and transition programs can dramatically increase how welcome students feel at a university, decrease their perceived levels of stress, and improve their ability to form relationships. Overall, this can lead to higher student retention at an institution and improve overall satisfaction with the university experience.

Matt Galli & Mary Kate MacDonald – Comparing Student Stress Levels in Interdisciplinary Programs at McMaster University

Undergraduate students experience high degrees of stress due to the transition to a more independent life and the high volume and consistency of academic demands and evaluations associated with university. This stress is frequently correlated with illnesses, both physical and psychological in nature. Specific to students, high levels of stress often result in a decrease in academic success. In order to address the issue of student stress, it is paramount to identify student populations that experience elevated stress levels, in addition to the potential causes, or stressors. The evolution of novel and unconventional undergraduate science pedagogies and teaching environments, exemplified by the Integrated Science program at McMaster University, which employs a problem-based and small-group learning style, contrasts with the more traditional large-scale, lecture-based teaching style of the Life Sciences program. In light of these novel approaches to learning, there is a need to understand how these new techniques affect the perceived stress of science undergraduates.
This project uses an online survey sent to students in the first and third years of both the Life Sciences and Integrated Science programs in order to quantify potential differences in their perceived stress, as well as the potential causes and coping mechanisms unique to their programs. The survey contains 19 questions, takes approximately 10 minutes to complete, and is divided into three main sections. The first part categorizes the student in terms of year, program, and gender. The second employs the Perceived Stress Scale to obtain a quantitative measurement of perceived stress. The third section aims to elucidate program-specific reasons for any trends that appear, with questions investigating available coping mechanisms, learning strategies, and teaching strategies. Ultimately, this research is intended to identify the sources of stress in undergraduate science programs that derive from the academic environment. A better understanding of the environmental stressors associated with various programs and pedagogies will provide motivation and rationale for reducing student stress, and input for improving existing pedagogies to minimize stress while maximizing the student learning experience.

Jonathan Park – From ancient calendars to the calendar today

The world moves in a continuous cycle of time. From the moment of birth, most people treat the calendar as if it had always existed. People now make plans in units of hours, days, weeks, months, and even years, and it is almost impossible to imagine the world without a calendar system. The calendar is a very complex system that was not created instantly. Many different calendars emerged in the past that either disappeared in the course of history or influenced the calendar that is used today. The purpose of this project is to investigate different types of calendars that existed in the past, and how they are interconnected with or isolated from each other. The main calendar systems investigated were the Egyptian, the Mayan, the Babylonian, the Julian (Roman), the Hebrew (Jewish), and the current calendar. Each system has its own distinctive features, and the systems also share similar characteristics. In addition, historical processes such as calendar reforms will be discussed to demonstrate the transition from ancient to modern calendar systems. The investigation was conducted by means of a literature review, examining library catalogues and journal articles. The catalogues were available from the libraries of McMaster University and the University of Toronto Mississauga, and journal articles were accessed online via library websites. The results suggest that the current calendar system is influenced mostly by the Babylonian, Julian, and Hebrew calendars, which dominated most regions of Europe and the Middle East. The Mayan calendar was one of the most complicated calendars in ancient history, but unfortunately the self-destruction of Mayan culture through civil wars did not allow this knowledge to be passed on. The Egyptian calendar, which dominated the North African region, disappeared soon after the expansion of the Roman Empire. Looking at the calendars of the past alongside the current calendar shows how humanity has continuously struggled to create better calendars in pursuit of more efficient timekeeping and a more systematized society.
Alexandra Kasper – A NetLogo Model for Fractionated Radiation Treatment

Radiation therapy is one of the most common methods for treating cancer. When mammalian cells are irradiated, their chance of survival depends on many factors, including dose and cell type. The effect of radiation on cell viability can be expressed through cell survival curves. The relationship between dosage and cell viability can be determined experimentally by measuring the surviving fraction of cells after exposure to varying amounts of radiation. Recognizing the differences between the responses of tumour cells and normal tissue to irradiation is crucial to treatment planning of radiation therapy for cancer patients. Additionally, understanding cell survival curves can help to explain the actual mechanism of radiation damage at the cellular level: what is happening within the cell that causes radiation to kill some of the population? Presently, there are many mathematical models which agree with cell survival data to varying degrees of accuracy. The linear-quadratic (LQ) model is one of the most widely used models in the teaching of cell survival curves; it describes the surviving fraction after a dose D as S = exp(−αD − βD²). The LQ model utilizes α/β values, which are a measure of the sensitivity of the tissue to radiation and also determine the curvature of the cell survival curve. Rather than using a single high radiation dose, following a fractionated dose schedule can amplify the response differences between tumour and normal cells, ideally maximizing damage to tumour cells while minimizing damage to surrounding healthy tissue. Developing a fractionated radiation treatment schedule requires consideration of the α/β value, the applied dose, and the treatment frequency. In this project I developed a NetLogo model which allows the user to adjust the α/β value, applied dose, and frequency of treatment to create an ideal fractionated radiation treatment schedule. NetLogo is an agent-based programming environment which is designed to be an educational tool. This model uses the LQ model and other concepts from iSci's first-year cancer research project and is intended for possible future use within the cancer project. The model will allow students to compare different clinical treatment schedules, such as hyperfractionation and conventional strategies, as well as observe the consequences of stopping treatment before completion.

Rebekah Ingram – Potential Contamination of a Private Drinking Water Well

During the Pleistocene Epoch, the Laurentide Ice Sheet slowly advanced to cover a large portion of North America, with its maximum size and thickness occurring approximately 20,000 years before present. This glaciation deposited a thick layer of sediment across the area known today as Southern Ontario. In many areas across Southern Ontario, glacial deposits serve as the primary aquifers for municipal and private drinking water needs. This study investigates the cause of a water quality issue encountered at a private drinking water well in rural Ontario. Water from this well is characterized by a foul odour comparable to rotten eggs, foaming or fizzing at the tap, and brownish or blackish deposits. The water does not exceed any Ontario Drinking Water Standards; however, it has been shown to have levels of iron and manganese far higher than their aesthetic objectives. The area in question is underlain by glaciolacustrine deposits of surficial sand, gravel, and silt. The well draws from an unconfined aquifer, which makes it susceptible to contamination.
It was previously determined that the high iron and manganese concentrations, coupled with a lack of nitrate and sulphate in the well water, indicate a reducing environment in which sulphate reducers could be biofouling the groundwater. It has been theorized that these reducing conditions in the complainant's well could be due to the past discharge of wastewater into the aquifer by a food-grade trucking company located directly across the street from the complainant's property. In 2013, three monitoring wells were installed on the trucking company's property to determine the direction of groundwater flow and the water quality. This project involves additional analysis of the hydrochemical, hydrogeologic, and sedimentologic data collected during the investigation of the complainant's water issues, to determine whether the trucking company is responsible for the water quality problems. Water quality data will be compared to the Ontario Drinking Water Quality Standards, to the water quality of nearby wells, and to typical geochemical parameter values in glaciofluvial sediments. A subsurface geology map of the site will also be made using well record data and a logged sediment core. If the source of the water quality issue is determined to be natural rather than anthropogenic, the results of this study may be applicable to the quality of water drawn from glacial aquifers across Southern Ontario.

Nathaniel Smith – Nuclear War and Nuclear Peace: A Holistic Approach to the Manhattan Project

In 1939 Albert Einstein and Leo Szilárd wrote a letter to Roosevelt addressing the concern that Germany was developing weapons of mass destruction. Based on the discovery of nuclear fission by Otto Hahn and Lise Meitner, Szilárd proposed that the nucleus of an atom could be unlocked to unleash inconceivable amounts of energy. The fear of this weapon in Nazi hands motivated Roosevelt to take action, and the Manhattan Project was born. The following seven years saw the collaboration of some of the greatest minds in the history of science, from Niels Bohr to Richard Feynman. The Manhattan Project was a large-scale, classified operation, employing 129,000 people, including construction workers, plant operators, and military personnel. The project consumed US$26 billion (2014 dollars) and used thousands of tons of uranium mined in Canada and the Belgian Congo. On July 16, 1945, the world's first nuclear bomb was detonated at the Trinity Site, New Mexico, and the Atomic Age had officially begun. This plutonium implosion device had the same design as the Fat Man, which devastated Nagasaki on August 9, 1945. The Japanese Instrument of Surrender was signed on September 2, and the Manhattan Project was replaced under the Atomic Energy Act of 1946. The Manhattan Project changed the course of history, and is controversial on many levels. The combined death toll of Hiroshima and Nagasaki was 185,000. American scientists expressed moral conflict and circulated the Franck Report, which attempted to halt the use of the atom bomb. In fact, historical literature suggests that Japan was considering surrender before Nagasaki, yet Truman used the atom bombs partly as a means of intimidating the Soviet Union. To this day, it is still debated whether the use of nuclear weapons on Japan was a defensive effort or a war crime. The Manhattan Project's positive influence on science, however, is impossible to debate.
During the Manhattan Project, Glenn Seaborg revolutionized the periodic table by discovering nine new transuranium elements (94 through 102) and distinguishing the actinide series. Nuclear medicine also relies on the hundreds of radioactive isotopes discovered and produced by Seaborg. Lastly, the nuclear power that helps supply modern society's electricity (the United States generated 769.3 billion kWh of nuclear electricity in 2012) traces back to the pioneering of Enrico Fermi's Chicago Pile-1 reactor. Indeed, the impact of the Manhattan Project is vast, both in devastating human civilization and in supporting it. It is for this reason that young scientists should learn of its impact, to pursue ethically sound research and avoid disasters.

Jacqui Rotondi – Phenotypic plasticity in Eutrema salsuginea's herbivore defence mechanisms

Eutrema salsuginea is a crucifer of the Brassica (mustard) family. The natural accession native to Yukon Territory, Canada shows phenotypic plasticity through tolerance to many abiotic stress factors, including extreme cold, salt, drought, and nitrogen limitation. In the Yukon, E. salsuginea grows on high-sulfur soil, and crucifer plants frequently use sulfur metabolites called glucosinolates for defence against herbivores. We are testing the hypothesis that E. salsuginea shows plasticity with respect to herbivore defence strategies in high- versus low-sulfur environments. According to the resource allocation hypothesis, some plants can distribute nutrients to serve various functions based on the availability of those nutrients; therefore, more available sulfur could result in stronger herbivore defence mechanisms. To explore the possibility of phenotypic plasticity in defence mechanisms, we grew E. salsuginea plants in sulfur-rich and sulfur-poor soil, then inoculated the plants with green peach aphids (Myzus persicae). The total number of aphids per plant was counted daily, and statistical testing was used to compare the number of herbivores on the sulfur-rich plants to that on the sulfur-poor plants. We predicted that improving the capacity of the plants to make glucosinolates by growing them on the sulfur-rich soil would reduce aphid numbers in comparison to the sulfur-poor plants. If the predicted results are supported, this experiment would provide evidence of plasticity in E. salsuginea's herbivore defence mechanisms.

George Wells – The Life of Thales

This paper is a literature review on the formation of the school of Ionia, focusing on the life of Thales. It is not only a literature review but also a critical assessment of the plausibility of different sources. The school of Ionia was founded by Thales of Miletus (624–546 BC), who is credited with being the first Western philosopher to replace superstitious thinking with rational thought in explaining the world. Much has been attributed to Thales: he predicted an eclipse in 585 BC and found several geometrical theorems (e.g. that a circle is bisected by its diameter and that the angles at the base of an isosceles triangle are equal). However, Thales did not leave any written work, or at least no written work that survived to the modern day. Thales's life and teachings are recorded by a number of other famous philosophers, such as Aristotle and Plato. As a result, present literature about Thales is spread over a vast number of sources that make a wide variety of claims about Thales's postulations and life.
Some sources tend to be more definitive about who Thales was and what he did, while other authors are more skeptical. The aim of this review is to determine different aspects of Thales's life, his possible travels, and his teachings by evaluating the available sources of such information. The next part of the paper will look at the culture of Miletus and at the Greek religion which allowed the school of Ionia to form. My hypothesis is that authors should be more skeptical about the life of Thales, and that too many teachings have been attributed to him. I hypothesize this because the philosophers who wrote about Thales did so after his death. This study is relevant because Thales is considered to be the founder of science as we know it, yet he is hardly known to the general public. He had an extremely interesting life if one combines all the attributions to him, but it is important to know which are most plausible. Finally, there does not yet seem to be a complete analysis of the life of Thales, as sources tend to focus on one aspect of his life or teachings; such an analysis could be another outcome of the paper.

Katie Woodstock & Laura Hogg – Interspecific Competition Between Semi-Feral Horse Herds and Giant Pandas in the Wolong National Nature Reserve: Modelling the Impact of Domestic Livestock on Endangered Species

The resources required to support domestic livestock across the globe are tremendous, resulting in habitat disruption and deforestation. On top of this, recent studies indicate that free-roaming livestock may significantly affect the population sizes of at-risk and endangered species through interspecific competition. One instance of this is in the Wolong National Nature Reserve, where the giant panda population is impacted by farmers allowing their horses to roam in the surrounding forests. A previous study of the region was limited in its regions and sample sizes by logistical constraints; this study extrapolates from those data by modelling the point at which livestock begin to negatively affect the giant panda population. The reserve, created to protect the endangered giant panda population, also houses several native communities that rely on subsistence farming. Increases in the horse trade have led to the release of horses within the reserve's forests, where they are eventually recaptured and sold as the need arises. This practice gives farmers a reserve of potential income without depleting resources on their farms. Giant pandas are specialist feeders, with bamboo meeting almost all of their caloric needs. Semi-feral horse herds on the reserve also select bamboo as their food source, and the increase in demand caused by the introduction of horses has resulted in a decline in bamboo availability. The data from the previous study were used to create a NetLogo program. A map of the reserve was overlaid onto a coordinate system in which each patch was ranked according to habitat suitability and food availability. Intrinsic qualities such as the lifespan and reproductive rate of each species and the growth rate of bamboo were embedded in the code, while the initial size of each population was manipulated using sliders. It was found that, due to the higher reproductive rate and longer lifespan of the horses, their per capita growth rate significantly exceeded that of the pandas. Starting with the current population of each species, both populations initially increased.
Once the horse population exceeded a threshold, the giant panda population decreased to zero while the horse population continued to increase until it reached carrying capacity. These results substantiate the measures being taken within the Wolong National Nature Reserve to decrease the semi-feral horse population. With minor alterations, the code used in this program could also be used to model interspecific competition involving endangered species in other regions.

Eric Turner – Vortices in the Diffraction Pattern of a Particle Beam

Diffraction has been a key instrument in understanding the properties of waves, light, and quantum particles ever since Young's fundamental two-slit experiment, and it is one of the most powerful tools for understanding wave-particle duality. A simple result of diffraction experiments with two or more slits is interference based on the superposition of waves. Where the superposition sums to zero, the wave function is zero at that point, and where the wave function is zero its phase is indeterminate. In 1974 Nye and Berry published a paper, "Dislocations in Wave Trains", regarding the topology of waves, including vortices and singularities. They refer to a singularity as a dislocation; a vortex is the dislocation morphology known as a pure screw dislocation (Nye and Berry, 1974; Berry, 1981). Vortices come up in many different physical systems and describe various phenomena in optics, acoustics, hydrodynamics, and quantum mechanics. A paper on vortices in quantum mechanics by O'Dell describes the properties of a beam of atoms diffracted by a standing wave of light. That work solves for the evolution of the wave function and its behaviour as the atoms pass through the standing wave and diffract. Describing this behaviour involves solving Schrödinger's equation, which takes the form of the Raman-Nath equation (Mathieu's equation). This paper explores the topology of the wave functions of diffracted atoms, specifically to identify vortices. The atoms are diffracted by a standing wave of light that imparts a potential on the beam. The beam of atoms obeys the Schrödinger equation, but after passing through the potential the beam conforms to a specific form of the Schrödinger equation known as the Raman-Nath (RN) equation. The RN equation yields the behaviour of the amplitudes and phase of the wave function, describing how it evolves in time. By analyzing the topology given by the RN equation, we hope to find vortices. The aim of this project is to find a method by which to determine where vortices occur using only mathematical analysis, sight, and computing. The final product will be a numerical method to automate the identification of vortices, as well as images that highlight the topology of these vortices and provide a qualitative analysis.

Alex Shephard – Investigating time management and lifestyle in interdisciplinary programs at McMaster University: A Pilot Study

Undergraduate science students lead busy lives, and the effective implementation of time management skills is crucial for academic success (Kember et al., 1996). University programs dictate the amount of time students devote to in-class learning, while students are responsible for finding time to complete the required coursework. Time devoted to coursework may differ significantly between students in different university programs (Ruiz-Gallardo et al., 2010).
Depending on the nature of the coursework, time devoted to other basic life activities such as paid work, extracurriculars, leisure, and sleep could be compromised (Macan et al., 1990). Integrated Science is a four-year undergraduate science program at McMaster University whose coursework largely consists of supervised, inquiry-based learning through group research projects. Life Sciences is an alternative program to Integrated Science at McMaster, based primarily on a lecture format. These two programs are similar in the content learned but differ in teaching strategy and workload style (McMaster University, 2014). It is hypothesized that these differences may lead to differences in time spent on both out-of-class learning and other activities typical of student life. The first goal of our research is to test the hypothesis that students enrolled in Integrated Science I differ from students enrolled in Life Sciences I in terms of time allocated to basic life activities. Students from Level I Integrated Science (n=35) and Level I Life Sciences (n=800) will complete an online survey to estimate the amount of time allocated to life activities such as paid work, extracurriculars, leisure, and sleep in an average university week. Students will then complete a perception-based survey to indicate their satisfaction with the time spent on these activities. The second research goal is aimed specifically at students in Integrated Science I, who face a rigorous and diverse workload. The question is whether these students allocate an appropriate amount of time to the tasks that make up their coursework, based on the weighting of the tasks in the overall grading scheme of the course. Students will complete an additional online survey to estimate time devoted to group-based projects, small assignments, and studying in an average university week, and then an additional perception-based survey to indicate their satisfaction with their time management. These data could be useful for educators and program designers, who strive to design university programs that maximize student learning while maintaining a workload that is manageable enough for students to be successful. Additionally, the results could benefit first-year course design in the Integrated Science program. Any inconsistencies between time allocation and mark distribution could indicate where students are having difficulty managing time, potentially calling for refinement of the course structure.

Hanna Stewart & Pratik Samant

No abstract provided.

Ben Windeler

Objectives: The value of statistical analysis in professional hockey has been widely debated. This report provides an introduction to common statistical measures used to predict the performance of NHL hockey teams, an in-depth explanation of the methods used to choose these measures, and a layman's explanation of their significance as predictors of success. This is followed by original statistical analysis: first, using these statistical measures as predictors of playoff success, which is arguably the most important benchmark for team success; and second, analyzing specific game outcomes, namely overtime (OT) and shootouts (SO). These games have particular importance in the NHL, as teams are awarded a point for an overtime or shootout loss, which biases teams to extend non-division games into overtime to guarantee a point. Methods: The majority of public research on statistics in hockey comes from an online community of bloggers and amateur statisticians.
The first part of this report synthesizes results claimed by these sources, outlines the exact methods used to obtain the results, and provides well-referenced justification for, and explanation of, those results. Original analysis was conducted by examining Pearson correlation coefficients. The R software package was used for all data analysis. Results: All of the external results that were examined were fairly simple to reproduce, but their significance was often exaggerated. In general, while the methods used seem to have been appropriate, their utility for predicting future team success was poor at best. Analysis showed that the most significant indicator of a team's success in a given playoff series was its performance against the opposing team during the regular season. The importance of different statistical measures in determining how often teams would go to OT or SO was minuscule, as this seems to be dominated by chance. Conclusions: It is important for any statistical analysis to thoroughly describe its methods in order to provide an unbiased and reproducible predictive model. This report brings these qualities to the typically poorly sourced body of statistical analysis in the NHL. It also highlights useful predictors of playoff success. Finally, the report demonstrates the inherent randomness of points awarded for overtime losses and makes an argument against the validity of awarding points in this scenario.

David Yun – Simulating the formation of complex systems

The Earth's organisms are too complex to have been formed through strictly random processes (Bonner, 1988; Dawkins, 1986). Current theories for the formation of biological complexity are based on Darwin's theory of evolution by means of natural selection (Vinicius, 2010). Critics of natural selection argue that a sentient and intelligent designer is required to explain the complexity exhibited in biological structures (Dawkins, 1986; Paley, 1802; Discovery Institute, 2014). Simon (1962) describes a thought experiment imagining two brothers making 1000-component watches. One brother takes a stepwise approach, attempting to assemble all 1000 pieces in a single run, but he loses all of his work each time he is interrupted. The other brother uses a modular approach, constructing 100 subunits of 10 components each. He then combines these subunits into 10 larger units of 100 components each. Finally, he combines these larger units to create a finished watch. Using this approach, he only loses the progress on his current subassembly when interrupted. In this research, Simon's "watchmaker" parable was evaluated through computer simulation to compare the efficiency of modular construction against unstructured, stepwise construction. A model of Simon's parable was generated in Maple, using the probability of interruption (p) during construction and the assembly structure as the variable parameters. Combinations of these two variables were tested to investigate their effects on the relative efficiency of hierarchical versus stepwise construction. Input parameters for Simon's parable are also presented in an assignment for an evolutionary biology course at McMaster University; these parameters were tested to evaluate the assignment's use as a teaching aid. Modular construction productivity decreased linearly as p increased, while stepwise productivity decreased exponentially. It was also found that Simon's calculation overstated the productivity of both modular and stepwise construction.
His calculated ratio for productivity (modular construction 4000 times more efficient than stepwise) was close to the simulated value (2850). Changes to the modular assembly structure did not produce measurable effects on productivity. Under the conditions specified in the assignment, it was found that modular construction is more productive than stepwise for p > 0.2. Since p = 1/6 in the assignment, the instructions need to be modified to demonstrate the advantage of modular construction. The findings of this research support Simon's hypothesis that modular construction is advantageous over stepwise construction when there is a sufficient probability of losing progress. This idea has been applied in evolutionary biology to explain the accumulation of favourable mutations (Vinicius, 2010).
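As an illustration of the stepwise-versus-modular comparison, here is a minimal expected-value sketch in Python (the project itself used Maple); the interruption probability and the 10×10×10 module structure are illustrative assumptions, not a reproduction of the project's or the assignment's exact model:

# Expected operations to complete a run of k consecutive uninterrupted
# additions, where each addition is interrupted with probability p and an
# interruption loses the current (sub)assembly. Follows from the recurrence
# E_k = (E_{k-1} + 1) / (1 - p) with E_0 = 0, which sums to the closed form below.

def expected_ops(k, p):
    return ((1.0 - p) ** (-k) - 1.0) / p

p = 0.01  # illustrative interruption probability (not the assignment's p = 1/6)
stepwise = expected_ops(1000, p)       # all 1000 parts in one uninterrupted run
modular = 111 * expected_ops(10, p)    # 100 + 10 + 1 ten-piece assemblies
print(f"stepwise: {stepwise:.3g}, modular: {modular:.3g}, "
      f"advantage: {stepwise / modular:.0f}x")

In this simplified accounting, joining a subunit counts the same as adding a single component; with p = 0.01 the modular watchmaker comes out roughly three orders of magnitude ahead, in the same ballpark as the ratios quoted above.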
Wednesday, March 27, 2019

Nonsense arguments for building a bigger particle collider that I am tired of hearing (The Ultimate Collection)

I know you're all sick of hearing me repeat why a larger particle collider is currently not a good investment. Trust me, I am sick of it too. To save myself some effort, I decided to collect the most frequent arguments from particle physicists with my response. You've heard it all before, so feel free to ignore.

1. The "Just look" argument.

This argument goes: "We don't know that we will find something new, but we have to look!" or "We cannot afford to not try." Sometimes this argument is delivered with poetic attitude, like: "Probing the unknown is the spirit of science" and similar slogans that would do well on motivational posters. Science is exploratory, and to make progress we should study what has not been studied before, true. But any new experiment in the foundations of physics does that. You can probe new regimes not only by reaching higher energies, but also by reaching higher resolution, better precision, bigger systems, lower temperatures, less noise, more data, and so on. No one is saying we should stop explorative research in the foundations of physics. But since resources are limited, we should invest in experiments that bring the biggest benefit for the projected cost. This means the higher the expenses for an experiment, the better the reasons for building it should be. And since a bigger particle collider is presently the most expensive proposal on the table, particle physicists should have the best reasons. "Just look" certainly does not deliver any such reason. We can look elsewhere for lower cost and more promise, for example by studying the dark ages or heavy quantum oscillators. (See also point 18.)

2. The "No Zero Sum" argument.

"It's not a zero sum game," they will say. This point is usually raised by particle physicists to claim that if they do not get money for a larger particle collider, this does not imply a similar amount of money will go to some other area in the foundations of physics. This argument is a badly veiled attempt to get me to stop criticizing them. It does nothing to explain why a particle collider is a good investment.

3. Everyone gets to do their experiment!

This usually comes up right after the No-Zero-Sum argument. When I point out that we have to decide what is the best investment for progress in the foundations of physics, particle physicists claim that everyone's proposal will get funded. This is just untrue. Take the Square Kilometer Array as an example. Its full plan is lacking about $1 billion in funding, and the scientific mission is therefore seriously compromised. The FAIR project in Germany likewise had to slim down its aspirations because one of its planned detectors could not be accommodated in the budget. The James Webb Space Telescope just narrowly escaped a funding limitation that would have threatened its potential. And that leaves aside those communities which do not have sufficient funding to even formulate proposals for large-scale experiments. (See also point 19.) Decisions have to be made. Every "yes" to something implies a "no" to something else. I suspect particle physicists do not want to discuss the benefit of their research compared to that of other parts of the foundations of physics because they know they would not come out ahead. But that is exactly the conversation we need to have.

4. Remember the Superconducting Super Collider!
Yes, the Superconducting Super Collider (SSC). I remember. The SSC was planned in the United States in the 1980s. It would have reached energies somewhat exceeding that of the Large Hadron Collider, and somewhat below that of the now planned Future Circular Collider. Whatever happened to the SSC? What happened is that the estimated cost ballooned from $5.3 billion in 1987 to $10 billion in 1993, and when the US Congress finally refused to foot the bill, particle physicists collectively blamed Philip Anderson. Anderson is a Nobel Prize winning condensed matter physicist who testified before the US Congress in opposition to the project, pointing out that society doesn't stand to benefit much from a big collider. While Anderson's testimony certainly did not help, particle physicists clearly use him as a scapegoat. Anderson-blaming has become a collective myth in their community. But historians largely agree the main reasons for the cancellation were: (a) the crudely wrong cost estimate, (b) the end of the cold war, (c) the lack of international financial contributions, and (d) the failure of particle physicists to explain why their mega-collider was worth building. Voss and Koshland, in a 1993 editorial for Science, summed the latter point up as follows:

"That particle physics asks questions about the fundamental structure of matter does not give it any greater claim on taxpayer dollars than solid-state physics or molecular biology. Proponents of any project must justify the costs in relation to the scientific and social return. The scientific community needs to debate vigorously the best use of resources, and not just within specialized subdisciplines. There is a limited research budget and, although zero-sum arguments are tricky, researchers need to set their own priorities or others will do it for them."

Remember that?

5. It is not a waste of money.

This usually refers to this attempted estimate to demonstrate that the LHC has a positive return on investment. That may be true (I don't trust this estimate), but just because the LHC does not have a negative return on investment does not mean it's a good investment. For this you would have to demonstrate it would be difficult to invest the money in a better way. Are you sure you cannot think of a better way to invest $20 billion to benefit mankind?

6. The "Money is wasted elsewhere too" argument.

The typical example I hear is the US military budget, but people have brought up pretty much anything else they don't approve of, be that energy subsidies, MP salaries, or - as Lisa Randall recently did - the US government shutdown. This argument simply demonstrates moral corruption: The ones making it want permission to waste money because waste of money has happened before. But the existence of stupidity does not justify more stupidity. Besides that, no one in the history of science funding ever got funding for complaining they don't like how their government spends taxes. The most interesting aspect of this argument is that particle physicists make it, even make it in public, though it means they basically admit their collider is a waste of money.

7. But particle physicists will leave if we don't build this collider.

Too bad. Seriously, who cares? This is a profession almost exclusively funded by taxes. We don't pay particle physicists just so they are not unemployed. We pay them because we hope they will generate knowledge that benefits society, if not now, then some time in the future.
Please provide any reason that continuing to pay them is a good use of tax money. And if you can't deliver a reason, I think we can full well let them go, thank you.

8. But we have unsolved problems in the foundations of physics.

This argument usually refers to the hierarchy problem, dark matter, dark energy, the baryon asymmetry, quantum gravity, and/or the nature of neutrino masses. The hierarchy problem is not a problem, it is an aesthetic misgiving. For the other problems, there is no reason to think a larger collider would help solve them. I have explained this extensively elsewhere and don't want to go into the question of what problems make promising research directions here. If you want more details, read e.g. this or this or my book.

9. So-and-so many billions is only such-and-such a tiny amount per person per day.

I have no idea what this is supposed to show. You can do the same exercise with literally any other expense. Did you know that for as little as a tenth of a cent per person per year I could pay my grad student?

10. Tim Berners-Lee invented the WWW while employed at CERN.

By the same logic we should build patent offices to develop new theories of gravitation.

11. It may lead to spin-offs.

The example they often bring up is contributions to WiFi technology that originated in some astrophysicists' attempt to detect primordial black holes. In response, allow me to rephrase the spin-off argument: Physicists sometimes don't waste all money invested into foundational research because they accidentally come across something that's actually useful. That wasn't what you meant? Well, but that's what this argument says. If these spin-offs are what you are really after, then you should invest more into data analysis or technology R&D, or at least try to find out which research environments are likely to produce spin-offs. (It is presently unclear how relevant serendipity is to scientific progress.) Even in the best case this may be an argument for basic research in general, but not for building a particle collider in particular.

12. A big particle collider would benefit many tech industries and scientific networks.

Same with any other big investment into experimental science. It is not a good argument for a particle collider in particular.

13. It will be great for education, too!

If you want to invest into education, why dig a tunnel along with it?

14. Knowledge about particle physics will get lost if we do not continue.

We have scientific publications to avoid that. If particle physicists worry this may not work, they should learn to write comprehensible papers. Besides, it's not like particle physicists would have no place to work if we do not build the next mega-collider. There are more than a hundred particle accelerators in the world; the LHC is merely the largest one. Also note that the LHC is not the only experiment at CERN. So, even if we do not build a larger collider, CERN would not just close down.

15. Highly energetic particle collisions are the cleanest way to measure the physics of short distances.

I tend to agree. This is what originally sparked my interest in high energy particle physics. But there is currently no reason to think that the next breakthroughs wait at shorter distances. Times change. The year is 2019, not 1999.

16. Lord Kelvin also said that physics was over, and he was wrong.

Yeah, except that I am the one saying we could do better things with $20 billion than measuring the next digits of some constants.

17. Particle accelerators are good for other things.
The typical example is that beams of ions can treat certain types of cancer better than the more common radiation therapies. That's great of course, and I am all in favor of further developing this technology to enable the treatment of more patients, but this is an entirely different research avenue than building a larger collider.

18. You do not know what else we should do.

Sure I do. I wrote a whole book on this: In the foundations of physics, we should focus on those areas where we have inconsistencies, either between experiment and theory, or internal inconsistencies in the theories. Examining such inconsistencies is what has historically led to breakthroughs. We currently have such situations in the following areas:

(a) Astrophysical and cosmological observations attributed to dark matter. These are discrepancies between theory and data which should be studied closer, until we have pinned down the theory. Some people have mistakenly claimed I am advocating more direct detection experiments for certain types of dark matter particles. This is not so. I am saying we need better observations of the already known discrepancies. Better sky coverage, better resolution, better stats. If we have a good idea what dark matter is, we can think of building a collider to test it, if that turns out to be useful.

(b) Quantum gravity. The lack of a theory for quantized gravity is an internal theoretical inconsistency. We know it requires a solution. A lot of physicists are not interested in experimentally testing this because they think it is not possible. I have previously explained here and here why that is wrong.

(c) The foundations of quantum mechanics: The measurement postulate is inconsistent with reductionism. There is basically no phenomenological or experimental exploration of this.

Needless to say, I think my argument for how to break the current impasse is a good one, but I do not really expect everyone to just agree with it. I am primarily putting this forward because it's the kind of discussion we should have: We have not made progress in the foundations of physics for 40 years. What can we do about it? At least I have an argument. Particle physicists do not.

19. But you do not have any other worked-out proposals.

The proposal for the FCC was worked out by a study group over 5 years, supported by 11 million Euro. Needless to say, I cannot, as a single person and in a few weeks of time, produce comparable proposals for large scale experiments. Expecting me to do so is unreasonable.

20. But it will do all these things.

Particle physicists like to point towards their 716-page report that summarizes what they could do with the FCC. But, look, no one doubts that you can do something with $20 billion. The question is whether what you can do is worth the investment. The report does not address this point at all.

1. Final sentence of 18(b): "I have previously explained here and here why that is wrong." - I guess links are missing.

2. Hi Sabine, I totally appreciate what you're doing, and it even opened my eyes to the systematic errors that scientists make. Please don't let the cargo cult followers silence you :) But, as a reader of your blog, I kinda miss the variety of the content that you published some time ago. Like reviews of new (or old) papers, introductions to new (and old) theories, etc. I hope at some point you will get back to digging out such papers and theories, and presenting them... Best regards

1. Michael, Yes, I am aware of this :( I hope to get back to "normal" soon.
I have several interesting papers I want to write about, but I am severely behind.

3. "The foundations of quantum mechanics: The measurement postulate is inconsistent with reductionism. There is basically no phenomenological or experimental exploration of this." At the start of the Quantum Information/Computing/Communication industry, it was very much felt that such things were experimental foundations of physics, and I think they were instrumental in making QM seem much more familiar than it felt before, say, 2000, whether we can say we now better understand measurement or not. By now many people working on such things would hate to be thought so impractical, and therefore probably wasting $billions, but the early runners went to Foundations of QM conferences and did care about such things. If quantum computation doesn't pan out quickly, perhaps we'll be treated to stories of how the many billions spent led to better understanding of the foundations of QM. 18(d): The foundations of interacting QFT. We don't understand interacting QFT. [But you may remember that I'm as much a broken record on this as other people are about their enthusiasms.]

1. Peter, Yes, you are right. I should have included QFT in that. I usually do, but somehow I forgot. My bad.

4. "By the same logic we should build patent offices to develop new theories of gravitation." Not the worst logic of the arguments considered.

5. Before you get annoyed about humanity spending money to advance fundamental knowledge, consider this: Google makes $4 billion/month from people clicking on their silly little ads. Give science a break. Give them the money. Let's look inside the proton. Unless of course you prefer to click on ads.

1. @Richard: "The Hossenfelder Scale" for measuring crackpots is way better than Baez's Crackpot Index ...

6. For all who are discouraged about building the FCC (or CLIC) after reading the arguments above, I recommend reading the interview with Nima Arkani-Hamed; it will cheer you up again! Where there is hope, there is life!

7. If anyone can flesh out just a little what Sabine means by "the measurement postulate is inconsistent with reductionism," I'd be grateful. I assume this is a problem I've heard stated in other terms, and I'm just failing to translate it into this phrasing. My failure, not Sabine's.

1. Dave M, The point is that we would like our measurement instruments to be describable, in principle, by quantum mechanics. In that case, the measurement process should not require an additional assumption: all the details of the measurement process should be explained by QM without an additional measurement postulate. If that is not so -- i.e., if the action of measurement instruments cannot be explained by QM alone -- then we are entitled to ask what novel physical process is going on in the measurement process that is not explained by quantum mechanics. Weinberg explained this quite clearly in Sabine's interview in her book. See also his discussion in the second edition of his Lectures on Quantum Mechanics: "If quantum mechanics applies to everything, then it must apply to a physicist's measurement apparatus, and to physicists themselves. On the other hand, if quantum mechanics does not apply to everything, then we need to know where to draw the boundary of its area of validity. Does it apply only to systems that are not too large? Does it apply if a measurement is made by some automatic apparatus, and no human reads the result?" The ultimate issue is whether (human?)
consciousness somehow is needed to bring about a true measurement. Wigner suggested just that in his famous essay in The Scientist Speculates. Of course, if it were ever shown that consciousness is integral to the measurement process, then we would be obligated to turn our attention to understanding consciousness, which would certainly be a change of direction for physics! It seems reasonable that physicists should at least try to give a fully complete physical exposition of QM without invoking consciousness. Weinberg sums up by alluding to perhaps the oddest aspect of this whole matter: "Indeed, many physicists are satisfied with their own interpretation of quantum mechanics. But different physicists are satisfied with different interpretations." So, if you think you know the "obvious" answer to Weinberg's questions, be aware that many physicists agree that there is an "obvious" answer, but they disagree as to what that "obvious" answer is. Dave Miller

2. PhysicistDave, I totally endorse what you have written above. I guess quite a lot of non-HEP scientists feel that there is unfinished business at the level of ordinary QM, and indeed that that may be truly fundamental. As you point out, Schrödinger's equation properly applies to every part of life - not just a few particles that happen to be under study. Superficially those equations would imply a reality consisting of an ever more entangled wave function encompassing different possible situations superimposed. The possible relationship between QM and consciousness clearly interests Roger Penrose, so it isn't as though this idea has been 'settled'; it has just been put to one side because it is embarrassing!

3. Physicist Dave, QM is a mathematical method for describing the statistical outcomes of otherwise unobservable physical processes. The math neither describes nor explains those processes. Why, then, should we expect a complete physical exposition of QM (with or without consciousness)?

4. Bud Rap wrote, "QM is a mathematical method for describing the statistical outcomes of otherwise unobservable physical processes." That makes QM sound like classical statistical mechanics, which I think isn't fair. First of all, QM computes the wave function, which is *not* in itself a probability distribution - not least because it can take on negative or complex values. QM isn't creating a statistical outcome of a deeper theory (although OK it is an approximation to QFT). You only get probabilities when you evaluate ΨΨ*. Surely physics should be more than obtaining some equations that seem to describe reality; shouldn't it also provide an explanation of what it is that the maths relates to?

5. "QM isn't creating a statistical outcome of a deeper theory (although OK it is an approximation to QFT)." Actually, it might as well be; it's just that we don't know that deeper theory yet. And I think even QFT doesn't fix that - you get a distribution over configurations of classical fields instead of over configurations of classical point-like particles, but the 'statistical distribution' effect remains.

6. David Bailey, At the interface between QM and observation, statistics is all you get. That QM arrives there via a different set of formalisms, necessitated by the peculiar circumstances of the quantum scale, doesn't alter the analogous nature of the outcome. It certainly should!
My point was only that you cannot expect to obtain reasonable physical explanations from mathematical formalisms that aren't constructed on reasonable qualitative foundations.

7. Simone said, "Actually, it might as well be; it's just that we don't know that deeper theory yet." Well, unless there are an infinite number of theories, each depending on the one below, the process has to stop somewhere. My gut feeling is that QM is special - it says that fundamentally we have different possibilities (realities if you like) that evolve and interfere with each other. This feels more fundamental than particles. So I would rate QM as fundamental, and since QM cannot coexist with GR, I'd bet that GR has to change.

8. Bee, has Moriond 2019 found any BSM physics signals? I understand possible lepton flavor violations.

1. Moriond is really only the occasion on which rumors become official. If there were any BSM breakthroughs in the data analysis done so far, we'd have heard of it by now.

2. The most interesting physics is the measurement of CP violations in decays of D0 vs bar-D0.

3. Yes, that's in the popular news. Has Moriond released new bounds on SUSY such as gluinos and squarks? Given Moriond hasn't seen SUSY in the full data set, it seems the likelihood of a 5-sigma discovery of SUSY is low.

4. So far evidence for s-tau or s-top etc. is at best around 2-sigma. It has not risen to the eyebrow-raising level of 3-sigma. The most recent thing I have seen is

9. Doctor Hossenfelder, In response to 17, having pointed out that you are only one person: the criticism is not relevant because there already exists a wealth of readily available alternatives. To suggest a few (sorry, just my personal interests): fusion energy; carbon removal from the atmosphere; efficient storage of renewable energy sources during times of over-production; higher temperature superconductivity; neurobiological research; cognitive and neurological health; structures encouraging responsibility and objectivity in leadership.

1. I don't think diverting (even more) funds from foundations of physics research into engineering research (and a bit of biology and medical sciences) is the right way to go (and I don't think that's what Sabine proposes; I trust she'll correct me if I misinterpreted her). Those $20 billion should stay in the same field of research, but funding 5-100 promising experiments instead of one mega-project with few to no chances of getting a breakthrough. Or even a different huge project if you have the justification. Biology, biomedicine and engineering are already attractive research fields for which funding, private and public, is *relatively* easy to come by. Physics (especially foundations) is extremely hard to sell to the public and the chances of private funding are close to nil. Please, do not advocate for moving funds away from physics; we *need* physics research.

2. Javier, Sabine has never seemed to me to suggest diverting funds from physics research. She presents arguments that, in upgrading the LHC, these funds are not being allocated for convincing objectives. Intelligent probing of the unknown, including in the field of theoretical physics, should always be supported. So should building on existing knowledge to directly address massive known problems. Tax supported funding is not unlimited; worthy ideas in all fields die daily for their lack. No single individual can be expected to develop programs which solve all the associated problems. (17.)
In supporting arguments for upgrading the LHC by related applied science, e.g. in superconducting magnets, the question simply arises whether the known value of advancing applied science should be more directly supported until physics offers programs with a higher probability of definitive results than the LHC. jmo. Bert Kortegaard

3. Yes, I'm aware Sabine wasn't suggesting that; you were, though. In my experience, Applied Science is just a fancy way of saying engineering research and, as I said, I don't think we should transfer money from the much-in-need-of-funding foundations of physics into the bad-but-still-not-nearly-as-bad field of engineering research. Superconducting magnets are being actively researched by public and private interests (plenty of direct applications) and although you can always use more funding, they have plenty of opportunities to get it (same with your other proposals). Foundations of physics (QFT, cosmology, quantum gravity, etc.) get nearly 0 funding from the private sector because of their lack of immediate applicability and, because of the obscurity of the topics, it's also a hard sell to the public (at the risk of being wrong, I'm guessing they are the worst funded field within the natural sciences; probably only social scientists envy them). That's why, while I agree that we should fund something else, I believe the funds should stay in the field. And for full disclosure, I say this precisely from the point of view of someone who does engineering research for a living... in the private sector. Find a theoretical physicist who can say the same (and is still doing fundamental research).

4. Javier, thanks for your comments. I thought what I was suggesting was obvious from what I wrote, but I apologize to anyone who misinterpreted it as you have. Applied Science starts where science is understood well enough to build on it to produce useful things. At its most interesting it includes developing new techniques and tools, but those of us who practice it do not ordinarily describe that as research. My blog includes a link to some of my own work in this field. Lest this should become off-topic, my blog also contains my email.

10. "...Google makes $4 billion/month from people clicking on their silly little ads. Give science a break..." God, I hope that asinine comment is an attempt at humour... but I have a feeling it's not...

11. On "what novel physical process is going on in the measurement process": I've always assumed it was some sort of Darwinian-like selection-of-fittest-history (in a sum-over-histories formulation of QM). But this process is apparently an additional "postulate" to QM.

12. I love your blog and totally agree that a "wrapping up" of this discussion was due. For that reason, I would suggest a change in argument 6: instead of "With it, THESE particle physicists...", say "That THE particle physicists MAKING it...". Only the ones making the argument suffer from moral corruption. Many others just think it isn't a waste of money; they just have a different opinion (generalization). It may help avoid unwanted 'rants'.

1. Ward, I think this is clear from the context, but I nevertheless changed that sentence along the lines you suggest.

13. Me as a taxpayer, I think we should not spend billions of € for an even bigger collider - instead we should invest money in exploring and pondering where we failed in our beautiful Taka-Tuka theories during the last half century, and consider new ways of thinking about the fundamental laws of physics!

14.
As to point 7, maybe NASA's Space Launch System could use the extra physicists if no new collider is built. They could move from one project with no results to another that is building a rocket that will never launch, because the important thing is to have jobs in all fifty states, not actually get anything done. As Rep. Aderholt said about SLS: "The SLS and Orion programs are, of course, key to the health of our national aerospace supplier base, and it's really helped to really put a new boost of energy into the suppliers in all the 50 states following the retirement of the space shuttle."

15. Bee, do these arguments in this post apply to the HE-LHC with 16 tesla magnets, an estimated ~$7 billion upgrade to the LHC's 8.33 tesla magnets in its 27 km tunnel? I would argue that for the price tag, exploring between 14 TeV and 27 TeV for new physics is certainly a justified upgrade. I wonder whether it'd be better to simply forget about the HL-LHC and instead invest that money into the HE-LHC. And by the time 16 tesla magnets are ready, perhaps 24 tesla or even 32 tesla magnets will be in development. So no new tunnel will be built, the 27 km is reused, but superconducting magnet technology is improved over decades.

16. "IF" dark matter is made of particles that only interact through gravity, how can you study it if not by missing energy-momentum in high-energy collisions?

1. @Daniel de França MTd2; Dark matter necessarily gravitates with other matter; it can be studied astronomically, through gravitational lensing and perhaps by studying galaxy dynamics in a wide range of galaxy sizes, or a range of galaxy proximities. What's happening with the dark matter in galactic collisions? Let's build a $20B super high resolution space telescope, or 20 $1B telescopes we can gang together in an array. Let's study it.

2. Dr Castaldo, Yes but - there's always a but! The recent paper "Probing dark matter particles at CEPC" by Zuowei Liu and colleagues illustrates the possibility of using high energy colliders to investigate various dark matter models. The point being that thorough investigation of a phenomenon requires multiple lines of attack. This means making the best of the available options - which are often not mutually exclusive. Will collider funding be diverted to astronomy? There's currently no reason to suggest this would be the case.

3. Dr Castaldo, This is like studying electrons with circuits. You won't be able to infer what dark matter is, but just its collective properties. That is, you will just know what a current looks like. You won't get insight into what dark matter is.

17. Hi Sabine, you state: The hierarchy problem is not a problem. Maybe, but if you find a solution you sure will have surprises - surprises that the current foundations may not survive.

18. "The measurement postulate": I feel like the justification to question the "Copenhagen interpretation" (you know, the one they still teach undergrads) has been around and readily accessible for at least 8 years. The problem seems to be that none of the alternative hypotheses (can we call them that?) have been able to gather the doubters together and gain traction. This business of questioning whether "consciousness" is required for things to be "measured" always seemed daft to me. Isn't "superposition" a statement about the correlation or non-correlation of two quantum systems, not a statement about a single quantum system?
I.e., until I correlate my detector with the superimposed system (by shooting lasers between them, I guess, would be typical), the detector isn't 'touching'/hasn't 'touched' the other system and just doesn't contain information about the superimposed system yet? So there is never a funny magic state; there is just a situation where two systems don't currently share any information, so querying either of them about the other is nonsensical till you 'connect' the two systems (fire the lasers, take the measurement, open the box, throw the detector at the test article... etc.). Obviously I'm out of my depth; please correct my childish simplifications, you smart physics folk! Thank you for the help...

19. On 17, "you do not know what else to do": I understand you DO know, but --- Since when is knowing the solution to a problem necessary to know that there IS a problem? If I go to the vet because my dog is limping, I don't go there knowing what should be done about it. Making it known that a problem exists is the first step; getting agreement on that, and detailing the nature of the problem, come next. Developing a plan of attack is well down the list.

1. It is interesting to me (re the video link above: The Quantum Conspiracy, GoogleTechTalks) that some physicists like an "interpretation" that says "you don't really exist". It seems to me to be a part of the curious antimaterialist turn (we are all just "information" or something like that) among physicists, at least as indicated by the current articles published for the general reader.

2. @Philip Thrift, voices that advocate for "antimaterialism" are perhaps more shrill, but for the general reader you could try Philip Ball's "Beyond Weird", which deflates the weirdness of QM in a way that IMO fairly accurately reflects the practical "let's use QM" perspective of working quantum computing/information, condensed matter, and most working physicists. His Royal Institution lecture gives a fairly good sense of the position he suggests in that book. You may already know that in philosophy anti-realism is as or more often anti-realism about theories than it is an anti-materialism or anti-realism about the world and our experience of it. There will be some continuity between our current theories and new theories, so that electrons will exist in *some* form in future theories (with careful discussions of how the electron is both equivalent and not quite equivalent to new concepts), but they or other concepts may be deprecated, so to speak, because other theoretical tools and concepts will be devised that are just more effective. An absolute commitment even to such an apparently robust theoretical concept as the electron may, or may not, turn out to be ill-advised, but an appropriate slight hesitancy to say of every part of the standard model of particle physics that it is "emphatically, finally real" does not demand any hesitancy in our belief in and engagement with the world as a whole.

3. I have read articles about Philip Ball's book (e.g. Peter Woit's), but not the book, I admit. My own view has been some combination of Path Integral (or Sum-Over-Histories) and (some version of) Quantum Darwinism: PI+QD. But that's as "real" as I get. :)

4. FWIW, the (very popular) idea that the Path Integral (a generating function for time-ordered vacuum expectation functionals) somehow makes quantum theory classical (paths!)
is IMO problematic because it uses time ordering to sweep the noncommutative algebraic structure under the table, whereas noncommutative measurements are essential for the empirical success of QM/QFT. If you say "(some version of) QD", I take you to be invoking decoherence in some way, which one has to have formal worries about, but, as you know, it works more-or-less, and certainly for all practical purposes. My own view has become that QM and QFT are (stochastic) signal analysis formalisms, for which we can say, loosely, that incompatible measurements are mathematical consequences of using classical representations of the Heisenberg algebra, which is closely connected with Fourier analysis.

5. On the PI, I just follow Fay Dowker (@DowkerFay, Mar 26): "This was an enjoyable discussion. I argued that there is one world, not many, in quantum theory based on the Path integral or Feynman sum-over-histories." On "Darwinian" selection: Only one history survives. The others die. Poor things.

20. Hi Sabine, nice discussion. I agree with you as to a larger collider. -- I just find it interesting, the references to 'tesla' -- (apparently) without knowing what it was (is). Anyway, keep up the good work -- it is good. All Love,

21. re: "Nonsense arguments for building a bigger particle collider that I am tired of hearing (The Ultimate Collection)" Bee, the question I have about your arguments in this post is this: CERN has earmarked a several-billion-dollar upgrade of the LHC to the HL-LHC, to increase its luminosity. Are the billions of dollars spent to upgrade luminosity by a factor of 2 to 10 a worthwhile use of money? What about $7 billion more to upgrade the LHC to the HE-LHC? The HL-LHC and HE-LHC upgrades cost billions, but reuse the same 27 km tunnel. It seems to me that if we apply your arguments, we shouldn't bother upgrading the luminosity of the LHC; after all, it is still going to run at a CM energy of 14 TeV, and it seems a 5-sigma discovery at this point is moot.

22. This was a great thing to read right after opening my bottle of wine :)

23. RE "What should we do?" Martin Harwit wrote a very interesting book in 1981 called "Cosmic Discovery". In it, he shows the amazing role played by serendipity in fundamental discoveries, and tries to get some understanding of how to go forward based on what has led to the current state of knowledge. I think you would enjoy it. This post reminded me of it.

24. What do you think of the latest version of string theory called F-theory? I think it's a four-letter word they can't say in public.

25. The intense discussion suggests the collider culture has yet to be buried and given up. I have pretty good reasons to believe that we require new ideas about such experimental research, particularly in relation to the ultimate nature of existence and of our realities. It cannot be argued that we have reached the end of all possibilities. However, what I have in mind concerns the ultimate nature of forces and particles, which if known would open up a new world of physics.

26. Sabine, It seems to me that several of your arguments boil down to "Cost matters!", contrary to your opponents who are, in effect, arguing "No, cost does not matter!" I came close to majoring in economics instead of physics, and I have trouble grasping the mind-set of anyone who truly believes that cost does not matter, but this does seem to be their perspective.
Frankly, I think the subtext of your opponents' arguments is, in essence, "We high-energy physicists are just more important than other people, and doing high-energy physics is just more important than what other people do!" No one will say this quite so bluntly, but I am not sure any of us HEP physicists are completely immune to such hubris. After all, we chose to go into HEP because we really did think it was important. Of course, scientists should strive for rationality and objectivity, but, obviously, we too are all-too-human! All the best,

1. Dave, I am not sure if they actually believe that cost does not matter or whether they just argue this way because they know it's their only chance. Either way, though, what surprises me is that they would even make such an argument, if not explicitly, then implicitly by refusing to explain why the expenses are justified. Well, yes, everyone thinks that their occupation is the most important. I don't blame anyone for that. But most people understand at least that others might not share that impression.

2. I find it extraordinary that fundamental physics is now utterly divorced from the rest of science, or anything that matters more widely. HEP doesn't seem at all likely to discover a foundational truth - but it is always possible to throw yet more money at it to achieve higher energy collisions, and maybe some more 'particles'. That process will only stop when more people like Sabine put their feet down!

27. Hi Sabine. Some of the latest tests give credence to your argument (high intensity laser / mirror trap, nanoparticles). Money can be better spent, on smaller scales. All Love,

28. Every ten years the space astronomers get together with NASA and create a new list of prioritized space missions. There is never enough money to fund everything, and as science changes priorities change, and as technology changes capabilities change. It's sort of what Erdős used to do with mathematical problems: he'd assign a cash bounty, higher for the problems he thought would be most fruitful. The problem with particle physics is that the price is getting so high, even in comparison with the costs of space missions, that funding even one item is just too expensive. No one has been thinking about a Plan B, C or D. My guess is that we'll start seeing the real spinoffs from the LHC when physicists start leaving the field.

29. Ms Hossenfelder, I personally think your position against the larger particle collider is very relevant. But I don't think that your arguments can change anything, and here is why: the larger collider has become a collective narrative of the particle physics community. Specialists call these "intersubjective narratives"; they are the root of our human society, and once they have got some traction there is no way to kill them by questioning their soundness. By the way, most of them are not built on RATIONAL arguments. Think for example of the moon race in the 1960s. There was no rational case for such a costly program without any other purpose than self-pride, but it became an intersubjective narrative of the American people and as such impossible to cancel... until the mission succeeded and we could see there wasn't anything useful to get from it. If you do want to prevent that project there are in my view only two ways: 1/ Leave the scientists and go to the politicians who will ultimately give the money. They most probably are not in the narrative of the particle physics community and could listen to the voice of reason.
But don't expect that the money not spent on the super collider will go in any massive way elsewhere in physics; 2/ Build another narrative on another subject and try to give it traction. To do that you have to get massive support within the physics community, not just on criticizing the new collider idea but more importantly on one and only one other project which could get most of the money that would otherwise go to the collider. That does not seem fair to all the other good ideas which could benefit from funding? Yes, but life is not fair.

1. Franck, I think what you mean by "rational" is really "scientific". I agree that there are reasons besides the scientific ones that make people spend money on large science projects. I have nothing to say about those, so I don't. But I wouldn't call them irrational. You seem to be misunderstanding my intention though. I am not writing to prevent something from happening. I hope to make something happen. I hope that physicists who work in the foundations think about what has gone wrong and how to make progress. Blindly throwing money at the problem will not solve it. You seem to expect me personally to come up with a solution and then convince people to support me. This does not make any sense. Of course I have my own convictions about what is the right thing to do, but I don't think I should be the one making decisions. I merely want physicists to use their brains rather than blindly continuing down dead-end streets. It's not about fairness, it's about progress.

2. Franck; Sputnik was launched in October 1957. To Americans, it was widely considered a dire threat. Russia then put the first man in space four years later. Kennedy needed a response to a potential militarization of space; there was a perceived necessity to not let Russia seize "the high ground". Kennedy considered a number of potential operations, but "putting a man on the moon" before Russia did seemed the most likely to succeed, with the most inspirational content to get public backing. There were very rational ideas behind this program, even if the ultimate goal was just a symbolic finish line. The point was to develop the science and technology and capabilities of the space age, to match the same being developed by a hostile power (the Cold War was 14 years old at this time), and this is what was accomplished. There were many entirely rational reasons to "go to the moon", including the rational decision to appeal to emotions in building public support. Because, as we Americans are currently proving, and other countries have proven time and again throughout recorded history, rationality is definitely not the primary decision making tool of our citizens.

3. @Franck: That "life is not fair" is not an excuse for taking action to make life more unfair; the primary value of human intelligence has surely been to make us far less victims of the random cruelties of life and nature, not to exacerbate them. The solution to one swindle is not another swindle; it is getting people to recognize when they are being swindled.

30. With respect to the discussions on the foundations of quantum mechanics and measurement, I write this below. Probability theory for statistically independent events is L^1 in that probabilities add linearly and there are no correlations between probabilities. Quantum mechanics is L^2 in that amplitudes add linearly, but the "distance," or really most importantly the distance squared as probabilities, is the sum of the modulus square of amplitudes.
This makes statistical mechanics, or a theory based on pure classical probability, fundamentally different from quantum mechanics. The theory of convex sets is such that for a set with measure L^p, with elements x, and another with L^q, with elements y, Hölder's inequality ||x||_p × ||y||_q ≥ sum_i |x_i y_i| holds for 1/p + 1/q = 1. This means there is a duality between convex sets with these values of p and q defining these norms. For p = 1 this means q → ∞, and for p = 2 the dual is also q = 2. This is a part of how quantum mechanics and spacetime, with its Gaussian metric distance, are dual to each other. The dual to pure statistical systems with q → ∞ means there are no probabilities at all, and this is a completely deterministic system such as Newtonian mechanics. A measurement occurs where there is a decoherence of the quantum wave, and the trace elements of the density matrix define a classical probability distribution. The theory of decoherence permits us to understand how a wave function is reduced, because the superposition or entanglement phase of that system is transferred to a reservoir of states, say the needle state of a measuring apparatus, and the system is reduced to pure probabilities. We can't really know which of these outcomes happens in some deterministic manner according to quantum mechanics. The p = 1 system produced by the wave function reduction, a p = 2 → 1 process, has as its dual the q → ∞ convex set or hull description. Does this then mean we can use this to understand some underlying classical type of structure to quantum measurement? We might want to be a bit conservative here. The problem is that we have convex sets that we propose are computing quantum numbers, and in the case with a p ↔ q duality we have this idea of quantum numbers, say as the Gödel number for an integer computed by a Diophantine equation or the computed outcome of a deterministic system, as having a single axiomatic process. Hilbert's 10th problem proposed there should be a single algorithmic or axiomatic process for solving Diophantine equations. Matiyasevich found the final conclusion to a series of lemmas and theorems worked out by Davis, Putnam and Robinson, called the MRDP theorem. This is a form of Gödel's theorem, and the conclusion is there is no comprehensive axiomatic system for Diophantine equations. Quantum numbers as Gödel numbers for integer solutions to Diophantine equations are then not entirely computable, and there can't exist a Turing machine (in the classical sense a q → ∞ convex set) that computes quantum outcomes. I then maintain the solution to the quantum measurement problem is that there can't exist such a solution. It is an unsolvable problem. Quantum measurement has some features similar to self-reference, in that a quantum system is encoded by another system ultimately made of quantum states. It also has features similar to Euclid's 5th axiom problem. One can assume the axiom holds and stick with Euclidean flat space, or one can abandon it and work with a plethora of geometries. In QM this would be to stay with Mermin's shut-up-and-calculate dictum, or to adopt any of the quantum interpretations out there, which contradict each other, to augment QM in some extended way. This has features remarkably similar to the dichotomy between consistency and completeness.

1. @LawrenceCrowell, this is fine, but I suggest there is a question as to what Classical Mechanics is.
Specifically, Koopman in 1931 introduced a Hilbert space formalism for CM, which can be thought of as offering a unification of CM with QM, just as the Schrödinger equation and Heisenberg's matrices were unified as Hilbert space formalisms. In these terms, the difference between CM and QM is mostly "just" that CM has a purely commutative algebra of measurements. Mutually noncommutative measurements do make sense for CM, however, as is well-known in signal analysis, where Wigner functions are frequently used: one can introduce the Heisenberg group as differential operators, [j∂/∂q, q] = j, instead of, as in QM, [q, p] = iħ. Call an extension to include all such operators CM+. I lay out an argument that if we have a solution of the measurement problem for CM+ (using a Gibbs state over the CM algebra extended to the CM+ algebra), we also have a solution for QM, in my paper (currently submitted to Physica Scripta). I find that a solution for CM+ is less elusive. In particular, I suggest that the specific difficulty you outline above is eliminated by comparing CM+ with QM instead of comparing CM with QM. We don't obtain a complete unification, but it's closer than we've had.

2. I looked over your paper and downloaded it. I will have to reserve judgment until I read it sometime later, though I hope not too long into the future. It looks a bit like the noncommutative geometry of Connes et al. The connection between quantum and classical mechanics is often stated as 1 = {q, p} → [q, p] = iħ for large action S = nħ with n → ∞. I think the most important aspect of this is that classical mechanics is real valued and quantum mechanics is complex valued. The extension of the reals into complex numbers means probabilities are the modulus square: for |ψ⟩ = sum_n c_n|n⟩ we have ⟨ψ|ψ⟩ = sum_{mn} c*_m c_n ⟨m|n⟩ = sum_n |c_n|^2 = sum_n P_n. Classical mechanics has none of this construction, and instead determines the value of classical variables. The correspondence between an observable Ô|n⟩ = O_n|n⟩ in quantum mechanics and probabilities is then ⟨ψ|Ô|ψ⟩ = sum_{mn} c*_m c_n ⟨m|Ô|n⟩ = sum_n |c_n|^2 O_n = sum_n P_n O_n. This is Born's rule, where curiously a general proof of this is not at hand (a small numerical check of this bookkeeping appears at the end of the thread). Anyway, the observable occurs as eigenvalues in a distribution with probabilities. We can think of both classical and quantum mechanics as a measure theory O_obs = ∫ dμ O, but where for classical mechanics the measure is zero everywhere except the contact manifold, and with quantum mechanics there is this quadratic set of modulus squares of amplitudes = probabilities in a summation that weights eigenvalues. There is Gleason's theorem that tells us the linear span of a Hilbert space defines a trace that uniquely defines probabilities. Hence any measure is μ(X) = Tr(W P_X) for W a positive trace class operator. So this appears halfway to a complete proof of Born's rule; all we need is to slip operators into this. The problem is that operators come in sets of commuting operators. In particular, the density matrix evolves by ρ(t' - t) = U ρ(t) U† for U = exp{-iH(t' - t)/ħ}. For t' - t = δt very small, U ≈ 1 - iH(t' - t)/ħ, and it is not hard to see that the time evolution of the density matrix involves a nonzero commutator of the density matrix with the Hamiltonian. This means the Hamiltonian rotates or evolves the density matrix out of the basis one might consider for Gleason's theorem. I think this is the reason that Gleason's theorem, as profound as it may be, does not reach the generalization of a proof of Born's rule.
However, observables in classical and quantum mechanics have different measure theories or distributions. Classical mechanics is "sharp," which means it is L^∞ --- say like a delta function. Quantum mechanics is L^2, and the metric structure of spacetime is L^2 as well; with conformal spacetimes and R_{ab} = κg_{ab} it is also L^2. Without getting further into this, there is a duality connected with building spacetimes from entanglements. Now with 1/p + 1/q = 1 for convex sets, L^∞ is dual to L^1, which is a measure of pure classical probabilities. So what is this system? It is about complete stochasticity, of which the outcomes of measurements are an example. The question is whether the eigenvalues of the QM L^2 system can be coded as integer solutions to Diophantine equations --- something proven to be possible by Matiyasevich, as any computable function has a corresponding Diophantine equation (even transcendentals like e^{ix}, etc.).

3. Not so much Connes as an algebraic QM approach, with the intention to bring it down to a mortal (my) mathematical level (I'm just reading Valter Moretti, "Spectral Theory and Quantum Mechanics", Springer, 2017, for example, where his Chapter 14, "Introduction to the Algebraic Formulation of Quantum Theories", is nicely done). The starting point for both classical (as usually understood, a commutative *-algebra) and quantum (a noncommutative *-algebra), as I take it, is that a state over a *-algebra is a normalized, positive map to average measurement results. The GNS construction gives us a Hilbert space in both cases. Normal states are given by Trace[Aρ] in both cases, and the Born rule is "just" a measurement |ψ⟩⟨ψ| in a pure state with density matrix ρ = |φ⟩⟨φ|. Note that everything is linear until we insist on discussing pure states. The key question is to ask whether classical physicists can reasonably ascribe a meaning to all operators that act on the classical Hilbert space, to which I argue that they can. Transformations to a different basis, with the Fourier transform as a case in point, more than just making sense, are *used* in classical signal analysis. I'm doing very little that's specially new in this QM context. As I said, Koopman suggested such an approach in 1931; von Neumann wrote a long paper in German that has *not* been translated, so of course it's called the Koopman-von Neumann approach, but the approach mostly languished until about 2000, when a PhD thesis appeared, since when there has been a slow stream of papers, and for the last few years there has been a Wikipedia page that's not bad. Recently a connection has been made with Quantum Non-Demolition measurements, which seems to have led to slightly more interest. I believe that understanding how things look in this kind of approach deserves to be at least as much in physicists' consciousness as deBB approaches. One final comment: *I* take the view that the complex structure *can* be understood rather nicely as associated with the Fourier sine and cosine transforms of probability densities, which, as any engineer can tell you, introduces a naturally useful imaginary, j. I'm not committed to that approach, but so far I haven't seen a more natural approach. I ought to let the paper do its own talking, given that you've been kind enough to say that you have at least downloaded it, but I'm quite keen to see in what ways it might or might not be attractive to other people.

4. The GNS construction is an aspect of noncommutative geometry. The spectra with Tr(Aρ) are also used in Gleason's theorem.
I will try to get to your paper as soon as possible. I have a large backlog of things to read, including finishing Sabine's book. I started reading a library copy last year and have since bought my own copy, and that is on my stack as well.

31. @Dave M and all who responded: Thank you for the question and the replies. It has given me a little more to guide a short Internet search. I found a retired SEP entry; it contains a significant non-technical discussion of the issues. The disagreement between Bohr and Heisenberg over the Copenhagen interpretation is very much like the contrast between Skolem and Zermelo with regard to set theory. It would seem that the measurement problem in physics is very similar to some debates in the foundations of mathematics.

33. In the EU, Canada and 'Developed Asia', the budgets for science and technology seem not to be at risk ... It is the U.S.A. that prioritizes its budgets in military applications, the one that knows its research agendas have to fit into geo-political military conflicts to get the money ... (ROFL) ... Very likely, the EU's headquarters are waiting for China's parliament to approve its budget for HEP projects ... after that, they will decide ... No problem, some CERN physicists will be invited to participate in China's toys ... and CERN will receive its 'upgrading' budget ... There is not an eternal HEP vacuum in your future ... Don't cry in advance for things that are not happening ...

34. @Sabine, I do not quite see what this "measurement problem" is, although apparently some people lose sleep over it. The view of standard QM + decoherence is perfectly reasonable: Schroedinger's equation (SE) describes a closed quantum system. But when the system is measured, it cannot be considered closed anymore, so it is no surprise that it's not described by the SE. The collapse of the wave function is just an effective prescription that describes this coupling to the external environment induced by the measurement. Decoherence theory showed how this process can be explained in detail in terms of standard QM. So really, I do not see where the problem is. From the experimental point of view, the experiments of Serge Haroche, for instance, have clearly shown that when the "environment" is sufficiently simple, the decoherence can be well controlled or even reversed. Again, no mystery there. I would not spend gigadollars, not even megadollars, on this pseudo-problem. For K$, I'm OK.

1. Opamanfred, Decoherence does not solve the measurement problem. Please do some reading. Don't worry, I do not want your "giga-dollars".

2. The following video simulates the collapse of the wave function. This gives a pretty good idea of how probability plays a role in collapse and what, visually, a collapsed wave function appears as. Of course a caveat is in order, for the ontology of a quantum wave is highly uncertain and it does not exactly "appear." However, this tells us about the mathematical representation. This video also makes the point that this sudden transition is not something the Schrödinger equation predicts. As I wrote above, dated 3/31, I think very strongly this problem is not solvable. Of course I might be wrong, but the issue of quantum measurement appears remarkably similar to the concept of self-reference. Instead of a predicate acting on Gödel numbers for predicates, including itself, a measurement is quantum information encoding quantum information.
Decoherence does address aspects of measurement. However, it does not tell us how a particular outcome occurs, but rather how probability amplitudes transform into classical-like probabilities as the quantum phase of superposition or entanglement is transferred to a reservoir of states. Decoherence takes us right to the doorstep of the measurement dragon, but no further.

3. "Decoherence does not solve the measurement problem." Please elaborate. I would also like to hear how exactly you define the problem. I consider what I sketched a perfectly acceptable solution. On what aspect do you disagree?

4. Opamanfred, This is really off-topic. I am one person and not a forum. I do not have time to respond to random questions. Really, this is common knowledge, and in any case, I explained this in my book, and also Lawrence explained it correctly when he writes:

5. Lawrence, Re the measurement problem: ...I think very strongly this problem is not solvable. Well, it is not solvable mathematically speaking because it is not a question of mathematics, but of physics. The question involves the nature of the physical processes underlying the maths of QM. The difficulty, of course, is that those processes are not directly observable, and the standard formalism does not resolve logically to a realistic picture of the quantum subsystems - a wavefunction is not a physical thing. The resulting ontological speculations (MW, PI, superposition) based on the maths are muddied, metaphysical, and lacking in scientific significance, to say the least. The Copenhagen approach, OTOH, is simply to ignore the ontological problem, which consequently induces the measurement problem. Only Bohmian mechanics approaches the ontological problem from a physics (rather than strictly maths) perspective, by assuming that quantum subsystems are ontologically continuous with classical mechanics. That this physically realistic reformulation (of QM) is currently disfavored relative to all the logically strained, metaphysical interpretations (of QM) says nothing good about the state of modern theoretical physics. BM is mathematically equivalent (but not qualitatively identical) to QM. In Bohmian mechanics there is no measurement problem. So, problem solved, no?

6. I have certain proclivities for the Bohm interpretation, I suppose, just as I have the same for other interpretations. In fact I derived a form of path integral with Bohm's quantum mechanics. I found the mention of Bohm was a form of toxin in getting this published. Bohm's QM is also potentially interesting for solving problems in chaos or quantum chaos. Bohm's QM is though not identical to QM in general, but only so for wave functions of a certain form. Bohm's QM has some other deeper problems as well. The Klein-Gordon equation is a scalar wave form of the invariant momentum-energy interval of special relativity. If you follow the Bohmian prescription with a polar wave function, you find the KG equation has the quantum potential. The odd implication is that a massless particle is off the light cone and in fact moving faster than light. This does not give reason to think there is various nonlocal physics with this, for that violates no-signaling and other things. This is why it is often said that Bohm's QM is not relativistic. Bohm's QM also, without a Hilbert space, does not derive things such as the generation or absorption of photons by atoms in a concise way, and things get worse with higher energy creation and annihilation of particles.
There are quantum interpretations that are ψ-epistemic and others that are ψ-ontic. The many worlds interpretation (MWI) and Bohm interpretation (BI) are ψ-ontic. Bohr's Copenhagen interpretation (CI) and now the latest, QBism by Fuchs, are ψ-epistemic. These are some of the popular interpretations, and there are others such as consistent histories, the Montevideo interpretation, and the related one by Penrose, among others. In fact quantum interpretations are multiplying like bunnies, maybe cockroaches to put it in a negative light, and none of them seems to really solve everything. The CI is interesting in that the M-theory of D-branes works well with it. Quantum information theory is often worked in the MWI. QBism is now the beautiful child of those into Bayesianism --- which I can tip my hat to. Pullin and Penrose have interesting ideas on how gravitation plays a role, and quantum gravitation built up from quantum entanglements probably does have a correspondence with quantum wave decoherence and maybe even measurements. However, all of these have big holes you can run an optics bench through, maybe even a collider. I wrote a math-physics result on how quantum mechanics is neither ψ-epistemic nor ψ-ontic with any certainty. It does not work for two-state systems, which is unfortunate. I should revisit this to make it work. The result is that whether quantum interpretations are ψ-epistemic or ψ-ontic is not determined by a measure theory of QM. I like the prospect of this: QM has this sort of "Man proposes and QM disposes" flavor to it. 7. @Lawrence Crowell Last spring I submitted a short essay to the Gravity Research Foundation (GRF) in Wellesley, Massachusetts, that effectively is another interpretation of QM; albeit a very amateur one. The concept is largely heuristic, with a minimum of mathematical modeling. Currently I'm expanding on the original paper, submitted to the GRF, to include ideas for which the essay word limit (1500 words) would not allow. In the abstract of the paper submitted to GRF last year, a tie-in to de Broglie-Bohm Pilot Wave Theory (PWT) is mentioned. This might have been a mistake, seeing that PWT is anathema to much of the physics community, as illustrated by your choice of the word "toxin" to describe the reaction of publishers to that particular QM interpretation. While I didn't mention it directly in the essay submitted to GRF, the model provides a mechanism for reported anomalous acceleration signals observed in certain superconductor experiments that are orders of magnitude larger than allowed by standard physics (Tajmar et al. 2003-2006, and others). This connection provided the rationale for submitting the essay to GRF, as the organization's stated mission involves understanding gravity, and presumably artificially generated gravity-like forces. To wind this up, I hope to complete the expanded version of the originally submitted GRF essay in a few weeks and upload it to 8. The particle in the pilot wave interpretation, due to Bohm and taken from de Broglie, is not highly regarded, in part because of Bohm's intention with hidden variables. The idea is workable in a nonrelativistic framework and, I think, a way of working quantum chaos. There is a fascinating way of doing quantum mechanics that Pascual Jordan worked out with Wigner. It is a way of doing QM with traces and determinants that is useful with the Freudenthal determinant over exceptional algebras.
In fact I think it is useful with permanents as well, which find their way into algebraic geometric complexity and P vs NP. So why is this not widely used? Jordan and Wigner published on this in 1935, and Jordan became fanatically committed to the Nazi cause. He worked on the rocket programs at Peenemünde and was committed to the Nazi program. It is amazing how this sort of crap can infect brains, much like MAGA promoted in the US these days. Anyway, this approach to QM fell into disrepute. History and affiliation have big impacts on the course of development in physics. 9. This comment has been removed by the author. Well yes, but the Bohmian advantage over all those proliferating bunnies is twofold. First, it eliminates the self-induced measurement problem of CI. More importantly, it provides a qualitative account of unobservable quantum processes that is continuous with classical mechanics and therefore provides a sound (and realistic) basis for further qualitative and quantitative elaboration. The continuity with CM is achieved by introducing a scale factor, the guiding equation. This guiding equation, in turn, is suggestive of an underlying physical component that induces quantum behavior in sufficiently low-mass classical particles. This avenue would seem to offer at least the possibility of a qualitative and quantitative approach with the potential to converge on a plausibly realistic account of quantum phenomena. I don't think the same can be said for any of the other cockroaches. 35. @Lawrence Crowell I do personal research in foundations because of the continuum hypothesis. Should you ever wish to be put on a crank list, become interested in just such a problem. One morning, thirty years ago, I simply woke up with the conviction of its truth. I now know why. The result from core mathematics lies in dimension theory. There is no transfinite dimension beyond the first uncountable cardinal. And there is nothing in the usual account of set theory or its model theory that reproduces a collapse of the cardinal hierarchy to just two infinities. Suppose, for the moment, that this has bearing on the mathematics of physics. In chapter 11 of Birkhoff's "Lattice Theory" there is a theorem showing that the truth of the continuum hypothesis affects measures. If I recall correctly, there can be no non-trivial countably additive measure in which every point has measure zero. I need to trace through the mathematics of this more carefully, but I suspect that it is similar to the effect of the axiom of choice in some ways. With regard to dimension, Coxeter gave a group-theoretic account of regular polytopes. Because of the method, some stellated and truncated forms are admissible as being regular. There are only three forms common to all dimensions, and there is only one dimension with an infinite number of regular forms -- that would be the plane. If you look at Freiling's axiom of symmetry on Wikipedia, it will mention the relationship between graph theory and the continuum hypothesis. Now, one of the forms occurring in every dimension by Coxeter's account is the simplex. And complete graphs are the projection of simplexes into the plane. What you say about the multiplicity of interpretations for quantum mechanics is not unlike the diversity of opinion that has resulted in the current state of affairs for the foundations of mathematics. The independence of the continuum hypothesis, as one hears about it, only applies with respect to a paradigm.
My own experience is that one can use finite geometries to relate truth tables to mathematical elements associated with Lorentz metrics. Physicists use symmetry in relation to higher-order mathematics. But there is a famous criticism of mathematical logicians in Black's paper on the identity of indiscernibles. And it is not unreasonable to approach foundations with symmetry as a guiding principle. Your comparison with results from the foundations of mathematics appears quite reasonable to me (but, then, John Baez undoubtedly has a list waiting for me :-) ). 1. This conjecture on my part is not something I have actually bent metal on or done any calculations for. This is pretty removed from my day-job work, which is more applied or engineering. The MRDP theorem is similar to the Bernays-Cohen result that the continuum hypothesis is a case of Gödel's theorem. Polytopes also enter into the algebraic-geometry complexity of P vs NP. The role of symmetry is of course important for gauge fields. Also, for quantum entanglements, quotient spaces or groups occur when some set of quantum numbers is replaced by other degrees of freedom. A bipartite entanglement replaces the spin of two fermions with the Bell state. This is a quotient system. The exact sequence for the moduli space of gauge connections is similar. In fact I think it is dual to entanglement geometry. 36. Okay, okay. But what if we find more Odderons? ;) 37. I often wonder what a theory will look like that explains QM and GRT as special cases. As far as I can see, most scientists are trying to bridge the gap from QM. This seems logical, since most physicists probably regard QM as the most fundamental theory. However, the classic cases of really new theories have developed differently. There was no direct path from classical physics to quantum mechanics, nor to GRT. So QM and GRT were really new. Therefore, the question is whether the current approaches to unifying the two basic theories can really be promising enough. I myself am a mathematician with a solid background in Artificial Intelligence. When developing an algorithm for decision-making, I came across interesting relationships rather playfully. The chaotic decision process (I call it the "GenI process") is a chaotic random process based on very simple rules. Except for basic arithmetic in complex number space, this does not require any difficult mathematics. (Simple maths do not necessarily produce simple results: think of Mandelbrot's fractal sets.) Significantly more difficult is the statistical analysis of chaotic state changes. On the one hand, I can show that the process, starting from an initial state, certainly selects one of several decisions, and thereby exactly fulfills the statistics known from quantum mechanical measurements. On the other hand, I can derive a relativistic metric such that averaged state changes follow time-like geodesic paths in a four-dimensional Riemann space. Should not such or similar approaches, which are not derived directly from QM or GRT, ensure a fresh start? In principle, this is only about a change of perspective. 38. @WSG There is an up-and-coming version of QM that uses complex numbers and four-dimensional Riemann space. It's used to handle open systems. It is called PT-symmetric quantum mechanics. PT-symmetric quantum mechanics is an extension of conventional quantum mechanics into the complex domain. (PT symmetry is not in conflict with conventional quantum theory but is merely a complex generalization of it.)
PT-symmetric quantum mechanics was originally considered to be an interesting mathematical discovery with little or no hope of practical application, but beginning in 2007 it became a hot area of experimental physics. 39. This is not the point I wanted to make. This is obviously just another extension of a proven theory. Such things did not lead to anything really new. I am well aware of other approaches, such as loop quantum gravity or string theory, which, despite all efforts, have yet to resolve the open questions. The question of what a theory must look like so that QM and GRT can be deduced from it has already been asked. Maybe it will look somewhat crazy from today's perspective, as QM did for classical physicists. My point is to take a fundamentally different perspective on the role of gravity in QM. A model like the one mentioned above indeed requires a rethink. On that view, our universe, as we perceive it, evolves according to a collapse of its wave function. This clearly contradicts the not explicitly justified assumption of leading physicists that it evolves along a Schrödinger equation. But why is it like that? Is there a clear justification either way? What, in essence, is against assuming a collapse? I have not even seen a discussion among physicists about this aspect. Even with well-known authors like Penrose, Greene, and Hawking, who otherwise like to talk about the wildest speculations, nowhere is there any hint that the collapse of its wave function is the source of reality in our universe. Can anyone help me here? Are there any works that consider this perspective? At least in a nutshell, I can prove that such an approach can be quite effective. I can perform concrete calculations of a space-time metric for a spin-1/2 particle and actually prove that the dynamics during the measurement satisfy Einstein's field equations. That should justify at least a discussion about this view. 40. @Lawrence Crowell Thanks for the reply. I found papers specific to Bell states and two-qubit geometries. In many ways this relates to what I have been doing. There is, for example, a diagram which occurs in several contexts that I use to decide the well-ordering of my 16-set of logical constants. It is a tetrahedron inscribed in a cube. Similarly, some of the papers start looking at block designs. This is another aspect of what I have been doing. The fact that the truth tables relate to one another as points in a finite affine geometry is foundationally significant, although philosophers and logicians will simply deny or not understand the matter. Incompleteness is generalized with respect to theories whose axiom sets are recursively enumerable. Finite group theory is not such a theory. Thanks again. 41. Thank you for this exceptionally thoughtful post. I do think that a good question to ask people on both sides of the argument is: What is your cutoff? That is: For supporters of the collider, I'd like to ask "How expensive would this thing have to be before you stopped supporting it? 30 billion? 50 billion? 100 billion?" And for opponents: "How inexpensive would this thing have to be before you stopped opposing it? 15 billion? 10 billion? 1 billion?" As a general rule, I think people who are able to answer these questions --- and to defend their answers --- are likely to have thought a lot harder about the tradeoffs than those who reflexively just support or oppose. 1. Steven, Yes, a good question. I'll make a go at it and say about $2 billion.
A larger collider currently has less scientific promise than LIGO had, which came in at a cost somewhat below $1 billion. It also has less scientific promise than the SKA, whose full proposal would come in at $2 billion. So that would seem a reasonable amount.
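Since much of this thread turns on what decoherence does and does not accomplish, here is a minimal numerical sketch of the point made above (my own illustration; the qubit-plus-random-bath model, the bath size, and all constants are arbitrary assumptions): coupling a qubit to a "reservoir" of environmental states drives the off-diagonal terms of its reduced density matrix toward zero, turning amplitudes into classical-like probabilities, while saying nothing about which outcome occurs.

```python
import numpy as np
rng = np.random.default_rng(1)

D = 256                                    # dimension of the "reservoir" of states
B = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
B = (B + B.conj().T) / 2                   # random Hermitian bath operator

# Qubit starts in (|0> + |1>)/sqrt(2); a pure-dephasing coupling H = sigma_z (x) B
# drags the bath into increasingly distinct states for the |0> and |1> branches.
b = rng.normal(size=D) + 1j * rng.normal(size=D)
b /= np.linalg.norm(b)                     # initial bath state

evals, V = np.linalg.eigh(B)
c = V.conj().T @ b
for t in [0.0, 0.1, 0.5, 2.0]:
    # off-diagonal of the qubit's reduced density matrix: 0.5 * <b| e^{-2iBt} |b>
    overlap = np.sum(np.abs(c) ** 2 * np.exp(-2j * evals * t))
    print(f"t = {t:3.1f}:  |rho_01| = {0.5 * abs(overlap):.4f}")
# |rho_01| starts at 0.5 and decays as phase information leaks into the bath,
# while the diagonal occupations (the outcome probabilities) stay exactly 0.5:
# decoherence delivers classical-like statistics, not a particular outcome.
```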
Wave Propagation Peter Markoš and Costas M. Soukoulis Copyright Date: 2008 Edition: STU - Student edition Pages: 376 Book Description: This textbook offers the first unified treatment of wave propagation in electronic and electromagnetic systems and introduces readers to the essentials of the transfer matrix method, a powerful analytical tool that can be used to model and study an array of problems pertaining to wave propagation in electrons and photons. It is aimed at graduate and advanced undergraduate students in physics, materials science, electrical and computer engineering, and mathematics, and is ideal for researchers in photonic crystals, negative index materials, left-handed materials, plasmonics, nonlinear effects, and optics. Peter Markoš and Costas Soukoulis begin by establishing the analogy between wave propagation in electronic systems and electromagnetic media and then show how the transfer matrix can be easily applied to any type of wave propagation, such as electromagnetic, acoustic, and elastic waves. The transfer matrix approach of the tight-binding model allows readers to understand its implementation quickly and all the concepts of solid-state physics are clearly introduced. Markoš and Soukoulis then build the discussion of such topics as random systems and localized and delocalized modes around the transfer matrix, bringing remarkable clarity to the subject. Total internal reflection, Brewster angles, evanescent waves, surface waves, and resonant tunneling in left-handed materials are introduced and treated in detail, as are important new developments like photonic crystals, negative index materials, and surface plasmons. Problem sets aid students working through the subject for the first time. eISBN: 978-1-4008-3567-6 Subjects: Physics, Technology Table of Contents 1. Front Matter (pp. I-IV) 2. Table of Contents (pp. V-VIII) 3. Preface (pp. IX-XIV) P. Markoš and C. M. Soukoulis 4. 1 Transfer Matrix (pp. 1-27) In this chapter we introduce and discuss a mathematical method for the analysis of wave propagation in one-dimensional systems. The method uses the transfer matrix and is commonly known as the transfer matrix method [7, 29]. The transfer matrix method can be used for the analysis of the wave propagation of quantum particles, such as electrons [29, 46, 49, 81, 82, 115–117, 124, 103, 108, 131, 129, 141], and of electromagnetic [39, 123, 124], acoustic, and elastic waves. Once this technique is developed for one type of wave, it can easily be applied to any other wave problem.... 5. 2 Rectangular Potentials (pp. 28-55) The rectangular potential barrier, as shown in figure 2.1, represents one of the simplest quantum mechanical problems. We will use our transfer matrix formalism, developed in the previous chapter, to determine the transmission and reflection coefficients. Our transfer matrix results will be compared with those obtained with more traditional methods [7, 15, 25, 30]. We will show that the transfer matrix is easy to use and can be readily extended to more complicated shapes of potentials and to disordered systems. Schrödinger's equation is given by \[-\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2}+[V(x)-E]\Psi =0, \qquad (2.1)\] with a potential \[V(x)=\begin{cases}0, & x < -a,\\ V_0, & -a < x < a,\\ 0, & a < x,\end{cases} \qquad (2.2)\] and can be solved analytically; the solution can be found... 6. 3 δ-Function Potential (pp. 56-73) In physical applications, it is often useful to consider a simplified form of the rectangular potential, namely, the δ-function potential, \[V(x)=\frac{\hbar^2}{2m}\Lambda\,\delta(x). \qquad (3.1)\] The potential (3.1) can be obtained from the rectangular potential, defined by equation (2.2), in the limit of infinitesimally narrow barrier width, 2a → 0, (3.2) and infinitesimally high barrier height, \[V_0=\frac{\hbar^2}{2m}\frac{\Lambda}{2a}\to\infty, \qquad (3.3)\] in such a way that the product 2aV₀ = ħ²Λ/(2m) is constant. The potential (3.1) represents either a potential barrier (Λ > 0) or a potential well (Λ < 0) [7, 15]. In this chapter, we will study first the transmission of a quantum particle through a single... 7. 4 Kronig-Penney Model (pp. 74-97) In section 3.4 we studied the transmission of a quantum particle (electron) through N identical δ-function repulsive or attractive potentials. We calculated how the transmission coefficient depends on the parameter kℓ, where ℓ is the distance between two neighboring δ-function potential barriers and k is the wave vector of the incident particle. We found intervals of kℓ in which the transmission coefficient decreases exponentially as N increases and becomes infinitesimally small in the limit of N → ∞. These intervals were separated by other intervals in which the transmission coefficient, as a function of kℓ, oscillates and is close to... 8. 5 Tight Binding Model (pp. 98-119) In this chapter, we introduce the most important ideas of electron propagation in periodic lattices, such as energy bands and gaps, the density of states, effective mass and group velocity of the electron, and the Fermi energy. We also derive the transfer matrix that enables us to find the energy of a bound state and to calculate the transmission of an electron through a system of N particles. We begin by introducing and examining a very simple model, the so-called tight binding model [11, 23, 75, 103, 115], defined by Schrödinger's equation, \[i\hbar \frac{\partial c_n}{\partial t}=\varepsilon_n c_n + V_n c_{n+1} + V_n^{*} c_{n-1}. \qquad (5.1)\] The tight binding model given by equation... 9. 6 Tight Binding Models of Crystals (pp. 120-136) In this chapter we study how the spatial periodicity of the system influences the structure of the energy spectrum. We introduce two tight binding models, the first one with a period of 2a, and the second one with a period of 4a. We show that the spectrum of the allowed energies changes considerably when the spatial period of the lattice increases. The energy band splits into subbands separated from each other by gaps [23, 39]. Also, the wave function does not have the simple form of a plane wave, but possesses a more complicated spatial structure, known as Bloch... 10. 7 Disordered Models (pp. 137-172) In chapter 6 we studied the transmission of a quantum particle in an infinite periodic system. We found that the periodicity of the system creates bands and gaps in the energy spectrum. In the band, the particle moves freely throughout the sample for all allowed energies. This is due to the periodicity of the system, which enables successful interference of the back and forth scattered waves. In the band gap, there are no states at all and the transmission coefficient is zero. We have also learned that a single impurity creates an isolated energy level which lies in the band... 11.
8 Numerical Solution of the Schrödinger Equation (pp. 173-180) The one-dimensional Schrödinger equation \[-\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2}+V(x)\Psi(x)=E\Psi(x) \qquad (8.1)\] can be solved analytically only for a few elementary problems [15]. In most of the applications, we have to find the transmission and the energy spectrum numerically. In this chapter, we describe the simplest numerical algorithm for solution of the Schrödinger equation, and discuss the accuracy of the results obtained. We describe a simple numerical algorithm that enables us to treat various scattering problems numerically. Applying this algorithm to the simplest problem—that of the free particle—enables us to discuss the accuracy of the numerical algorithm and to estimate the numerical error of our... 12. 9 Transmission and Reflection of Plane Electromagnetic Waves on an Interface (pp. 181-204) In this chapter, we will investigate the very basic phenomena of transmission and reflection of an electromagnetic wave propagating through the interface between two media. From the requirements of the continuity of the tangential components of the electric and the magnetic fields, we derive the transfer matrix for a single interface between two media. Its elements determine the transmission and reflection amplitudes for both electric and magnetic fields. Next, we study the behavior of the electromagnetic waves incident on the surface of a dielectric and a metal. We learn how different electromagnetic properties of these materials influence the transmission and reflection... 13. 10 Transmission and Reflection Coefficients for a Slab (pp. 205-224) Consider now a slab of finite thickness with permittivity ε2 and permeability μ2, located between two semi-infinite media with electromagnetic parameters (ε1, μ1) and (ε3, μ3), respectively. We want to calculate transmission and reflection amplitudes for a plane wave arriving from the left for both TE and TM polarizations. As in section 9.2, we assume that the permittivity and permeability of the incoming and outgoing media are real. This might not be true for the parameters of the slab. Transmission through a planar slab is schematically shown in figure 10.1. We see that the problem is more complicated than that of... 14. 11 Surface Waves (pp. 225-242) In this chapter we study an interesting phenomenon, namely, the excitation of surface waves. We will see that for an appropriate choice of the electromagnetic parameters an interface between two media can support the excitation of surface waves [2, 58]. Surface waves can propagate along the interface and decay exponentially as a function of the distance from the surface, as shown in figure 11.1. This phenomenon has no analogy in quantum physics. We analyze first the surface waves on a single interface between two media. We find that surface waves propagate only along the interface separating two media with opposite signs... 15. 12 Resonant Tunneling through Double-Layer Structures (pp. 243-248) In chapter 10 we learned that the transmission through a dielectric slab is determined by the relation of the slab thickness to the wavelength of the electromagnetic wave inside the slab. In particular, the transmission coefficient is close to 1 when the wavelength of the electromagnetic wave in the z-direction is proportional to even multiples of the slab thickness. Now, we will study the transmission of the electromagnetic wave through a system of two slabs, embedded in a homogeneous material, as shown in figure 12.1 [39].
We concentrate on the case when there is no wave propagation in the layer a,... 16. 13 Layered Electromagnetic Medium: Photonic Crystals (pp. 249-274) Previously, in chapters 9 and 10, we analyzed the transmission of electromagnetic waves through a single interface and through a thin slab of finite width. In chapter 12 we learned that the transmission through two slabs of the same material leads to resonant transmission, even if resonant transmission through one layer is not possible. This indicates that more complicated structures could have new transmission properties not observable in single components of the structure. To investigate this problem in more detail, in this chapter we will apply the transfer matrix formalism to analysis of the transmission coefficient through layered periodic media,... 17. 14 Effective Parameters (pp. 275-285) Up to now, we have analyzed the transmission of electromagnetic waves in homogeneous media. The only inhomogeneities were given by the interfaces between two homogeneous materials. We assumed that the distance between two adjacent interfaces is larger than, or at least comparable to, the wavelength of the propagating electromagnetic wave. Now we will analyze structures that possess inhomogeneities much smaller than the wavelength. A simple example of such a structure is the layered medium shown in figure 13.2, where the thickness of each slab, a and b, is much smaller than λ. In such a case, the propagating electromagnetic wave... 18. 15 Wave Propagation in Nonlinear Structures (pp. 286-297) In this chapter, we will investigate the nonlinear response of wave propagation in one-dimensional structures. When dielectric materials are arranged periodically, electromagnetic waves at some frequencies are forbidden to propagate. This was discussed in detail in chapter 13. Most of the interest in multilayer structures focuses on the linear regime, in which the dielectric constant is independent of the field strength. However, the presence of optical nonlinearity in a system leads to a much richer and more complex response to radiation. We will see that the transmission coefficient is a function of the intensity of the incoming electromagnetic wave. This... 19. 16 Left-Handed Materials (pp. 298-320) In this chapter, we summarize the electromagnetic properties of so-called left-handed materials. This name was given to man-made composites that possess, in a certain frequency region, negative real parts of both the permittivity and permeability. We have discussed some properties of left-handed materials in previous chapters. In chapter 9, we found that an interface between a vacuum and a left-handed medium might allow perfect transmission. In chapter 11, we analyzed in detail the existence of surface electromagnetic waves localized at the interface between vacuum and a left-handed material. In chapter 13, we used left-handed materials in construction of infinite layered... 20. Appendix A Matrix Operations (pp. 321-326) 21. Appendix B Summary of Electrodynamics Formulas (pp. 327-340) 22. Bibliography (pp. 341-348) 23. Index (pp. 349-352)
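The transfer-matrix machinery sketched in chapters 1-2 above is compact enough to show in a few lines. Here is a hedged Python illustration (mine, not the book's; units ħ = m = 1 and the sample numbers are assumptions) that computes the transmission coefficient of the rectangular barrier of equation (2.2) by multiplying 2×2 interface and propagation matrices, and checks the result against the standard analytic tunneling formula.

```python
import numpy as np

hbar = m = 1.0  # natural units (assumption)

def interface(ka, kb):
    """Match psi and psi' across an interface, mapping amplitudes from region a to region b."""
    r = ka / kb
    return 0.5 * np.array([[1 + r, 1 - r],
                           [1 - r, 1 + r]], dtype=complex)

def propagate(k, L):
    """Carry plane-wave amplitudes across a region of width L with wavevector k."""
    return np.array([[np.exp(1j * k * L), 0],
                     [0, np.exp(-1j * k * L)]], dtype=complex)

def transmission(E, V0, L):
    k1 = np.sqrt(2 * m * E) / hbar                # outside the barrier
    k2 = np.sqrt(2 * m * (E - V0 + 0j)) / hbar    # inside (imaginary when E < V0)
    M = interface(k2, k1) @ propagate(k2, L) @ interface(k1, k2)
    t = M[0, 0] - M[0, 1] * M[1, 0] / M[1, 1]     # no wave incident from the right
    return abs(t) ** 2

E, V0, L = 0.5, 1.0, 2.0
kappa = np.sqrt(2 * m * (V0 - E)) / hbar
T_exact = 1.0 / (1.0 + V0**2 * np.sinh(kappa * L)**2 / (4 * E * (V0 - E)))
print(transmission(E, V0, L), T_exact)            # the two numbers agree
```

The same three-matrix pattern extends to arbitrary stacks of layers, which is exactly why the method generalizes so cleanly to the disordered and layered systems of the later chapters.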
Ehrenfest's theorem, to my level of understanding, says that expectation values of quantum mechanical observables obey their Newtonian mechanics counterparts, which means that we can use Newton's laws on expectation values. However, in the case of the quantum harmonic oscillator, this clearly does not look Newtonian, because the expectation value of the position does not oscillate like the Newtonian $\sin\omega t$. These states are of the form $\psi=K(n, \xi)e^{-\xi^2/2}$. Why do they not obey Ehrenfest's theorem? They don't give a harmonic oscillator, imo. 3 Answers It is actually true, in an almost trivial way. The Ehrenfest theorem states that, \begin{equation} \frac{d}{dt}\langle x\rangle=\frac{\langle p\rangle}{m},\quad \frac{d}{dt}\langle p\rangle =- \langle V'(x)\rangle \end{equation} However, for all eigenfunctions of the harmonic oscillator $\langle x\rangle=0$ (and therefore $\langle V'(x)\rangle=0$) and $\langle p\rangle=0$. So the Ehrenfest theorem on the eigenstates reduces to $0=0$. You can see that the general version of the Ehrenfest theorem works trivially for all eigenstates. It states that for an arbitrary observable $A$ its expectation value satisfies the equation, \begin{equation} \frac{d}{dt} \langle A\rangle=\frac{1}{i\hbar}\langle [A,H]\rangle+\left\langle \frac{\partial A}{\partial t}\right\rangle \end{equation} However, on the eigenstates, \begin{equation} \langle\psi_n| [A,H]|\psi_n\rangle=\langle\psi_n|AH-HA|\psi_n\rangle=E_n\langle\psi_n|A-A|\psi_n\rangle=0 \end{equation} So the expectation value of an observable that doesn't explicitly depend on time doesn't evolve on the eigenstates, which is what you would expect. So where does the Ehrenfest theorem lead to classical dynamics? You need to consider localized wavepackets. The simplest example would be the coherent state of the harmonic oscillator, that is, the Gaussian wavepacket that follows the classical trajectory. [figure: coherent state evolution] For the harmonic oscillator the Ehrenfest theorem is always "classical", if only in a trivial way (as in the case of the eigenstates). However, in general the Ehrenfest theorem reduces to the classical equation of motion only on such localized wavepackets that concentrate near the classical trajectory as $\hbar$ goes to zero. The key point happens to be the interchange $\langle V'(x)\rangle \mapsto V'(\langle x\rangle)$, which on general states can't be done. So if you want to recover some classical dynamics from the quantum theory, look at localized wavepackets. 1. Your version of $\psi$ is (I'm sure you know) derived from the time-independent Schrödinger equation, $\hat{H}\psi=E\psi$. To find time-dependent solutions, we solve $$i\hbar\frac{\partial}{\partial t}\psi=\hat{H}\psi.$$ You were trying to solve for stationary states, and the entire point of those is that $|\psi(x)|^2$ does not change over time. Still, for these stationary solutions, $$\frac{\mathrm{d}}{\mathrm{d}t}\langle x\rangle=\frac{\langle p\rangle}{m}=0;\quad \frac{\mathrm{d}}{\mathrm{d}t}\langle p\rangle=-\left\langle\frac{\mathrm{d}}{\mathrm{d}x}V(x)\right\rangle=0,$$ which is in accordance with the Ehrenfest theorem (albeit uninformatively). 2. Be careful about the time dependence of your reported $\psi$: you're actually dealing with $\Psi(x, t)=\psi(x)e^{-itE/\hbar}$ for those stationary states.
Of course, this doesn't relate to the Ehrenfest theorem, but it's something worth mentioning: the complex and real parts are oscillating, as shown by the pink and blue lines in this diagram (from the Wikipedia page on the QHO): [figure: complex and real parts of some QHO states] Do not make the mistake of assuming that all derivatives with respect to time are automatically equal to zero because the wavefunction looks time-independent. Contrary to what the shorthand notation suggests, we do have a (separable) time-dependent part. 3. Referring to the same diagram, observe parts G and H: these represent coherent states, which can be understood using Ehrenfest's theorem because $|\psi|^2$ looks like a Gaussian which follows a classical $\sin$ or $\cos$ function. Kanasugi, H., and H. Okada. "Systematic Treatment of General Time-Dependent Harmonic Oscillator in Classical and Quantum Mechanics." Progress of Theoretical Physics, vol. 93, no. 5, 1995, pp. 949–960, doi:10.1143/ptp/93.5.949. What you wrote down is just a complete set of solutions of the time-independent Schrödinger equation. \begin{align} \left[- \frac{\hbar^2}{2m} \Delta + V(x) \right] \Psi(x) = E \Psi(x) \end{align} Of course these solutions don't carry any time dependence, because the time-independent Schrödinger equation (in this representation) only makes statements about functions that depend on the spatial coordinates alone (in this case, only x). How to get to the time-dependent solutions? A particular property of the solutions $\Psi_{E}$ of the time-independent Schrödinger equation is that $\Psi_{E}(x) e^{-i \frac{E}{\hbar}t}$ is a solution of the time-dependent Schrödinger equation. If you apply this to your suggested set of solutions of the harmonic oscillator, you arrive at time-dependent solutions. THOSE are the ones that the Ehrenfest theorem makes a statement about. You would then calculate that the expectation values $\langle X\rangle$ and $\langle P\rangle$ do not change. But you would as well calculate that both of these values are 0. This is in perfect agreement with the theorems of Ehrenfest, and the classical analogue would be a particle resting at the deepest spot of the harmonic potential.
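To make the coherent-state point concrete, here is a small numerical sketch (my own illustration, not from the answers above; units ħ = m = ω = 1 and the grid parameters are assumptions) that evolves a displaced ground-state Gaussian in the harmonic potential and checks that ⟨x⟩(t) follows the classical x₀ cos t, exactly as Ehrenfest's theorem predicts for a quadratic potential.

```python
import numpy as np

# Harmonic oscillator on a grid (hbar = m = omega = 1), second-order finite differences.
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
D2 = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
      + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -0.5 * D2 + np.diag(0.5 * x**2)

# Coherent state: the ground-state Gaussian displaced by x0.
x0 = 2.0
psi = np.exp(-0.5 * (x - x0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Exact time evolution within the discretization, via the spectral decomposition of H.
E, V = np.linalg.eigh(H)
c = V.conj().T @ psi
for t in [0.0, np.pi / 2, np.pi, 2 * np.pi]:
    psi_t = V @ (np.exp(-1j * E * t) * c)
    mean_x = np.real(np.sum(psi_t.conj() * x * psi_t) * dx)
    print(f"t = {t:5.3f}:  <x> = {mean_x:+.4f}   classical x0*cos(t) = {x0 * np.cos(t):+.4f}")
```

For an eigenstate instead of the displaced Gaussian, the same loop prints ⟨x⟩ = 0 at every time, which is the trivial "0 = 0" reading of the theorem discussed above.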
I want to solve numerically the one-dimensional time-dependent Schrödinger equation $$i\hbar\,\psi_t(x,t)=-\frac{\hbar^2}{2m} \psi''(x,t)$$ My issue is that I don't have the physical background to understand what the correct boundary conditions/initial state are, and I don't know how to tell whether the solution I get is the correct one. So I want to try to reproduce the same solution I found on Wikipedia, shown below: [animation from Wikipedia: a travelling wavepacket] What I've seen is that one usually discretizes $\psi''(x,t)$ with the usual central finite difference scheme $$\psi''(x_i,t) = \frac{\psi_{i+1}(t) - 2\psi_i(t) + \psi_{i-1}(t)}{h^2} + \mathcal{O}(h^2)$$ and hence the PDE becomes a system of ODEs that I can solve with an appropriate method. Here are my questions: • What boundary conditions do I have to impose to get a behaviour like the one in the figure? The solution does not appear to have a "fixed" value. How can I impose them? (I'd need an answer in terms of which entries of the matrix I should change.) • What initial condition do I have to impose to get a "wave" like the one in the picture? Following the suggestion of @AloneProgrammer, I focus on the particle-in-a-box case, where my domain now is $[0,1]$ and I have $0$ potential inside the domain, and $V(x) = \infty$ outside. In this configuration, the boundary conditions at $0$ and $L$ are homogeneous Dirichlet, i.e. $$ \psi(0,t) = \psi(L,t) = 0 $$ Hence the PDE becomes (for the moment I drop $\hbar$ and set $m=1$): \begin{cases} \psi_t = \frac{i}{2} \psi_{xx} \\ \psi(0,t) = \psi(1,t) = 0 \\ \psi(x,0) = \sin(2 \pi x) \end{cases} where I choose a sine as the initial datum $\psi(x,0)$. I integrate up to time $T=1$ using a suitable numerical method for the time integration and discretizing with finite differences in space as written above. I show in the following the plots of the real and imaginary parts at different times. [figures: real and imaginary parts of the solution at t = 0.1 and t = 0.3] • Do you insist on reproducing that particular animation? I think it's better to solve a free particle in a box, where you have an analytical solution, and then compare your numerical results with that. If you agree I will elaborate a bit more in an answer. Also, I don't think it's a good idea to convert the Schrödinger equation, which is a PDE, to an ODE. I will elaborate a bit more about that as well in my answer if you want. Dec 19 '19 at 3:42 • Yes, if you could elaborate it as an answer it would be perfect! Especially for the case where I have an analytical solution to compare against. Anyway, I prefer to keep my approach for the moment, since it's just a simple method of lines. – VoB Dec 19 '19 at 6:46 • @AloneProgrammer I edited my answer considering a free particle in a box as you suggested. Is it okay in your opinion? I know that usually one ends up solving the time-independent equation, but I have to solve the time-dependent one. I don't know what the analytical solution could be, and moreover I don't know what I am supposed to plot once I have found $\psi(x,t)$. I just plotted the real and imaginary parts of the solution at different times like $t=0.1, 0.3$. – VoB Dec 19 '19 at 9:19 • Moreover, I noticed that the squared norm of the solution at each time step is always equal to the norm of the initial data $\psi(x,0)= \sin(2 \pi x)$, which I know is a property that has to be satisfied. – VoB Dec 19 '19 at 10:05 • It's a standard absorbing boundary condition.
There are many references one can find for it regarding the Schrödinger equation. Dec 19 '19 at 20:06 Regarding the boundary conditions: Don't be fooled by Wikipedia. Yes, the scenario in the picture suggests an absorption at the boundaries, and yes, one could use absorbing boundary conditions in order to reproduce that numerically. In the simple case of a wavepacket these are readily available, because in the end there exists an analytical solution for the wavefunction. For more complex scenarios, however, absorbing boundary conditions are usually not that simple, and the numerical schemes always entail approximations, i.e. they won't be completely reflectionless. In this answer, I tried to give a short overview of basically the simplest approach to absorbing boundaries (or better, absorbing boundary regions). In quantum mechanics, this often goes under the name complex absorbing potentials (CAPs). So, if you simply want to reproduce this picture, just take a grid that is 10 times as large, and show only the inner region. The resulting picture will look exactly like the one from Wikipedia, but is obtained with a much smaller effort ... which is why I strongly believe the picture on Wikipedia has been produced in exactly this way. Note further that the particle-in-a-box case is completely different from the travelling-wavepacket case, as this model necessarily implies reflections at the boundaries. It would make absolutely no sense to apply absorbing BCs here. Initial conditions: The initial condition is a wavepacket. In general, this means you have an arbitrary mixture of plane waves $e^{ikx}$. A common example is to construct it on the basis of a plane wave $e^{ikx}$ by applying a Gaussian envelope with mean $x_0$ and width $\lambda$ to it, i.e. $$ \Psi(x, t=0) = e^{ikx} e^{-\frac{(x-x_0)^2}{2\lambda^2}} $$ As usual, this should further be normalized so that $||\Psi(x, t=0)||=1$. As David said, absorbing boundary conditions won't be completely reflectionless. That said, we can reduce reflections quite a bit, which helps to avoid influence from the boundaries while the particle is still travelling inside. Since this is a time-dependent problem, one simple choice of boundary conditions looks like this. • At the left boundary: $$\frac{\partial \Psi}{\partial x}=-i k(E)\Psi =-i \sqrt{\frac{2mE}{\hbar^2}} \Psi$$ • At the right boundary: $$\frac{\partial \Psi}{\partial x}=i k(E)\Psi =i \sqrt{\frac{2mE}{\hbar^2}} \Psi$$ What we are doing here is telling our solution that it may pass the left boundary if it wants to travel in the negative x direction, and it may pass the right boundary if it wants to travel in the positive direction. As for the energy, we should clearly set: $$E=i\hbar \frac{\partial }{\partial t}$$ Now you may wonder how to extract the square root of a derivative, and one good choice is to approximate it: $$\sqrt{\frac{2mE}{\hbar^2}} \approx \sqrt{\frac{2mE_0}{\hbar^2}} \frac{3E+E_0}{E+3E_0}$$ It's a good and well-known rational approximation to the square root. In the case of a wavepacket, we can take: $$E_0=\frac{\hbar^2 k_0^2}{2m}$$ where $k_0$ is the initial momentum. Now that we have a rational function, we can write, for the left boundary:
$$\left(i\hbar \frac{\partial }{\partial t}+3E_0 \right) \frac{\partial \Psi}{\partial x}=-i \sqrt{\frac{2mE_0}{\hbar^2}} \left(3i\hbar \frac{\partial }{\partial t}+E_0 \right) \Psi$$ Now any proper finite difference scheme can be used with these boundary conditions in a very direct way. Since we have taken an approximation for the square root, with an arbitrary parameter $E_0$, there are still going to be reflections. There are ways to improve this approximation, for example using rational functions of higher order. Here's an illustration of how these boundary conditions work. You can see a small part of the wavepacket being reflected at the right boundary, but there seems to be no reflection on the left. Note that it depends on both the group velocity and the spread of the packet. [figure: wavepacket leaving the domain with only slight reflection at the right boundary] Here's a more impressive example. Note how we can clearly observe the two parts of the initial wavepacket - reflected and transmitted. If we changed the boundaries to hard walls here, they would quickly reflect back at each other and make a mess. [figure: reflected and transmitted parts of a wavepacket passing out of the domain]
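To tie the two answers together, here is a minimal Crank-Nicolson sketch of the complex-absorbing-potential (CAP) route mentioned in the first answer (my own illustration; the grid, packet parameters, and the quartic absorber width and strength are all assumptions): a Gaussian wavepacket travels to the right and is damped in an absorbing strip near the edge instead of bouncing off a hard wall.

```python
import numpy as np

# Grid, rightward-moving Gaussian packet, and quartic absorbing strips (hbar = m = 1).
N, L, dt, steps = 600, 100.0, 0.05, 800
x = np.linspace(0.0, L, N); dx = x[1] - x[0]
k0, x0, lam = 2.0, 30.0, 4.0
psi = np.exp(1j * k0 * x - (x - x0) ** 2 / (2 * lam ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

strip = 15.0                                   # width of the absorbing strips at both edges
W = np.zeros(N)
for edge in (x[0], x[-1]):
    d = np.abs(x - edge)
    W += np.where(d < strip, ((strip - d) / strip) ** 4, 0.0)

# H = -(1/2) d^2/dx^2 - 5i W(x): kinetic term plus a complex absorbing potential.
H = (np.diag(np.full(N, 1.0 / dx**2 + 0j))
     - 0.5 * (np.diag(np.full(N - 1, 1.0), 1) + np.diag(np.full(N - 1, 1.0), -1)) / dx**2
     - 5j * np.diag(W))

# Crank-Nicolson step: (1 + i dt H / 2) psi_new = (1 - i dt H / 2) psi_old.
P = np.linalg.solve(np.eye(N) + 0.5j * dt * H, np.eye(N) - 0.5j * dt * H)
for _ in range(steps):
    psi = P @ psi

print("norm remaining in the box:", np.sum(np.abs(psi) ** 2) * dx)  # most of it absorbed
```

Compared with the rational-approximation boundary condition above, the CAP trades some tunability for a much simpler implementation; widening the strip or softening its profile reduces the residual reflection further.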
Quantum cognition Quantum cognition is an emerging field which applies the mathematical formalism of quantum theory to model cognitive phenomena such as information processing by the human brain, language, decision making, human memory, concepts and conceptual reasoning, human judgment, and perception.[1][2][3][4] The field clearly distinguishes itself from the quantum mind, as it is not reliant on the hypothesis that there is something micro-physical quantum mechanical about the brain. Quantum cognition is based on the quantum-like paradigm,[5][6] generalized quantum paradigm,[7] or quantum structure paradigm[8] that information processing by complex systems such as the brain, taking into account contextual dependence of information and probabilistic reasoning, can be mathematically described in the framework of quantum information and quantum probability theory. Quantum cognition uses the mathematical formalism of quantum theory to inspire and formalize models of cognition that aim to be an advance over models based on traditional classical probability theory. The field focuses on modeling phenomena in cognitive science that have resisted traditional techniques or where traditional models seem to have reached a barrier (e.g., human memory),[9] and modeling preferences in decision theory that seem paradoxical from a traditional rational point of view (e.g., preference reversals).[10] Since the use of a quantum-theoretic framework is for modeling purposes, the identification of quantum structures in cognitive phenomena does not presuppose the existence of microscopic quantum processes in the human brain.[11] Main subjects of research Quantum-like models of information processing ("quantum-like brain") The brain is definitely a macroscopic physical system operating on scales (of time, space, temperature) which differ crucially from the corresponding quantum scales. (Macroscopic quantum physical phenomena, such as the Bose-Einstein condensate, are also characterized by special conditions which are definitely not fulfilled in the brain.) In particular, the brain's temperature is simply too high for it to be able to perform real quantum information processing, i.e., to use quantum carriers of information such as photons, ions, and electrons. As is commonly accepted in brain science, the basic unit of information processing is a neuron. It is clear that a neuron cannot be in a superposition of two states: firing and non-firing. Hence, it cannot produce the superpositions that play the basic role in quantum information processing. Superpositions of mental states are created by complex networks of neurons (and these are classical neural networks). The quantum cognition community states that the activity of such neural networks can produce effects formally described as interference (of probabilities) and entanglement. In principle, the community does not try to create concrete models of quantum(-like) representation of information in the brain.[12] The quantum cognition project is based on the observation that various cognitive phenomena are more adequately described by quantum information theory and quantum probability than by the corresponding classical theories (see examples below). Thus the quantum formalism is considered an operational formalism that describes nonclassical processing of probabilistic data. Recent derivations of the complete quantum formalism from simple operational principles for representation of information support the foundations of quantum cognition.
Although at the moment we cannot present concrete neurophysiological mechanisms for the creation of the quantum-like representation of information in the brain,[13] we can present general informational considerations supporting the idea that information processing in the brain matches quantum information and probability. Here, contextuality is the key word; see the monograph of Khrennikov for a detailed presentation of this viewpoint.[1] Quantum mechanics is fundamentally contextual.[14] Quantum systems do not have objective properties which can be defined independently of measurement context. (As was pointed out by N. Bohr, the whole experimental arrangement must be taken into account.) Contextuality implies the existence of incompatible mental variables, violation of the classical law of total probability, and (constructive and destructive) interference effects. Thus the quantum cognition approach can be considered as an attempt to formalize the contextuality of mental processes by using the mathematical apparatus of quantum mechanics. Decision making Suppose a person is given an opportunity to play two rounds of the following gamble: a coin toss will determine whether the subject wins $200 or loses $100. Suppose the subject has decided to play the first round, and does so. Some subjects are then given the result (win or lose) of the first round, while other subjects are not yet given any information about the results. The experimenter then asks whether the subject wishes to play the second round. Performing this experiment with real subjects gives the following results: 1. When subjects believe they won the first round, the majority of subjects choose to play again on the second round. 2. When subjects believe they lost the first round, the majority of subjects choose to play again on the second round. Given these two separate choices, according to the sure thing principle of rational decision theory, they should also play the second round even if they don't know or think about the outcome of the first round.[15] But, experimentally, when subjects are not told the results of the first round, the majority of them decline to play a second round.[16] This finding violates the law of total probability, yet it can be explained as a quantum interference effect in a manner similar to the explanation for the results from the double-slit experiment in quantum physics.[2][17][18] Similar violations of the sure-thing principle are seen in empirical studies of the Prisoner's Dilemma and have likewise been modeled in terms of quantum interference.[19] The above deviations from classical rational expectations in agents' decisions under uncertainty produce well-known paradoxes in behavioral economics, that is, the Allais, Ellsberg and Machina paradoxes.[20][21][22] These deviations can be explained if one assumes that the overall conceptual landscape influences the subject's choice in a way that is neither predictable nor controllable. A decision process is thus an intrinsically contextual process; hence it cannot be modeled in a single Kolmogorovian probability space, which justifies the employment of quantum probability models in decision theory.
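As a purely illustrative sketch (the relative phase below is a made-up number, and the conditional rates are only roughly those reported by Tversky and Shafir; this is a toy, not any published model), one can see how a quantum representation breaks the law of total probability: representing the unknown first-round outcome as a superposition introduces an interference term that can push the probability of playing again below both conditional probabilities.

```python
import numpy as np

# Amplitudes for "play the second round" conditional on a won / lost first round.
# |a_w|^2 and |a_l|^2 are the classical conditional probabilities (about 0.69 and
# 0.59, roughly the rates Tversky and Shafir report); the phase is arbitrary.
a_w = np.sqrt(0.69)
a_l = np.sqrt(0.59) * np.exp(1j * 2.5)

# Known outcome: classical law of total probability for a fair coin.
p_classical = 0.5 * abs(a_w) ** 2 + 0.5 * abs(a_l) ** 2

# Unknown outcome: superpose the two first-round branches before squaring.
p_quantum = abs(np.sqrt(0.5) * a_w + np.sqrt(0.5) * a_l) ** 2

print(f"classical (must lie between 0.59 and 0.69): {p_classical:.3f}")
print(f"with interference:                          {p_quantum:.3f}")
# The cross term 2*Re(0.5*conj(a_w)*a_l) is negative for this phase, so the
# probability drops below both conditionals, mirroring the empirical drop
# when subjects are not told the first-round result.
```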
More explicitly, the paradoxical situations above can be represented in a unified Hilbert space formalism where human behavior under uncertainty is explained in terms of genuine quantum aspects, namely, superposition, interference, contextuality and incompatibility.[23][24][25][18] Considering automated decision making, quantum decision trees have a different structure compared to classical decision trees. Data can be analyzed to see if a quantum decision tree model fits the data better.[26] Human probability judgments Quantum probability provides a new way to explain human probability judgment errors, including the conjunction and disjunction errors.[27] A conjunction error occurs when a person judges the probability of the conjunction of a likely event L and an unlikely event U to be greater than the probability of the unlikely event U alone; a disjunction error occurs when a person judges the probability of a likely event L to be greater than the probability of the disjunction of L with an unlikely event U. Quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of von Neumann axioms that relax some of the classic Kolmogorov axioms.[28] The quantum model introduces a new fundamental concept to cognition—the compatibility versus incompatibility of questions and the effect this can have on the sequential order of judgments. Quantum probability provides a simple account of conjunction and disjunction errors as well as many other findings, such as order effects on probability judgments.[29][30][31] The liar paradox - The contextual influence of a human subject on the truth behavior of a cognitive entity is explicitly exhibited by the so-called liar paradox, that is, the truth value of a sentence like "this sentence is false". One can show that the true-false state of this paradox is represented in a complex Hilbert space, while the typical oscillations between true and false are dynamically described by the Schrödinger equation.[32][33] Knowledge representation Concepts are basic cognitive phenomena, which provide the content for inference, explanation, and language understanding. Cognitive psychology has researched different approaches for understanding concepts, including exemplars, prototypes, and neural networks, and different fundamental problems have been identified, such as the experimentally tested non-classical behavior of the conjunction and disjunction of concepts, more specifically the Pet-Fish problem or guppy effect,[34] and the overextension and underextension of typicality and membership weight for conjunction and disjunction.[35][36] By and large, quantum cognition has drawn on quantum theory in three ways to model concepts. 1. Exploit the contextuality of quantum theory to account for the contextuality of concepts in cognition and language and the phenomenon of emergent properties when concepts combine[11][37][38][39][40] 2. Use quantum entanglement to model the semantics of concept combinations in a non-decompositional way, and to account for the emergent properties/associates/inferences in relation to concept combinations[41] 3.
Use quantum superposition to account for the emergence of a new concept when concepts are combined, and as a consequence put forward an explanatory model for the Pet-Fish problem situation, and the overextension and underextension of membership weights for the conjunction and disjunction of concepts.[29][37][38] The large amount of data collected by Hampton[35][36] on the combination of two concepts can be modeled in a specific quantum-theoretic framework in Fock space, where the observed deviations from classical (fuzzy) set theory, the above-mentioned over- and under-extension of membership weights, are explained in terms of contextual interactions, superposition, interference, entanglement and emergence.[29][42][43][44] Moreover, a cognitive test on a specific concept combination has been performed which directly reveals, through the violation of Bell's inequalities, quantum entanglement between the component concepts.[45][46] Semantic analysis and information retrieval The research above had a deep impact on the understanding and initial development of a formalism to obtain semantic information when dealing with concepts, their combinations and variable contexts in a corpus of unstructured documents. This conundrum of natural language processing (NLP) and information retrieval (IR) on the web – and databases in general – can be addressed using the mathematical formalism of quantum theory. As basic steps, (a) K. Van Rijsbergen introduced a quantum structure approach to IR,[47] (b) Widdows and Peters utilised a quantum logical negation for a concrete search system,[40][48] and (c) Aerts and Czachor identified quantum structure in semantic space theories, such as latent semantic analysis.[49] Since then, the employment of techniques and procedures induced from the mathematical formalisms of quantum theory – Hilbert space, quantum logic and probability, non-commutative algebras, etc. – in fields such as IR and NLP has produced significant results.[50] Gestalt perception There are apparent similarities between Gestalt perception and quantum theory. In an article discussing the application of Gestalt to chemistry, Anton Amann writes: "Quantum mechanics does not explain Gestalt perception, of course, but in quantum mechanics and Gestalt psychology there exist almost isomorphic conceptions and problems: • Similarly as with the Gestalt concept, the shape of a quantum object does not a priori exist but it depends on the interaction of this quantum object with the environment (for example: an observer or a measurement apparatus). • Quantum mechanics and Gestalt perception are organized in a holistic way. Subentities do not necessarily exist in a distinct, individual sense. • In quantum mechanics and Gestalt perception objects have to be created by elimination of holistic correlations with the 'rest of the world'."[51] Each of the points mentioned in the above text, explained in a simplified manner (the explanations below correlate respectively with the points above): • As an object in quantum physics doesn't have any shape until and unless it interacts with its environment, objects from the Gestalt perspective do not hold as much meaning individually as they do when there is a "group" of them or when they are present in an environment. • Both in quantum mechanics and in Gestalt perception, objects must be studied as a whole rather than by finding properties of individual components and interpolating the whole object.
• In the Gestalt concept, creation of a new object from another, previously existing object means that the previously existing object now becomes a sub-entity of the new object, and hence "elimination of holistic correlations" occurs. Similarly, a new quantum object made from a previously existing object means that the previously existing object loses its holistic view. Amann comments: "The structural similarities between Gestalt perception and quantum mechanics are on the level of a parable, but even parables can teach us something, for example, that quantum mechanics is more than just production of numerical results or that the Gestalt concept is more than just a silly idea, incompatible with atomistic conceptions."[51] Ideas for applying the formalisms of quantum theory to cognition first appeared in the 1990s, put forward by Diederik Aerts and his collaborators Jan Broekaert, Sonja Smets and Liane Gabora, by Harald Atmanspacher, Robert Bordley, and Andrei Khrennikov. A special issue on Quantum Cognition and Decision appeared in the Journal of Mathematical Psychology (2009, vol. 53), which planted a flag for the field. A few books related to quantum cognition have been published, including those by Khrennikov (2004, 2010), Ivancivic and Ivancivic (2010), Busemeyer and Bruza (2012), and E. Conte (2012). The first Quantum Interaction workshop was held at Stanford in 2007, organized by Peter Bruza, William Lawless, C. J. van Rijsbergen, and Don Sofge as part of the 2007 AAAI Spring Symposium Series. This was followed by workshops at Oxford in 2008, Saarbrücken in 2009, at the 2010 AAAI Fall Symposium Series held in Washington, D.C., 2011 in Aberdeen, 2012 in Paris, and 2013 in Leicester. Tutorials also were presented annually beginning in 2007 until 2013 at the annual meeting of the Cognitive Science Society. A Special Issue on Quantum Models of Cognition appeared in 2013 in the journal Topics in Cognitive Science. See also 1. Khrennikov, A. (2010). Ubiquitous Quantum Structure: from Psychology to Finances. Springer. ISBN 978-3-642-42495-3. 2. Busemeyer, J.; Bruza, P. (2012). Quantum Models of Cognition and Decision. Cambridge: Cambridge University Press. ISBN 978-1-107-01199-1. 3. Pothos, E. M.; Busemeyer, J. R. (2013). "Can quantum probability provide a new direction for cognitive modeling". Behavioral and Brain Sciences. 36 (3): 255–274. doi:10.1017/S0140525X12001525. PMID 23673021. 4. Wang, Z.; Busemeyer, J. R.; Atmanspacher, H.; Pothos, E. M. (2013). "The potential of using quantum theory to build models of cognition". Topics in Cognitive Science. 5 (4): 672–688. doi:10.1111/tops.12043. PMID 24027215. 5. Khrennikov, A. (2006). "Quantum-like brain: 'Interference of minds'". Biosystems. 84 (3): 225–241. doi:10.1016/j.biosystems.2005.11.005. PMID 16427733. 6. Khrennikov, A. (2004). Information Dynamics in Cognitive, Psychological, Social, and Anomalous Phenomena. Fundamental Theories of Physics. Vol. 138. Kluwer. ISBN 1-4020-1868-1. 7. Atmanspacher, H.; Römer, H.; Walach, H. (2002). "Weak quantum theory: Complementarity and entanglement in physics and beyond". Foundations of Physics. 32 (3): 379–406. doi:10.1023/A:1014809312397. S2CID 118583726. 9. Bruza, P.; Kitto, K.; Nelson, D.; McEvoy, C. (2009). "Is there something quantum-like about the human mental lexicon?". Journal of Mathematical Psychology. 53 (5): 362–377. doi:10.1016/j.jmp.2009.04.004. PMC 2834425. PMID 20224806. 10. Lambert Mogiliansky, A.; Zamir, S.; Zwirn, H. (2009).
"Type indeterminacy: A model of the KT (Kahneman–Tversky)-man". Journal of Mathematical Psychology. 53 (5): 349–361. arXiv:physics/0604166. doi:10.1016/j.jmp.2009.01.001. S2CID 15463046. 11. ^ a b de Barros, J. A.; Suppes, P. (2009). "Quantum mechanics, interference, and the brain". Journal of Mathematical Psychology. 53 (5): 306–313. doi:10.1016/j.jmp.2009.03.005. 12. ^ Khrennikov, A. (2008). "The Quantum-Like Brain on the Cognitive and Subcognitive Time Scales". Journal of Consciousness Studies. 15 (7): 39–77. ISSN 1355-8250. 13. ^ Van den Noort, Maurits; Lim, Sabina; Bosch, Peggy (26 December 2016). "On the need to unify neuroscience and physics". Neuroimmunology and Neuroinflammation. 3 (12): 271. doi:10.20517/2347-8659.2016.55. 14. ^ Khrennikov, A. (2009). Contextual Approach to Quantum Formalism. Fundamental Theories of Physics. Vol. 160. Springer. ISBN 978-1-4020-9592-4. 15. ^ Savage, L. J. (1954). The Foundations of Statistics. John Wiley & Sons. 16. ^ Tversky, A.; Shafir, E. (1992). "The disjunction effect in choice under uncertainty". Psychological Science. 3 (5): 305–309. doi:10.1111/j.1467-9280.1992.tb00678.x. S2CID 144374616. 17. ^ Pothos, E. M.; Busemeyer, J. R. (2009). "A quantum probability explanation for violations of 'rational' decision theory". Proceedings of the Royal Society. B: Biological Sciences. 276 (1665): 2171–2178. doi:10.1098/rspb.2009.0121. PMC 2677606. PMID 19324743. 18. ^ a b Yukalov, V. I.; Sornette, D. (21 February 2010). "Decision theory with prospect interference and entanglement" (PDF). Theory and Decision. 70 (3): 283–328. doi:10.1007/s11238-010-9202-y. hdl:20.500.11850/29070. S2CID 15377072. 19. ^ Musser, George (16 October 2012). "A New Enlightenment". Scientific American. 307 (5): 76–81. doi:10.1038/scientificamerican1112-76. 20. ^ Allais, M. (1953). "Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'ecole Americaine". Econometrica. 21 (4): 503–546. doi:10.2307/1907921. JSTOR 1907921. 21. ^ Ellsberg, D. (1961). "Risk, ambiguity, and the Savage axioms" (PDF). Quarterly Journal of Economics. 75 (4): 643–669. doi:10.2307/1884324. JSTOR 1884324. 22. ^ Machina, M. J. (2009). "Risk, Ambiguity, and the Rank-Dependence Axioms". American Economic Review. 99 (1): 385–392. doi:10.1257/aer.99.1.385. 23. ^ Aerts, D.; Sozzo, S.; Tapia, J. (2012). "A quantum model for the Ellsberg and Machina paradoxes". In Busemeyer, J.; Dubois, F.; Lambert-Mogilansky, A. (eds.). Quantum Interaction 2012. LNCS. Vol. 7620. Berlin: Springer. pp. 48–59. 24. ^ Aerts, D.; Sozzo, S.; Tapia, J. (2014). "Identifying quantum structures in the Ellsberg paradox". International Journal of Theoretical Physics. 53 (10): 3666–3682. arXiv:1302.3850. Bibcode:2014IJTP...53.3666A. doi:10.1007/s10773-014-2086-9. S2CID 119158347. 25. ^ La Mura, P. (2009). "Projective expected utility". Journal of Mathematical Psychology. 53 (5): 408–414. arXiv:0802.3300. doi:10.1016/j.jmp.2009.02.001. S2CID 12099816. 26. ^ Kak, S. (2017). Incomplete Information and Quantum Decision Trees. IEEE International Conference on Systems, Man, and Cybernetics. Banff, Canada, October. doi:10.1109/SMC.2017.8122615. 27. ^ Tversky, A.; Kahneman, D. (1983). "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment". Psychological Review. 90 (4): 293–315. doi:10.1037/0033-295X.90.4.293. 28. ^ Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C. (2018). "A quantum framework for likelihood ratios". International Journal of Quantum Information. 16 (1): 1850002. 
arXiv:1508.00936. Bibcode:2018IJQI...1650002B. doi:10.1142/s0219749918500028. ISSN 0219-7499. S2CID 85523100. 29. ^ a b c Aerts, D. (2009). "Quantum structure in cognition". Journal of Mathematical Psychology. 53 (5): 314–348. arXiv:0805.3850. doi:10.1016/j.jmp.2009.04.005. S2CID 14436506. 30. ^ Busemeyer, J. R.; Pothos, E.; Franco, R.; Trueblood, J. S. (2011). "A quantum theoretical explanation for probability judgment 'errors'" (PDF). Psychological Review. 118 (2): 193–218. doi:10.1037/a0022542. PMID 21480739. 31. ^ Trueblood, J. S.; Busemeyer, J. R. (2011). "A quantum probability account of order effects in inference". Cognitive Science. 35 (8): 1518–1552. doi:10.1111/j.1551-6709.2011.01197.x. PMID 21951058. 32. ^ Aerts, D.; Broekaert, J.; Smets, S. (1999). "The liar paradox in a quantum mechanical perspective". Foundations of Science. 4 (2): 115–132. doi:10.1023/A:1009610326206. S2CID 119404170. 33. ^ Aerts, D.; Aerts, S.; Broekaert, J.; Gabora, L. (2000). "The violation of Bell inequalities in the macroworld". Foundations of Physics. 30 (9): 1387–1414. arXiv:quant-ph/0007044. Bibcode:2000quant.ph..7044A. doi:10.1023/A:1026449716544. S2CID 3262876. 34. ^ Osherson, D. N.; Smith, E. E. (1981). "On the adequacy of prototype theory as a theory of concepts". Cognition. 9 (1): 35–58. doi:10.1016/0010-0277(81)90013-5. PMID 7196818. S2CID 10482356. 35. ^ a b Hampton, J. A. (1988). "Overextension of conjunctive concepts: Evidence for a unitary model for concept typicality and class inclusion". Journal of Experimental Psychology: Learning, Memory, and Cognition. 14 (1): 12–32. doi:10.1037/0278-7393.14.1.12. 36. ^ a b Hampton, J. A. (1988). "Disjunction of natural concepts". Memory & Cognition. 16 (6): 579–591. doi:10.3758/BF03197059. PMID 3193889. 37. ^ a b Aerts, D.; Gabora, L. (2005). "A state-context-property model of concepts and their combinations I: The structure of the sets of contexts and properties". Kybernetes. 34 (1&2): 167–191. arXiv:quant-ph/0402207. doi:10.1108/03684920510575799. S2CID 15124657. 38. ^ a b Aerts, D.; Gabora, L. (2005). "A state-context-property model of concepts and their combinations II: A Hilbert space representation". Kybernetes. 34 (1&2): 192–221. arXiv:quant-ph/0402205. doi:10.1108/03684920510575807. S2CID 13988880. 39. ^ Gabora, L.; Aerts, D. (2002). "Contextualizing concepts using a mathematical generalization of the quantum formalism". Journal of Experimental and Theoretical Artificial Intelligence. 14 (4): 327–358. arXiv:quant-ph/0205161. doi:10.1080/09528130210162253. S2CID 10643452. 40. ^ a b Widdows, D.; Peters, S. (2003). Word Vectors and Quantum Logic: Experiments with negation and disjunction. Eighth Mathematics of Language Conference. pp. 141–154. 41. ^ Bruza, P. D.; Cole, R. J. (2005). "Quantum logic of semantic space: An exploratory investigation of context effects in practical reasoning". In Artemov, S.; Barringer, H.; d'Avila Garcez, A. S.; Lamb, L. C.; Woods, J. (eds.). We Will Show Them: Essays in Honour of Dov Gabbay. College Publications. ISBN 1-904987-11-7. 42. ^ Aerts, D. (2009). "Quantum particles as conceptual entities: A possible explanatory framework for quantum theory". Foundations of Science. 14 (4): 361–411. arXiv:1004.2530. doi:10.1007/s10699-009-9166-y. S2CID 119209842. 43. ^ Aerts, D.; Broekaert, J.; Gabora, L.; Sozzo, S. (2013). "Quantum structure and human thought". Behavioral and Brain Sciences. 36 (3): 274–276. doi:10.1017/S0140525X12002841. PMID 23673022. 44. ^ Aerts, Diederik; Gabora, Liane; Sozzo, Sandro (September 2013). 
"Concepts and Their Dynamics: A Quantum-Theoretic Modeling of Human Thought". Topics in Cognitive Science. 5 (4): 737–772. arXiv:1206.1069. doi:10.1111/tops.12042. PMID 24039114. S2CID 6300002. 45. ^ Aerts, D.; Sozzo, S. (2012). "Quantum structures in cognition: Why and how concepts are entangled". In Song, D.; Melucci, M.; Frommholz, I. (eds.). Quantum Interaction 2011. LNCS. Vol. 7052. Berlin: Springer. pp. 116–127. ISBN 978-3-642-24970-9. 46. ^ Aerts, D.; Sozzo, S. (2014). "Quantum entanglement in concept combinations". International Journal of Theoretical Physics. 53 (10): 3587–3603. arXiv:1302.3831. Bibcode:2014IJTP...53.3587A. doi:10.1007/s10773-013-1946-z. S2CID 17064563. 47. ^ Van Rijsbergen, K. (2004). The Geometry of Information Retrieval. Cambridge University Press. ISBN 0-521-83805-3. 48. ^ Widdows, D. (2006). Geometry and meaning. CSLI Publications. ISBN 1-57586-448-7. 49. ^ Aerts, D.; Czachor, M. (2004). "Quantum aspects of semantic analysis and symbolic artificial intelligence". Journal of Physics A. 37 (12): L123–L132. arXiv:quant-ph/0309022. doi:10.1088/0305-4470/37/12/L01. S2CID 16701954. 50. ^ Sorah, Michael. "Parserless Extraction; Using a Multidimensional Transient State Vector Machine" (PDF). 51. ^ a b Anton Amann: The Gestalt Problem in Quantum Theory: Generation of Molecular Shape by the Environment, Synthese, vol. 97, no. 1 (1993), pp. 125–156, JSTOR 20117832 Further readingEdit • Busemeyer, J. R.; Bruza, P. D. (2012). Quantum models of cognition and decision. Cambridge University Press. ISBN 978-1-107-01199-1. • Busemeyer, J. R.; Wang, Z. (2019). "Primer on quantum cognition". Spanish Journal of Psychology. 22. e53. doi:10.1017/sjp.2019.51. PMID 31868156. S2CID 209446824. • Conte, E. (2012). Advances in application of quantum mechanics in neuroscience and psychology: a Clifford algebraic approach. Nova Science Publishers. ISBN 978-1-61470-325-9. • Ivancevic, V.; Ivancevic, T. (2010). Quantum Neural Computation. Springer. ISBN 978-90-481-3349-9. External linksEdit
Stanford Encyclopedia of Philosophy Religion and Science First published Tue Feb 20, 2007; substantive revision Thu May 27, 2010 Modern western empirical science has surely been the most impressive intellectual development since the 16th century. Religion, of course, has been around for much longer, and is presently flourishing, perhaps as never before. (True, there is the thesis of secularism, according to which science and technology, on the one hand, and religion, on the other, are inversely related: as the former waxes, the latter wanes. Recent resurgences of religion and religious belief in many parts of the world, however, cast considerable doubt on this thesis.) The relation between these two great cultural forces has been tumultuous, many-faceted, and confusing. This entry will concentrate on the relation between science and the theistic religions: Christianity, Judaism, Islam, where theism is the belief that there is an all-powerful, all-knowing perfectly good immaterial person who has created the world, has created human beings ‘in his own image,’ and to whom we owe worship, obedience and allegiance. Most of what follows will also apply to monotheistic and henotheistic varieties of Buddhism and Hinduism. There are many important issues and questions in this neighborhood; this entry concentrates on just a few. Perhaps the most salient question is whether the relation between religion and science is characterized by conflict or by concord. (Of course it is possible that there be both conflict and concord: conflict along certain dimensions, concord along others.) This question will be the central focus of what follows. Other important issues to be considered are the nature of religion, the nature of science, the epistemologies of science and, in particular, of religious belief, and the question how the latter figures into the (alleged or actual) conflict or concord between religion and science. 1. The Nature of Science and the Nature of Religion 1.1 Science The first thing to say, here, is that it is exceedingly difficult to characterize these phenomena. First, consider science: what exactly is science? How can we characterize it? What are the necessary and sufficient conditions for a given inquiry or theory or claim to be scientific, a part of science? This is far from easy to say. Many conditions have been proposed as essential to science. According to Jacques Monod, “The cornerstone of the scientific method is the postulate that nature is objective…. In other words, the systematic denial that ‘true’ knowledge can be got by interpreting nature in terms of final causes …” (Monod 1971, 21, Monod's emphasis). In the 1930s, the eminent German Chemist Walther Nernst claimed that science, by definition, requires an infinite universe; hence Big Bang theory, he said, isn't science (von Weizsäcker 1964, 151). Another proposed constraint: science can't involve moral judgments, or value judgments more generally. Clearly there is an intimate connection between the nature of science and its aim, the conditions under which something is successful science. Some say the aim of science is explanation (whether or not this is put in the service of truth). Some (realists) say the aim of science is to produce true theories; others say the aim of science is to produce empirically adequate theories, whether or not they are true (van Fraassen 1980). 
Some say science can't deal with the subjective, but only with what is public and sharable (and thus reports of consciousness are a better subject for scientific study than consciousness itself). Some say that science can deal only with what is repeatable; others deny this. In the furor over the teaching of “Intelligent Design” (ID) in public schools, some have said that scientific theories must be falsifiable, and, since the proposition that living things (rabbits, say) have been designed by one or more intelligent designers isn't falsifiable, ID isn't science. Others point out that many eminently scientific claims—for example, there are electrons—aren't falsifiable in isolation: what is falsifiable are whole theories about electrons. And while the proposition living things have been designed by an intelligent being is not falsifiable in isolation, the proposition an intelligent being has designed and created 800 lb. rabbits that live in Cleveland is clearly falsifiable (and false). The first group may reply that this proposition about 800 lb. rabbits is really just equivalent to its empirical implications, i.e., to the proposition that there are 800 lb. rabbits that live in Cleveland, so that the bit about the designer really drops out. The second group may then retort that if so, the same must hold for theories about electrons; but then theories about electrons are really just equivalent to their empirical implications, so that electrons drop out. Still others claim that science is constrained by ‘methodological naturalism’ (MN)—the idea that neither the data for a scientific investigation nor a scientific theory can properly refer to supernatural beings (God, angels, demons); thus one couldn't properly propose (as part of science) a theory according to which the recent outbreak of weird and irrational behavior in Washington D.C. is to be accounted for in terms of increased demonic behavior in that neighborhood. How do we know that MN really is an essential constraint on science? Some claim that it is simply a matter of definition; thus Nancey Murphy: “… there is what we might call methodological atheism, which is by definition common to all natural science” (Murphy 2001, 464). She continues: “This is simply the principle that scientific explanations are to be in terms of natural (not supernatural) entities and processes”. Similarly for Michael Ruse: “The Creationists believe that the world started miraculously. But miracles lie outside of science, which by definition deals only with the natural, the repeatable, that which is governed by law” (Ruse 1982, 322). By definition of what? By definition of the term ‘science’ one supposes. But others then ask: what about the Big Bang: if it turns out to be unrepeatable, must we conclude that it can't be studied scientifically? And consider the claim that science, by definition, deals only with that which is governed by law—natural law, one supposes. Some empiricists (in particular, Bas van Fraassen) argue that there aren't any natural laws (but only regularities): if they are right, would it follow that there is nothing at all for science to study? Still further, while some people argue that MN is an essential constraint on science, others dispute this: but can a serious dispute be settled just by citing a definition? Giving plausible necessary and sufficient conditions for science, therefore, is far from trivial; and many philosophers of science have given up on the “demarcation problem,” the problem of proposing such conditions (Laudan 1988). 
Perhaps the best we can do is point to paradigmatic examples of science and paradigmatic examples of non-science. Of course it may be a mistake to suppose that there is just one activity here, and just one aim. The sciences are enormously varied; there is the sort of activity that goes on in highly theoretical branches of physics (for example, investigating what happened during the first 10⁻⁴³ seconds, or trying to figure out how to subject string theory to empirical check). But there is also the sort of project exemplified by an attempt to learn how the population of touconderos has responded to the decimation of the Amazon jungle over the last 25 years. In the first kind of project it may make sense to think that what is desired is an empirically adequate theory, with the question of the truth of the theory at least temporarily bracketed. Not so in cases of the second kind; here nothing but the sober truth will do. Similarly with methodological naturalism. Some scientific projects are clearly constrained by MN (see below); a condition for theoretical adequacy, for them, will certainly be that the account in question is naturalistic. But is MN just part of the very nature of science as such? According to Isaac Newton, often said to be the greatest scientist of all time, the orbits of the planets would decay into chaos without outside intervention; he therefore proposed that God periodically adjusted their orbits. While that hypothesis is one of which we no longer have need, is it clear that its addition to Newton's account of the motions of the planets resulted in something that wasn't science at all? That seems unduly harsh. Perhaps we should think of the concept of science as one of those cluster concepts called to our attention by Thomas Aquinas and Ludwig Wittgenstein. Perhaps there are several quite different activities that go under the name 'science'; these activities are related to each other by similarity and analogy, but there is no one single activity which is just science as such. There are projects for which the criterion of success involves producing true theories; there are others where the criterion of success involves producing theories that are empirically adequate, whether or not they are also true. There are projects constrained by MN; there are other projects that are not so constrained. These projects or activities all fall under the meaning of the term 'science'; but there is no single activity of which all are examples. (In the same way, chess, basketball and poker are all games; but there is no single game of which they are all versions.) Perhaps the best we can do, with respect to characterizing science, is to say that the term 'science' applies to any activity that is (1) a systematic and disciplined enterprise aimed at finding out truth about our world,[1] and (2) has significant empirical involvement. This is of course vague (How systematic? How disciplined? How much empirical involvement?) and perhaps unduly permissive. (Does astrology count as science, even if only bad science?) Still, we do have many excellent examples of science, and excellent examples of non-science. 1.2 Religion If it is difficult to give an account of the nature of science, it is not much easier to say just what a religion is. Of course there are multifarious examples: Christianity, Islam, Judaism, Hinduism, Buddhism and many others. What characteristics are necessary and sufficient for something's being a religion?
How does one distinguish a religion from a way of life, such as Confucianism? That's not easy to say. Not all religions involve belief in something like the almighty and all-knowing, morally perfect God of the theistic religions, or even in any supernatural beings at all. (Of course a substantial majority of them do.) With respect to our present inquiry, what is of special importance is the notion of a religious belief: what does a belief have to be like to be religious? Once more, that's not easy to say. To cite the furor over intelligent design again, some say the proposition that there is an intelligent designer of the living world is religion, not science. But not just any belief involving an intelligent designer, indeed, not just any belief involving God, is automatically religious. According to the New Testament book of James, “the devils believe [that God exists] and tremble”; the devils' beliefs, presumably, aren't religious.[2] Someone might propose theories about an omnipotent, omniscient and wholly good being as a key part of a metaphysical system: belief in such theories need not be religious. And what about a system of beliefs that answers the same great human questions answered by the clear examples of religion: questions about the fundamental nature of the universe and what is most real and basic in it, about the place of human beings in that universe, about whether there is such a thing as sin or an analogue, and if there is, what there is to be done about it, where we must look to improve the human condition, whether human beings survive their deaths and how a rational person should act? Will any system of beliefs that provides answers to those questions count as a religion? Again, not easy to say; probably not. The truth here, perhaps, is that a belief isn't religious just in itself. The property of being religious isn't intrinsic to a belief; it is rather one a belief acquires when it functions in a certain way in the life of a given person or community. To be a religious belief, the belief in question would have to be appropriately connected with characteristically religious attitudes on the part of the believer, such attitudes as worship, love, commitment, awe, and the like. Consider someone who believes that there is such a person as God, all right, because the existence of God helps with several metaphysical problems (for example, the nature of causation, the nature of propositions, properties and sets, and the nature of proper function in creatures that are not human artifacts). However, this person has no inclination to worship or love God, no commitment to try to further God's projects in our world; perhaps, like the devils, he hates God and intentionally does whatever he can to frustrate God's purposes in the world. For such a person, belief that there is such a person as God need not be a religious belief. In this way it's possible that a pair of people share a given belief which functions as a religious belief in the life of only one of them. It is therefore extremely difficult to give (informative) necessary and sufficient conditions for either science or religion. Perhaps for present purposes that is not a really serious problem; we do have many excellent examples of each, and perhaps that will suffice for our inquiry. 2. Epistemology and Science and Religion There are many interesting epistemological questions about science. 
A central topic has been the underdetermination of theory by evidence: evidence for a theory seldom entails the theory, in which case there will be several empirically equivalent theories—theories with the same consequences with respect to experience. Can empirically equivalent theories differ in epistemic status or value? If so, what makes the difference? Here it is common to appeal to the so-called theoretical virtues, such as simplicity, fecundity, beauty and the like. What shall we think of the "pessimistic induction" according to which nearly all past scientific theories have been later rejected; should that reduce our confidence in present scientific theories? How much, if any, of current scientific lore constitutes knowledge? And how far does the scientific method reach? Are there subjects science isn't competent to deal with? Is science more competent to deal with some subjects than others? Scientific modes of procedure seem to have been most successful in the hard sciences; the human sciences seem to lag. Are there differences in epistemic well-foundedness between different sciences, or perhaps between the hard sciences and the softer sciences? Questions of this sort, while of great intrinsic interest, aren't directly relevant to our present inquiry. What is most important to see is that the epistemology of science is really the epistemology of the main human cognitive faculties: memory, perception, rational intuition (logic and mathematics), testimony, perhaps Reid's sympathy, induction, and the like. What is characteristic of science is that these faculties are employed in a particularly disciplined and systematic way, and that there is particular emphasis upon perceptual experience. With respect to religious belief, there are also several sorts of epistemological questions. Are there good arguments for the existence of God? If there aren't, does it matter? Is the existence of evil, in all the horrifying forms it displays, evidence against theistic belief? Does it constitute a defeater for theistic belief? What about the question of pluralism: religion comes in so many kinds—Christianity, Islam, Judaism, Hinduism, Buddhism (with sub-versions of each kind), but also a host of less widely practiced varieties. According to Jean Bodin, "each is refuted by all" (Bodin 1975, 256); does this variety constitute a defeater for each particular variety of religious belief? Some religious doctrines—Trinity, Incarnation, Atonement—are not easy to understand; does that mean they cannot be known or even rationally believed? If religious belief is based on faith rather than on reason, does that mean that it is at best seriously insecure, so that talk of a 'leap of faith', or 'blind faith' is appropriate? These questions have been most fully investigated with respect to Christian belief; hence what follows will concentrate on some questions about the epistemology of Christian belief. For present purposes, perhaps the main epistemological question is this: what is the source of rationality, or warrant, or positive epistemic status, if any, enjoyed by religious belief? Is it of the same sort as that enjoyed by belief in the teachings of current science? Is the evidence, if any, for religious belief of the same sort as that for scientific beliefs? Or is there some special source of positive epistemic status for religious belief? This is really a contemporary version of a question that goes back a long way: the question about the relation between faith and reason.
It is connected with the question whether there are cogent arguments (rational arguments, arguments drawn from the deliverances of reason) for theistic belief, and whether the existence of cogent argument is required for rational acceptance of religious belief. Here there are fundamentally two views. According to 'evidentialism', the source of positive epistemic status for religious belief, if indeed it has such status, is just reason—the ensemble of rational faculties including, preeminently, perception, memory, rational intuition, testimony, and the like. The source of positive epistemic status for religious belief, therefore, is the same as that for scientific belief. This view goes back at least to John Locke (1689) and has prominent contemporary representatives. On this view, the existence of cogent arguments for a religious belief is required for rational acceptance of that belief, or at any rate is intimately related to rational acceptance. Some who endorse this view believe there aren't any such cogent arguments; accordingly they reject religious belief as unfounded and rationally unacceptable (Mackie 1982); others hold that in fact there are excellent arguments for theism and even for specifically Christian belief. Here the most prominent contemporary spokesperson would be Richard Swinburne, whose work over the last 30 years or so has resulted in the most powerful, complete and sophisticated development of natural theology the world has so far seen (see, e.g., Swinburne 1979 [2004] and 1981 [2005]). The other main view, one adopted by, for example, both Thomas Aquinas (Summa Theologiae) and John Calvin (1559), is that belief in God in the first place, and in the distinctive teachings of Christianity in the second, can be rationally accepted even if there are no cogent arguments for them from the deliverances of reason; they have a source of warrant or positive epistemic status independent of the deliverances of reason. This view also has prominent contemporary representation (Alston 1991; Plantinga and Wolterstorff 1984; Plantinga 2000). To use Calvin's terminology, there is the Sensus Divinitatis, which is a source of belief in God, and the Internal Testimony of the Holy Spirit, which is the source of belief in the distinctive doctrines of Christianity. Beliefs produced by these sources go beyond reason in the sense that the source of their warrant is not the deliverances of reason; of course it does not follow that such beliefs are irrational, or contrary to reason; nor does it follow that there is something especially dicey or insecure, or chancy about them, as if faith were necessarily blind or a leap in the dark. Indeed, John Calvin defines faith as "a firm and certain knowledge of God's benevolence towards us …" (Calvin 1559, p. 551, emphasis added). On this view, religion and faith have a source of properly rational belief independent of reason and science; it would therefore be possible for religion and faith to correct as well as be corrected by science and reason. There is some reason to think that if theism is indeed true, if indeed there is an all-powerful, all-knowing perfectly good person who has created the world and created human beings in his image, then religious belief would be independent of arguments from reason; it would not require such argument for rationality or positive epistemic status.
For if theism is true, God would presumably want human beings to know of his presence (and in fact the vast majority of the human population believe in God or something very much like him); he would therefore arrange for human beings to be able to come to knowledge of him. But if knowledge of God depended on the theistic arguments, or other arguments from the deliverances of reason, then, as Aquinas says, only a few human beings would ever come to a knowledge of this truth, and they only after a long time, and with a substantial admixture of error. 3. Conflict and Concord 3.1 Concord Let's begin with concord. The early pioneers and heroes of modern Western science—Copernicus, Galileo, Kepler, Newton, Boyle, and so on—were all serious Christians, if occasionally, as with Newton, Christologically unorthodox. Furthermore, many (Foster 1934, 1935, 1936; Ratzsch 2009) have pointed out that theistic belief and empirical science display a deep concord, fit together neatly. This is in part a result of the doctrines of creation embraced by theistic religions—in particular two aspects of those doctrines. First, there is the thought that God has created the world, and has of course therefore also created human beings. Furthermore, he has created human beings in his own image. Now God, according to theistic belief, is a person: a being who has knowledge, affection (likes and dislikes), and executive will, and who can act on his beliefs in order to achieve his ends. One of the chief features of the divine image in human beings, then, is the ability to form beliefs and to acquire knowledge. As Thomas Aquinas puts it, “Since human beings are said to be in the image of God in virtue of their having a nature that includes an intellect, such a nature is most in the image of God in virtue of being most able to imitate God” (ST Ia q. 93 a. 4). God has therefore created both us and the world, and arranged for the former to know the latter. Thinking of science at the most basic level as the project of acquiring knowledge of ourselves and our world, it is clear, from this perspective, that the doctrine of imago dei underwrites this project. Indeed, the pursuit of science is a clear example of the development and enhancement of the image of God in human beings, both individually and collectively. Second, there is the thought that divine creation is contingent. According to theism, many of God's properties—his omniscience and omnipotence, his goodness and love—are essential to him: he has them in every possible world in which he exists. (And since, according to most theistic thought, he is a necessary being, one that exists in every possible world, he has those properties in every possible world.) Not so, however, with his property of creating. He isn't obliged, by his nature or anything else, to create the world; it is rather a free action on his part. Furthermore, given that he does create, he isn't obliged to do so in any particular way, or to create any particular kinds of things; that he has created the kinds of things we actually find is again contingent, a free action on his part. It is this doctrine of the contingency of divine creation that underwrites the empirical character of modern Western science (Ratzsch, 2009). 
For the realm of the necessary is (for the most part) the realm of a priori knowledge; here we have mathematics and logic and much philosophy.[3] What is contingent, on the other hand, is the domain or realm of a posteriori knowledge,[4] the sort of knowledge produced by perception, memory, and the empirical methods of science. This relationship between the contingency of creation and the importance of the empirical was recognized very early. Thus Roger Cotes, from the preface he wrote to Newton's Principia Mathematica: Without all doubt this world, so diversified with that variety of forms and motions we find in it, could arise from nothing but the perfect free will of God directing and presiding over it. From this fountain it is that those laws, which we call the laws of Nature, have flowed, in which there appear many traces of the most wise contrivance, but not the least shadow of necessity. These therefore we must not seek from uncertain conjectures, but learn them from observations and experiments (Cotes 1953, 132–33) [emphasis added]. What we've just seen is that in a certain way theistic belief supports modern science by licensing or endorsing the whole project of empirical investigation; it is also sometimes claimed that science supports theistic belief. Here there are several arguments, arguments that have historically fallen into two basic types: biological and cosmological. An example of the first type is the argument proposed by Michael Behe (Behe, 1996), according to which some structures at the molecular level exhibit “irreducible complexity.” These systems display several finely matched interacting parts all of which must be present and working properly in order for the system to do what it does; the removal of any part would preclude the thing's functioning. Among the phenomena Behe cites are the bacterial flagellum, the cilia employed by several kinds of cells for locomotion and other functions, blood clotting, the immune system, the transport of materials within cells, and the incredibly complex cascade of biochemical reactions and events that occur in vision. Such irreducibly complex structures and phenomena, he argues, can't have come to be by gradual, step-by-step Darwinian evolution (unguided by the hand of God or any other person); at any rate the probability that they should do so is vanishingly small. They therefore present what he calls a Lilliputian challenge to unguided Darwinism; if he is right, they present it with a Gargantuan challenge as well. Not only do they challenge Darwinism; they are also, he says, obviously designed: their design is about as obvious as an elephant in a living room: “to a person who does not feel obliged to restrict his search to unintelligent causes, the straightforward conclusion is that many biochemical systems were designed” (Behe, p. 193). Others, for example Paul Draper (2002) and Kenneth R. Miller (1999, 130–64), argue that Behe has not proved his case. A second type of argument for theism starts from the apparent fine-tuning of several of the physical parameters. Starting in the late sixties and early seventies, astrophysicists and others noted that several of the basic physical constants must fall within very narrow limits if there is to be the development of intelligent life—at any rate in a way anything like the way in which we think it actually happened. Thus B. J. Carr and M. J. 
Rees: The basic features of galaxies, stars, planets and the everyday world are essentially determined by a few microphysical constants and by the effects of gravitation…. Several aspects of our Universe—some of which seem to be prerequisites for the evolution of any form of life—depend rather delicately on apparent 'coincidences' among the physical constants (Carr and Rees 1979, 605). For example, if the force of gravity were even slightly stronger, all stars would be blue giants; if even slightly weaker, all would be red dwarfs; in neither case could life have developed (Carter 1979, 72). The same goes for the weak and strong nuclear forces; if either had been even slightly different, life, at any rate life of the sort we have, could probably not have developed. Apparently life is possible only because the universe is expanding at just the rate required to avoid recollapse. At an earlier time, the fine-tuning had to be even more remarkable: … we know that there has to have been a very close balance between the competing effect of explosive expansion and gravitational contraction which, at the very earliest epoch about which we can even pretend to speak (called the Planck time, 10⁻⁴³ sec. after the big bang), would have corresponded to the incredible degree of accuracy represented by a deviation in their ratio from unity by only one part in 10 to the sixtieth (Polkinghorne 1989, 22). Other examples: the value of the cosmological constant, the vacuum expectation value of the Higgs field, and the ratio of the mass of the proton to that of the electron must all be fine-tuned to an incredible degree for the universe to be life-permitting (Barr 2003, 123–130). A particularly informed and technically detailed account of some of these fine-tunings is to be found in Robin Collins's "Evidence for Fine-Tuning" (Collins 2003). Many see these apparent enormous coincidences as substantiating the theistic claim that the universe has been created by a personal God who intends that there be life and indeed intelligent life; they take fine-tuning as offering the material for a properly restrained theistic argument. These arguments come in several versions; perhaps the most successful argue that the epistemic probability of these fine-tuning phenomena on theism is much greater than their epistemic probability on the atheistic chance hypothesis. Here the conclusion is not (as such) that probably theism is true, but rather that theism is much better supported by these phenomena than the chance hypothesis is (Swinburne 2003; Collins 1999). (A toy numerical rendering of this comparative form is sketched below.) Objections come in many varieties. Some who offer these arguments, in particular those associated with the so-called 'Intelligent Design' movement, take them to be contributions to science rather than philosophy or theology; the most common objection is that they don't meet the conditions for being science, in particular because their conclusion, that the universe has been designed by an intelligent being, isn't falsifiable. Others (as we saw above) reply that falsifiability is ordinarily not a property of individual propositions, but of entire theories, and that theories involving intelligent design can perfectly well be falsifiable.
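The comparative form of the argument just mentioned can be rendered as a toy calculation. Every number in the following Python sketch is an illustrative placeholder, not an estimate defended by Swinburne, Collins, or anyone else in the fine-tuning literature; the sketch only shows what kind of quantity these restrained versions of the argument deliver.

    # Illustrative placeholder likelihoods (assumptions made up for this sketch):
    # how probable is a life-permitting universe on each hypothesis?
    p_life_given_theism = 0.1
    p_life_given_chance = 1e-60   # echoing the "one part in 10 to the sixtieth" figure

    bayes_factor = p_life_given_theism / p_life_given_chance
    print(f"Likelihood ratio favoring theism over chance: {bayes_factor:.1e}")  # 1.0e+59

    # The ratio measures how strongly the evidence favors one hypothesis over
    # the other; turning it into a posterior probability requires prior odds,
    # which the restrained versions of the argument deliberately do not supply.
    prior_odds = 1e-6             # an arbitrarily skeptical prior (also illustrative)
    posterior_odds = prior_odds * bayes_factor
    print(f"Posterior odds under that prior: {posterior_odds:.1e}")             # 1.0e+53

This is why the conclusion above is phrased as "much better supported than the chance hypothesis" rather than "probably true": the likelihood ratio by itself settles only the comparative claim.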
A more interesting objection to fine-tuning arguments is the “many universe” suggestion: perhaps there are very many, even infinitely many different universes or worlds; the cosmological constants take on different values in different worlds, so that very many (perhaps all possible) different sets of such values get exemplified in one world or another. Couldn't there be an eternal cycle of ‘big bangs’, with subsequent expansion to a certain limit and then subsequent contraction to a ‘big crunch’ at which the cosmological values are arbitrarily reset? (Dennett 1995, 179) Alternatively, couldn't it have been that at the Big Bang, there was enormous initial inflation, resulting in many cosmoi with many different settings for the physical constants? In either case it isn't at all surprising that in one or another of the resulting universes, the values of the cosmological constants are such as to be life-permitting. Nor is it at all surprising that the universe in which we find ourselves has life-permitting values; we couldn't exist elsewhere. If so, then the fine-tuning argument is ineffective: the probability of fine-tuning on the many worlds suggestion together with atheism is at least as large as the probability of fine-tuning on theism. There are responses (for example, that on this account there would have to be a universe generator which was itself fine-tuned (Collins 1999), or that even if it is likely that some universe be fine-tuned, nevertheless the likelihood that this universe be fine-tuned is unaffected by the pluriverse suggestion (White 2003)) and responses to the responses, and so on; not surprisingly, there is no consensus as to whether these fine-tuning arguments are successful. 3.2 Conflict? The Christian doctrine of creation supports a deep concord between Christian belief and science; yet it is of course compatible with this sort of concord that there also be conflict. Many have claimed that there is conflict, indeed warfare, between religion and science (Draper 1875) (White 1895). This is certainly too strong; but obviously the relation between the two has not always been smooth and irenic. There is the famous Galileo incident, often portrayed as a contest between the Catholic hierarchy, representing the forces of repression and tradition, the voice of the old world, the dead hand of the past, and, on the other hand, the forces of progress and the dulcet voice of reason and science. This way of looking at the matter is simplistic (Brooke 1991, 8–9); much more was involved. The dominant Aristotelian thought of the day was heavily a prioristic; hence part of what was involved was a dispute about the relative importance of observation and a priori thought in astronomy. Also involved were questions about what the Christian (and Jewish) Bible teaches in this area: does a passage like Joshua 10:12–15 (in which Joshua commanded the sun to stand still) favor the Ptolemaic system over the Copernican? And of course the usual questions of power and authority were also present.[5] More recently, a central locus of alleged conflict has been the theory of evolution. This particular flap is of course still very much with us. Many Christian fundamentalists accept a literal interpretation of the creation account in the first two chapters of Genesis; they therefore find incompatibility between the contemporary Darwinian evolutionary accounts of our origins and the Christian faith, at least as they understand it. Many Darwinian fundamentalists (as the late Stephen J. 
Gould called them) second that motion: they too claim there is conflict between Darwinian evolution and classical Christian or theistic belief. Contemporaries who champion this conflict view would include, for example, Richard Dawkins (1986, 2003) and Daniel Dennett (1995). An important part of the alleged conflict turns on the Christian belief that human beings and other creatures have been designed—designed by God; according to evolution, however (so say Dawkins and Dennett), human beings have not been designed, but are a product of the unguided blind process of natural selection operating on some such source of genetic variation as random genetic mutation. Others point out that this proposed conflict is far from obvious. The central feature of the modern doctrine of evolution is that the main driving force of the process is natural selection, winnowing some form of genetic variation, the most popular version being random genetic mutation. It is no part of the theory to say that these mutations occur just by chance in a sense of that term that implies that they are uncaused; they are random only in the sense that they do not arise from the design plan of the creatures to which they accrue, and do not occur because they enhance the organism's reproductive fitness. Thus Ernst Mayr, the dean of post-World War II biology: "When it is said that mutation or variation is random, the statement simply means that there is no correlation between the production of new genotypes and the adaptational needs of an organism in the given environment" (Mayr 1998, 98). If so, evolution, as currently stated and currently understood, is perfectly compatible with God's orchestrating and overseeing the whole process; indeed, it is perfectly compatible with that theory that God causes the random genetic mutations that are winnowed by natural selection. Those who claim that evolution shows that humankind and other living things have not been designed, so say their opponents, confuse a naturalistic gloss on the scientific theory with the theory itself. The claim that evolution demonstrates that human beings and other living creatures have not, contrary to appearances, been designed, is not part of or a consequence of the scientific theory, but a metaphysical or theological add-on (van Inwagen 2003).[6] A second area of alleged conflict has to do with divine action in the world. According to classical theistic religion, God has created the world; he also upholds and conserves it, preserves it in being. Apart from his conserving activity, the world would disappear like a candle flame in a high wind. So there is creation and conservation; but, so say the classical theistic religions, there is also special divine action, action going beyond creation and conservation. There are the miracles reported in both the Jewish and Christian Bibles: the parting of the Red Sea, for example, as well as Jesus's walking on water, feeding the 5,000, and rising from the dead. Miracles are also reported in the Koran. Many believers don't think of these special divine actions as restricted to Bible times: God still, at present, responds to prayers and accomplishes miraculous healings.
Further, according to Christian ways of thought, God works in the hearts and minds of his children in such a way as to produce faith; Thomas Aquinas called this divine activity 'the internal instigation of the Holy Spirit' and John Calvin called it 'the internal witness (or testimony) of the Holy Spirit.' All of these would be examples of special divine action. Now many see here conflict with modern science. Among them are a large number of theologians; thus according to Langdon Gilkey, … contemporary theology does not expect, nor does it speak of, wondrous divine events on the surface of natural and historical life. The causal nexus in space and time which the Enlightenment science and philosophy introduced into the Western mind … is also assumed by modern theologians and scholars; since they participate in the modern world of science both intellectually and existentially, they can scarcely do anything else. Now this assumption of a causal order among phenomenal events, and therefore of the authority of the scientific interpretation of observable events, makes a great difference to the validity one assigns to biblical narratives and so to the way one understands their meaning. Suddenly a vast panoply of divine deeds and events recorded in scripture are no longer regarded as having actually happened… Whatever the Hebrews believed, we believe that the biblical people lived in the same causal continuum of space and time in which we live, and so one in which no divine wonders transpired and no divine voices were heard. (Gilkey 1983, 31) Of course many philosophers and scientists would agree. The problem is alleged to be with God's special action in the world; there is no particular problem with creation and conservation, but divine action going beyond that is widely thought to be incompatible with modern science. Where exactly is this incompatibility thought to arise? The thought seems to be that special divine activity would be incompatible with the laws of nature as disclosed by science. The distinguished biologist H. Allen Orr makes a similar claim. Now Gilkey and the others are apparently thinking in terms of a Newtonian world-picture, according to which the universe is like a great machine proceeding according to the laws disclosed in science. This isn't sufficient for the hands-off, anti-interventionist theology of these theologians. After all, Newton himself, one hopes, accepted the Newtonian world-picture, and Newton proposed that God periodically adjusted the planetary orbits, which according to his calculations would otherwise gradually go awry. What Gilkey and his friends add, here, apparently, is determinism: the thought that the laws of nature together with the state of the universe at any time entail the state of the universe at any other time. Here the classical source is Pierre Laplace: We ought then to regard the present state of the universe as the effect of its previous state and as the cause of the one which is to follow. Given for one instant a mind which could comprehend all the forces by which nature is animated and the respective situation of the beings that compose it—a mind sufficiently vast to subject these data to analysis—it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes. (Laplace 1796) It is the Laplacian world-picture that apparently animates Gilkey et al.
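Laplace's picture can be illustrated with a toy computation. The single harmonic oscillator below is a stand-in assumption for "all the forces by which nature is animated"; the point is that, given the law and one complete instantaneous state, both the future and the past of the system are computable.

    def evolve(x, v, dt, steps, k=1.0, m=1.0):
        # Time-reversible (leapfrog) integration of the toy law m*x'' = -k*x.
        for _ in range(steps):
            v += 0.5 * dt * (-k * x / m)
            x += dt * v
            v += 0.5 * dt * (-k * x / m)
        return x, v

    x0, v0 = 1.0, 0.0                                # the complete present state
    xf, vf = evolve(x0, v0, dt=1e-3, steps=10_000)   # "the future ... present to its eyes"
    xp, vp = evolve(xf, vf, dt=-1e-3, steps=10_000)  # run the law backward: "as the past"
    print(xp, vp)                                    # recovers (1.0, 0.0) up to rounding

Note what the sketch quietly assumes: nothing outside the system ever touches it. Whether the actual universe satisfies that closure assumption is precisely what is disputed next.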
It is worth noting, however, that determinism and the Laplacian world-picture don't follow from classical science. That is because the great conservation laws deduced from Newton's Laws are stated for closed or isolated systems. Thus Sears and Zemansky (1963): The principle of conservation of energy states that the internal energy of an isolated system remains constant. This is the most general statement of the principle of conservation of energy. (p. 415) Newton's laws (as well as Maxwell's later physics of electricity and magnetism) apply to isolated or closed systems; they describe how the world works provided that the world is a closed (isolated) system, subject to no outside causal influence. But it is no part of Newtonian mechanics or classical science generally to declare that the material universe is indeed a closed system. (How could a thing like that be experimentally verified?) Hence there is nothing in classical science (at least in this area) incompatible with God's changing the velocity or direction of a particle, or a whole system of particles (or, for that matter, creating ex nihilo a full-grown horse). Energy, momentum and the like are conserved in a closed system; but the claim that the material universe is in fact a closed system is not part of classical physics; it is another metaphysical or theological add-on. So here there is no conflict between classical physics and special divine action in the world. This classical, Laplacian picture has of course been superseded by the development of quantum mechanics, beginning in the first couple of decades of the 20th century. According to quantum mechanics, associated with any physical system, a system of particles, for example, there is a wave function whose evolution through time is governed by the Schrödinger equation for that system. Now the interesting thing about quantum mechanics is that, unlike classical mechanics, it doesn't specify or predict a single configuration for this system of particles at a future time t. The wave function assigns a value at t to each of the configurations possibly resulting from the initial conditions; by applying Born's Rule to those values we get an assignment of probabilities to each of those possible configurations at t. Accordingly, we aren't told which configuration will in fact result (given the initial conditions) when the system is measured at t; instead we are given a distribution of probabilities for the many possible outcomes. Clearly miracles (parting the waters, rising from the dead, etc.) are not incompatible with these assignments. (No doubt such events would be assigned very low probabilities; but of course we don't need quantum mechanics to know that such events are improbable.) Further, on collapse interpretations such as those of Ghirardi, Rimini, and Weber, there is plenty of room for divine activity. Indeed, God could actually be the cause of the collapses, and of the way in which they occur (i.e., where P is the possibility that gets actualized at t, it could be that God causes P to be actualized then). (This could perhaps be seen as a halfway house between occasionalism and secondary causation.) With the advent of quantum mechanics, therefore, there seems to be even less reason to see special divine action in the world as somehow incompatible with science.
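The Born-rule bookkeeping just described can be illustrated in a few lines of Python. The complex amplitudes below are invented for the illustration; in a real case they would come from solving the Schrödinger equation for the system's wave function. The point is that quantum mechanics hands us a probability distribution over possible configurations rather than a single predicted configuration, and that even wildly improbable configurations receive nonzero probability.

    import numpy as np

    # Invented amplitudes for four possible configurations of a toy system at t.
    amplitudes = np.array([0.9 + 0.1j, 0.3 - 0.2j, 0.05j, 1e-15 + 0j])
    amplitudes = amplitudes / np.linalg.norm(amplitudes)  # normalize the state

    probabilities = np.abs(amplitudes) ** 2               # Born's Rule
    print(probabilities)        # a distribution over the four configurations
    print(probabilities.sum())  # sums to 1 (up to rounding)

    # No configuration with a nonzero amplitude gets probability zero; the last,
    # ludicrously improbable one illustrates the point in the text: such outcomes
    # are assigned very low probability, not impossibility.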
Nevertheless, many who are entirely aware of the quantum mechanical revolution still find a problem with special divine action. For example, there is the "Divine Action Project" (Wildman 1988–2003, 31–75), a 15-year series of conferences and publications that began in 1988. So far these conferences have resulted in some six books of essays involving 50 or more authors from various fields of science together with philosophers and theologians, including many of the most prominent writers in the field. Most of these authors find a problem with special divine action. That is because they believe that a satisfactory account of God's action in the world would have to be noninterventionist, as Wildman says. Thus Arthur Peacocke, commenting on a certain proposal for divine action: God would have to be conceived of as actually manipulating micro-events (at the atomic, molecular, and according to some, quantum levels) in these initiating fluctuations on the natural world in order to produce the results at the macroscopic level which God wills. But such a conception of God's action … would then be no different in principle from that of God intervening in the order of nature with all the problems that that evokes for a rationally coherent belief in God as the creator of that order. (Peacocke 2004) Apparently, then, the project is to develop a conception of special divine action (action beyond creation and conservation) that doesn't involve intervention. But what would intervention be in the quantum mechanical picture? That's not easy to say. Indeed, it's not easy to see how intervention could be distinct from divine action beyond creation and conservation. If they aren't distinct, however, special divine action would just be intervention, in which case the project of developing a conception of special divine action that doesn't involve intervention is unhopeful. Still a third area of alleged conflict between religious belief and science has to do with the different epistemic attitudes associated with each; John Worrall, for example, presses this charge. In science, the dominant epistemic attitude (so the claim goes) is one of critical empirical investigation, issuing in theories which are held tentatively and provisionally; one is always prepared to give up a theory in favor of a more satisfactory successor. In religious (e.g., Christian) belief, the epistemic attitude of faith plays an important role, an attitude which differs both in the source of the belief in question, and in the readiness to give it up. Others (Ratzsch 2004), however, point out that there isn't obviously a conflict here. Clearly those two attitudes are indeed different, and perhaps they can't be taken simultaneously with respect to the same proposition. Does that show a conflict between science and religious belief? Perhaps some ways of forming belief are appropriate in one area and others in other areas. To get a conflict, we must add that the scientific epistemic attitude is the only one appropriate to any area of cognitive endeavor. That claim, however, is not itself part of the scientific attitude; it is an epistemological declaration for which substantial argument is required (but not so far in evidence). Furthermore, scientists themselves don't seem to take the scientific epistemic attitude (as characterized above) to all of what they believe, or even all of what they believe as scientists. Thus it is common for scientists to believe that there has been a past, and indeed they sometimes tell us how long ago the earth, or our galaxy, or even the entire universe, was formed.
Scientists seldom hold this belief—that there has been a past—as a result of empirical investigation; nor do they ordinarily hold it in that tentative, critical way, always looking for a better alternative. In these areas, therefore, it is hard to find conflict between theistic religious belief and contemporary science.

4. Where There Is Conflict

Other areas of science, however, do appear to produce conflict. First, there is the relatively new but rapidly growing discipline of evolutionary psychology. The heart and soul of this project is the effort to explain distinctive human traits—our art, humor, play, love, poetry, sense of adventure, love of stories, our music, our morality, and our religion—in terms of our evolutionary origin and history. And here we do find theories incompatible with religious belief. One important topic in this area has been altruistic behavior—behavior that promotes the reproductive fitness of someone else at the expense of the altruist's own reproductive fitness. How is it that there are people like missionaries and Mother Teresa, people who devote their entire lives to the service of others, paying little attention to their own reproductive prospects? Herbert Simon attempts to explain altruism from an evolutionary perspective in terms of two mechanisms, docility and limited rationality: Docile persons tend to learn and believe what they perceive others in the society want them to learn and believe. Thus the content of what is learned will not be fully screened for its contribution to personal fitness. Because of bounded rationality, the docile individual will often be unable to distinguish socially prescribed behavior that contributes to fitness from altruistic behavior [i.e., socially prescribed behavior that does not contribute to fitness]. In fact, docility will reduce the inclination to evaluate independently the contributions of behavior to fitness. … By virtue of bounded rationality, the docile person cannot acquire the personally advantageous learning that provides the increment, d, of fitness without acquiring also the altruistic behaviors that cost the decrement (Simon 1990, 3, 4). Simon's theory is carefully worked out, well developed, and of considerable interest; it is also incompatible with theistic religious belief. According to his theory, the explanation of the altruist's behavior is failure to see that the behavior in question compromises evolutionary fitness. Hence, according to Simon's theory, the answer to the question ‘Why did Mother Teresa behave in such a way as to compromise her evolutionary fitness?’ is ‘Due to bounded rationality, she was unable to see that her mode of behavior would compromise her fitness.’ From a Christian perspective, that's not at all the right answer, which would rather be something like ‘She wanted to follow the example of Jesus and do what she could do to help the poor and sick.’ Another example from this area is provided by the many theories of religion and religious belief. According to some of these theories, religious belief is false but adaptive; according to others it is false and maladaptive. An example of the first group would be the theory proposed by David Sloan Wilson, who says that religion is a group adaptation: “Many features of religion, such as the nature of supernatural agents and their relationships with humans can be explained as adaptations designed to enable human groups to function as adaptive units” (Wilson 2002, p.
51). Religious belief, he says, is fictitious, but adaptive at the group level: it promotes cooperation, mutual respect, and solidarity, thus enabling the group to do well in competition with other groups. That religious belief can function as a group adaptation is of course consistent with theistic belief; what about the bit about religious belief's—theistic belief, for example—being fictitious? How could the claim that there is no such person as God be part of empirical science? And even if it could be, Wilson's theory, one thinks, would be on more solid ground if that easily detachable theological add-on were detached. What is not so easily detachable is the claim that religious belief (unlike memory, perceptual beliefs, rational intuition) is produced by cognitive faculties or processes that are not aimed at the production of true belief. According to Wilson, these processes or faculties have a function conferred on them by evolution; but it is not that of producing true beliefs. It is rather the function of producing beliefs that promote cooperation and solidarity; ultimately their function is to produce beliefs that are adaptive, i.e., promote reproductive fitness. Here a comparison with Sigmund Freud's views of theistic belief may be illuminating. Freud claims that theistic belief is illusion. This doesn't mean that theistic belief is false (although Freud thinks it is false); what it means is that theistic belief is produced by a cognitive process (wishful thinking) that is not ‘reality oriented’; its purpose is not the production of true belief, but (in this case) a belief that enables the believer to avoid the depression and apathy that would set in if she saw clearly the miserably appalling condition in which we human beings actually find ourselves. Wilson's view is like Freud's, then, in that he too proposes that theistic belief is produced by cognitive faculties that are not reality oriented. Whereas Freud takes a dim view of theistic belief, Wilson is much more appreciative: In the first place, much religious belief is not detached from reality …. Rather, it is intimately connected to reality by motivating behaviors that are adaptive in the real world—an awesome achievement when we appreciate the complexity that is required to become connected in this practical sense. … Adaptation is the gold standard against which rationality must be judged, along with all other forms of thought. Evolutionary biologists should be especially quick to grasp this point because they appreciate that the well-adapted mind is ultimately an organ of survival and reproduction (Wilson 2002, p. 228). Although Wilson has kind words for religion, his claim that religious belief is not aimed at the truth is incompatible with theistic religious belief. According to Christianity, for example, faith, including belief in the essentials of the Christian faith, is a divine gift; and the process producing it in the believer (the internal instigation of the Holy Spirit, according to Thomas Aquinas, the internal witness or testimony of the Holy Spirit, according to John Calvin) is indeed aimed at the truth and has as its function the production of true belief. So here there is conflict between science and religion. What accounts for this conflict? Several things, no doubt; but part of the explanation is to be found in methodological naturalism, a widely accepted constraint on science. 
According to methodological naturalism (MN), in doing science one must proceed “as if God is not given”, to use the words of Hugo Grotius. Exactly what does that mean? There are various suggestions; here is one. According to MN, (1) the data set (data model) for a proper scientific theory can't refer to God or other supernatural agents (angels, demons), or employ what one knows or thinks one knows by way of (divine) revelation. Thus the data for a theory wouldn't include, for example, the proposition that there has recently been an outbreak of demon possession in Washington, D. C. (2) A proper scientific theory can't refer to God or any other supernatural agents, or employ what one knows or thinks one knows by way of revelation. So if the data model contained the proposition that there has been an outbreak of weird and irrational behavior in Washington, one couldn't properly propose a theory involving demon possession to explain it. (3) Note first that the probability or plausibility of theory candidates and their capacity to explain the data, as well as their empirical implications, is always relative to an array of background information or an epistemic base. The third constraint, then, is that the epistemic base of a proper scientific theory can't include propositions obviously entailing[7] the existence of God or other supernatural agents, or propositions one knows or thinks one knows by way of revelation. So consider someone who in fact accepts the main lines of one of the theistic religions, and works in the area of evolutionary psychology. No doubt she will honor MN as a constraint on her scientific activity. If so, for scientific purposes she will eliminate from her evidence base propositions obviously entailing the existence of God or other supernatural beings, as well as what she knows or thinks she knows by way of faith or revelation. But then she might very well come up with theories of the kind we've been pointing to, theories incompatible with theistic religion. A rather different area with the same dialectic: historical biblical criticism (HBC). HBC is to be contrasted with traditional biblical commentary. The practitioner of the latter assumes that the bible is the word of God, and tries to lay bare the meaning of what is taught in various parts of the bible. The practitioner of HBC, on the other hand, specifically brackets the belief that the bible is divine revelation, and intends instead to study it scientifically. Thus the late Raymond Brown, a highly respected Catholic scripture scholar, believes that HBC is “scientific biblical criticism” (Brown 1973, p. 6); it yields “factual results” (p. 9); he intends his own contributions to be “scientifically respectable” (p. 11); and practitioners of HBC investigate the scriptures with “scientific exactitude” (pp. 18–19); see also Meier 1991, p. 6. To study the bible scientifically, therefore, is to study it in a way constrained by MN. (See also Sanders 1985, p. 5; Levenson 1993, p. 109; and Lindars 1986, p. 91). Naturally enough, there has been considerable tension between HBC, so construed, and traditional Christians, going back at least as far as David Strauss in 1835: “Nay, if we would be candid with ourselves, that which was once sacred history for the Christian believer is, for the enlightened portion of our contemporaries, only fable.” As for contemporary tensions, according to Luke Timothy Johnson: The Historical Jesus researchers insist that the ‘real Jesus’ must be found in the facts of his life before his death.
The resurrection is, when considered at all, seen in terms of visionary experience, or as a continuation of an ‘empowerment’ that began before Jesus's death. Whether made explicit or not, the operative premise is that there is no ‘real Jesus’ after his death (Johnson 1997, p. 144). And according to Van Harvey, “So far as the biblical historian is concerned, … there is scarcely a popularly held traditional belief about Jesus that is not regarded with considerable skepticism” (Harvey 1986, p. 193). An absolutely central characteristic of HBC is this effort to be scientific. Of course we might ask whether HBC, or any historical study, is really science; its advocates say that it is, but are they right? In view of the difficulty of the demarcation problem, however, it is probably unwise to transform this question into an objection. (Further, even if historical studies of this kind are not precisely science, they are certainly very much like science.) And insofar as HBC requires conformity to MN, one who practices it brackets or suspends or sets aside any theological views, or what is known by revelation.[8] Just as with evolutionary psychology, therefore, one who works at HBC might in fact accept theistic religion of one sort or another, but in his work as a practitioner of HBC, come to conclusions incompatible with his religious belief. So far, therefore, there is the same dialectic here as with evolutionary psychology: theories incompatible with theistic religion arising (at least in part) out of MN. In at least these two areas, therefore, there is conflict between scientific theories and religious belief. In a certain very important respect, however, this conflict is superficial. That is because the theories and claims of evolutionary psychology and HBC need not constitute defeaters, even partial defeaters,[9] for those elements of religious belief with which they are incompatible—even though theism is committed to taking science with great seriousness and even if it is conceded that the theories in question constitute good science. And that is precisely because MN is taken as constraining scientific activity. We can see this as follows. As already suggested, scientific investigation or inquiry is always conducted against the background of an evidence base, a body of background knowledge or belief. An important part of MN, furthermore, is that this evidence base must not contain propositions obviously entailing the existence of supernatural beings, or propositions that are accepted by way of faith. It follows that the evidence base of an adherent of a theistic religion will contain the scientific evidence base as a proper part; it will include all the propositions to be found in the scientific evidence base, plus more—perhaps those specific to Christian belief. Now suppose a given theory—Simon's theory on altruism, or Wilson's on religion, or some minimalist account of Jesus's life and activity—is in fact proper science, and is indeed the most plausible, scientifically most satisfactory theoretical response to the evidence, given EBS, the scientific evidence base. This means that from the point of view of EBS together with current evidence, that theory is the scientifically best or most plausible result. Still, that doesn't automatically give a believer a defeater for those of her beliefs with which the theory is incompatible. That is because EBS is only part of her evidence base.
And it can easily happen that a proposition P is the plausible response, given a part of my evidence base (together with the current evidence), that P is incompatible with one of my beliefs, and that P fails to provide me with a defeater for that belief. For example, suppose I tell you that I saw you at the mall yesterday afternoon. Then with respect to part of your total evidence base—a part that includes your knowledge that I told you I saw you there, together with your knowledge that I have decent vision and am ordinarily reliable, and the like—the right thing to think is that you were at the mall. Nevertheless, we may suppose, you know perfectly well that you weren't there; you remember that you were home all afternoon thinking about methodological naturalism. Here the right thing to think from the perspective of a proper part of your evidence base is that you were at the mall; but this does not give you a defeater for your belief that you were not there. Another example: we can imagine a renegade group of whimsical physicists proposing to reconstruct physics, refusing to use memory beliefs, or if that is too fantastic, memories of anything more than 1 minute ago. Perhaps something could be done along these lines, but it would be a poor, paltry, truncated, trifling thing. And now suppose that the best theory, from this limited evidence base, is inconsistent with general relativity. Should that give pause to the more traditional physicists who employ what they know by way of memory as well as what the renegade physicists use? I should think not. This truncated physics could hardly call into question physics of the fuller variety, and the fact that from a proper part of the scientific evidence base, something inconsistent with general relativity is the best theory—that fact would hardly give more traditional physicists a defeater for general relativity. Similarly for the case under question. The traditional Christian thinks she knows by faith that Jesus was divine and that he rose from the dead. But then she need not be moved by the fact that these propositions are not especially probable on the evidence base to which HBC limits itself—i.e., one constrained by MN and therefore one that deletes any knowledge or belief dependent upon faith. The findings of HBC, if findings they are, need not give her a defeater for those of her beliefs with which they are incompatible. The point is not that HBC, evolutionary psychology and other scientific theorizing couldn't in principle produce defeaters for Christian belief;[10] the point is only that its coming up with theories incompatible with Christian belief doesn't automatically produce such a defeater. Everything depends on the particular evidence adduced in the case in question, and the bearing of that evidence given the believer's total evidence base. In the case in question, for example, it may be that given EBS and the relevant data base, it is unlikely that Jesus arose from the dead. But given an evidence base including not only EBS but also belief in God together with the specifically Christian beliefs that Jesus is the second person of the trinity incarnate, and that the New Testament is a reliable source of information on these matters—given these things, the proposition that he rose from the dead may not be at all improbable. Similar considerations would hold, of course, for the other theistic religions and proposed scientific defeaters. 
Someone might complain that this looks like a recipe for intellectual irresponsibility, for hanging on to beliefs in the teeth of the evidence. Can't a believer always say something like this, no matter what proposed defeater presents itself? “Perhaps B (the proposed defeatee) is improbable or unlikely with respect to part of what I believe,” she says, “but it is certainly not improbable with respect to the totality of what I believe, that totality including, of course, B itself.” Obviously that can't be right; if it were, every putative defeater could be turned aside in this way and defeat would be impossible. But defeat is not impossible; it sometimes happens that one does acquire a defeater for a belief B, by learning that B is improbable with respect to some proper subset of one's evidence base. According to the book of Isaiah (41:9), God says “I took you from the ends of the earth, from its farthest corners I called you. I said, ‘You are my servant’; I have chosen you and have not rejected you.” Someone might believe R, the proposition that the earth is a rectangular solid with ends and corners, on the basis of this text; she will have a defeater for this belief when confronted with the scientific evidence—photographs of the earth from space, for example—against it. At any rate she will have a defeater for R if the rest of her noetic structure is at all like ours. The same goes for someone who holds pre-Copernican beliefs on the basis of such a text as “The earth stands fast; it shall not be moved” (Psalm 104:5). Why is there a defeater in some cases, but not in others? What makes the difference? Here is a suggestion. Consider some religious belief B incompatible with a deliverance of some current scientific theory: B might be, for example, the belief that Mother Teresa was perfectly rational in behaving in that altruistic fashion. Let the scientific theory in question be Herbert Simon's account of altruism, and let EBS be the believer's evidence base. Our question is whether A, the belief that Simon's theory is proper science (and that it entails the denial of B), is a defeater for B. Add A to S's evidence base; and now the right question, perhaps, is this: is B epistemically improbable or unlikely with respect to the conjunction of A with EBS? Of course B itself might initially be a member of EBS, in which case it will certainly not be improbable with respect to it. If that were sufficient for A's not being a defeater of B, however, no member of the evidence base could ever be defeated by a new discovery; and that can't be right. So let's delete B from EBS. Call the result of deleting B from S's evidence base ‘EBS reduced with respect to B’ — ‘EBS-B’ for short.[11] And now the suggestion — call it ‘the reduction test for defeat’ — is that A is a defeater for B just if B is appropriately improbable with respect to the conjunction of A with EBS-B. Suppose we apply this test to the belief B that Mother Teresa was rational in behaving altruistically, with A being the belief that Simon's theory of altruism is good science and is incompatible with B; and let's suppose that S is a Christian believer. To apply the reduction test, we must ask whether B is improbable with respect to the conjunction of A with EBS-B. The answer, I should think, is that B is not improbable with respect to that conjunction.
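Stated compactly, in the notation just introduced, the test says that A is a defeater for B (for S) just if $P(B \mid A \,\&\, \mathrm{EBS}\text{-}B)$ is appropriately low; and in the present case, as just argued, that conditional probability is not low.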
For EBS-B includes the empirical evidence, whatever exactly it is, appealed to by Simon, but also the proposition that we human beings have been created by God and created in his image, along with the rest of the main lines of the Christian story. With respect to the conjunction of A with that body of propositions, it is not likely that if Mother Teresa had been more rational, smarter, she would have acted so as to increase her reproductive fitness rather than live altruistically. Hence, on the proposed reduction test, the fact that Simon's theory is good science and is more likely than not with respect to the scientific evidence base—that fact does not give S a defeater for what she thinks about Mother Teresa. Consider, on the other hand, the belief B* that the earth has corners and edges and the photographic evidence against that belief: here, plausibly, the reduction test gives the result that the latter is a defeater for B*. (True: a Christian might think that the Bible is infallible, since God is its ultimate author; but of course that leaves open the question what God intends to teach in the passage in question.) So the reduction test gives sensible results in these two cases. It can't be right in general, however—more exactly, it is right in general only on a certain very important assumption the believer is likely to reject. For it might be, clearly enough, that B has a lot of warrant on its own, warrant it doesn't get from the other members of EBS or indeed any other propositions. B may be basic with respect to warrant; B might get warrant from a source different from any involved in the scientific theory with which it is incompatible. If so, the fact that B is unlikely with respect to EBS-B doesn't show that S has a defeater for B in the fact that B is unlikely with respect to EBS-B together with the relevant A. By way of illustrative example: you are on trial for some crime; the evidence against you is strong, and you are convicted. Nevertheless, you remember very clearly that at the time the crime occurred, you were on a solitary walk in the woods. Your belief that you were walking in the woods isn't based on argument or inference from other propositions. (You don't note, e.g., that you feel a little tired and that your walking shoes are muddy, and that there is a map of the area in your parka pocket, concluding that the best explanation of these phenomena is that you were walking there.) So consider EByou-P, your evidence base diminished with respect to P, the proposition that you didn't commit the crime and were walking in the woods when it was committed. With respect to EByou-P, P is epistemically improbable; after all, you have the same evidence as the jury for ¬P, and the jury is quite properly (if mistakenly) convinced that you did the crime. Still, you certainly don't have a defeater, here, for your belief that you are innocent. The reason, of course, is that P has for you a source of warrant independent of the rest of your beliefs: you remember it. In a case like this, whether you have a defeater for the belief P in question will depend, on the one hand, upon the strength of the intrinsic warrant enjoyed by P, and, on the other, the strength of the evidence against P from EByou-P. Very often the intrinsic warrant will be the stronger. The same will go for religious beliefs, if they do in fact have intrinsic warrant.
If S holds a religious belief B and if B has warrant in the basic way, then even if the probability of B on EBS-B together with the relevant A is low, it won't follow that A is a defeater of B for S. Perhaps the reduction test offers a necessary condition of A's being a defeater for B for S; it is also sufficient only if religious beliefs don't have warrant or positive epistemic status in the basic way, and only if they don't acquire warrant or positive epistemic status from a source other than those that confer that status on scientific beliefs. This is part of the importance of the question noted above in section 2.

5. Naturalism and Science

So far we've examined alleged conflict between theistic religious belief and science with respect to several areas: evolution, divine action in the world, the difference between the scientific attitude and the religious attitude, evolutionary psychology, and HBC. But some have suggested a science/religion (or science/quasi-religion) conflict of a wholly different sort: one between naturalism and science (Otte 2002; Plantinga 1993, 2002a; Rea 2002; Taylor 1963); there are also hints to this effect in Nietzsche (2003) and in Darwin himself (1887). Now naturalism comes in several different colors and flavors. First, there is the view that nature is all there is; there are no supernatural beings. Of course this is a bit slim as an explanation of naturalism; we need to know what nature is, and what allegedly supernatural beings might be like. Perhaps a way to proceed would be to say that naturalism, so conceived, is the view that there is no such person as the God of theism, or anything like God (see, e.g., Beilby 2002). Call this ‘naturalism1’. Another variety of naturalism, ‘scientific naturalism’, we might call it, would be the claim that there are no entities in addition to those endorsed by contemporary science (Kornblith 1994).[12] Given that current science endorses no supernatural beings, scientific naturalism implies naturalism1. There is also what we might call ‘epistemological naturalism’, according to which, roughly speaking, the methods of science are the only proper epistemic methods (Krikorian 1944). With the help of a couple of fairly obvious premises, epistemological naturalism also implies naturalism1, and I'll use ‘naturalism’ to refer to the disjunction of the three versions of naturalism sketched. Advocates of naturalism thus conceived would be (for example) Bertrand Russell (1957), Daniel Dennett (1995), Richard Dawkins (1986), David Armstrong (1978), and the many others that are sometimes said to endorse “The Scientific World-View.” Naturalism is presumably not a religion. In one very important respect, however, it resembles religion: it can be said to perform the cognitive function of a religion. There is that range of deep human questions to which a religion typically provides an answer (above, Section I): what is the fundamental nature of the universe: for example, is it mind first, or matter (non-mind) first? What is most real and basic in it, and what kinds of entities does it display? What is the place of human beings in the universe, and what is their relation to the rest of the world? Are there prospects for life after death? Is there such a thing as sin, or some analogue of sin? If so, what are the prospects of combating or overcoming it? Where must we look to improve the human condition? Is there such a thing as a summum bonum, a highest good for human beings, and if so what is it?
Like a typical religion, naturalism gives a set of answers to these and similar questions. We may therefore say that naturalism performs the cognitive function of a religion, and hence can sensibly be thought of as a quasi-religion. Next, note that many thinkers going back at least to Nietzsche (Nietzsche 2003) and possibly William Whewell (Curtis 1986) have pointed to a potentially worrisome implication of evolutionary theory. The worry can be put as follows. According to orthodox Darwinism, the process of evolution is driven mainly by two mechanisms: random genetic mutation and natural selection. The former is the chief source of genetic variability; by virtue of the latter, a mutation resulting in a heritable, fitness-enhancing trait is likely to spread through the population and be preserved as part of the genome. It is fitness-enhancing behavior and traits that get rewarded by natural selection; what get penalized are maladaptive traits and behaviors. In crafting our cognitive faculties, natural selection will favor cognitive faculties and processes that result in adaptive behavior; it cares not a whit about true belief (as such) or about cognitive faculties that reliably give rise to true belief. As evolutionary psychologist David Sloan Wilson puts it, “the well-adapted mind is ultimately an organ of survival and reproduction” (Wilson 2002, 228). What our minds are for (if anything) is not the production of true beliefs, but the production of adaptive behavior: that our species has survived and evolved at most guarantees that our behavior is adaptive; it does not guarantee or even make it likely that our belief-producing processes are for the most part reliable, or that our beliefs are for the most part true. That is because our behavior could perfectly well be adaptive, but our beliefs false as often as true. Darwin himself apparently worried about this question. “With me,” says Darwin, “the horrid doubt always arises whether the convictions of man's mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey's mind, if there are any convictions in such a mind?” (Darwin 1887). We can briefly state Darwin's doubt as follows. Let R be the proposition that our cognitive faculties are reliable, N the proposition that naturalism is true and E the proposition that we and our cognitive faculties have come to be by way of the processes to which contemporary evolutionary theory points us: what is the conditional probability of R on N&E? I.e., what is P(R | N&E)? Darwin fears it may be rather low. Of course it is only unguided natural selection that prompts the worry. If natural selection were guided and orchestrated by the God of theism, for example, the worry would disappear; God would presumably use the whole process to create creatures of the sort he wanted, creatures in his own image, creatures with reliable cognitive faculties. So it is unguided evolution, and metaphysical beliefs that entail unguided evolution, that prompt this worry about the reliability of our cognitive faculties. Now naturalism entails that evolution, if it occurs, is indeed unguided. But then, so the suggestion goes, it is unlikely that our cognitive faculties are reliable, given the conjunction of naturalism with the proposition that we and our cognitive faculties have come to be by way of natural selection winnowing random genetic variation. If so, one who believes that conjunction will have a defeater for the proposition that our faculties are reliable—but if that's true, she will also have a defeater for any belief produced by her cognitive faculties—including, of course, the conjunction of naturalism with evolution. That conjunction is thus seen to be self-refuting.
If so, however, this conjunction cannot rationally be accepted, in which case there is conflict between naturalism and evolution, and hence between naturalism and science. We can state the argument schematically as follows:

1. P(R | N&E) is low.
2. Anyone who accepts N&E and sees that (1) is true has a defeater for R.
3. Anyone who has a defeater for R has a defeater for any other belief she holds, including N&E itself.
4. Anyone who accepts N&E and sees that (1) is true has a defeater for N&E; hence N&E can't be rationally accepted.

Of course this is brief and merely a schematic version of the argument; there is no space here for the requisite qualifications. Support for (1) could go as follows. First, in order to avoid influence from our natural assumption that our cognitive faculties are reliable, think not about us, but about hypothetical creatures a lot like us, perhaps existing in some other part of the universe; and suppose N and E are true with respect to them. Next, note that naturalism apparently implies materialism (about human beings); current science does not endorse the existence of immaterial souls or minds or selves. So take naturalism to include materialism. What would a belief be, from this point of view? Presumably something like a long-term event or structure in the nervous system—perhaps a structured group of neurons connected and related in certain ways. Such a neural structure will have neurophysiological properties (‘NP properties’): properties specifying the number of neurons involved, the way in which those neurons are connected with each other and with other structures (with muscles, glands, sense organs, other neuronal events, etc.), the average rate and intensity of neuronal firing in various parts of this event, and the ways in which these rates of fire change over time and in response to input from other areas. If this event is really a belief, however, then it will also have content; it will be the belief that p, for some proposition p—perhaps the proposition naturalism is all the rage these days. What is the relation between NP properties, on the one hand, and content properties—such properties as having the proposition that naturalism is all the rage these days as content—on the other? Perhaps the most popular position here is “nonreductive materialism” (NRM): content properties are distinct from but supervene on (see the entry on supervenience) NP properties.[13] Supervenience can be either broadly logical or nomic. In the latter case, there would be psychophysical laws relating NP properties to content properties: laws of the sort any structure with such and such NP properties will have such and such content. These laws presumably will be contingent (in the broadly logical or metaphysical sense). In the former case, there will also be such laws, but they will be necessary rather than contingent. Now take any belief B you like on the part of a member of that hypothetical population: what is the (epistemic) probability that B is true, given N&E and nonreductive materialism—what is P(B | N&E&NRM)? What we know is that B has a certain content (call it ‘C’), and (we may assume or concede) having B is adaptive in the circumstances in which that creature finds itself. What, then, is the probability that C, the content of B, is true? Well, what is the probability that the relevant psychophysical law L connecting NP properties and content properties yields a true proposition as content in this instance?
Having B is adaptive, in the circumstances in which the creature finds itself; its displaying the NP properties on which C supervenes causes adaptive behavior. But why think the content connected with those NP properties by L will be true in this creature's circumstances? What counts for adaptivity are the NP properties and the behavior they cause; it doesn't matter whether the supervening content is true. The NP properties are indeed adaptive; but that provides no reason, so far, for thinking the supervening content is true. Having B is adaptive by virtue of its causing adaptive behavior, not by virtue of having true content. Of course if theism is true, then human beings (as opposed to those hypothetical creatures, for whom naturalism is true) are made in the divine image, which includes the capacity for knowledge; so God would presumably have chosen the psychophysical laws in such a way that in the relevant circumstances, the neurophysiology yields true content. But nothing like that is true given naturalism; to suppose that the content properties that are adaptive for the most part also lead to true content would be wholly unjustified optimism. So what is P(B | N&E&NRM)? Well, since the truth of B doesn't make a difference to the adaptivity of B, B could indeed be true, but is equally likely to be false; we'd have to estimate the probability that it is true as about the same as the probability that it is false. But that means that it is improbable that the believer in question has reliable cognitive faculties, i.e., faculties that produce a sufficient preponderance of true over false beliefs. For example, if the believer in question has 1000 independent beliefs, each as likely to be false as true, the probability that, say, 3/4 of them are true (and this would be a modest requirement for reliability) will be very low—less than 10⁻⁵⁸. So P(R | N&E&NRM) specified to these creatures will be low. But of course the same would hold for us, if naturalism is true: P(R | N&E&NRM) specified to us is equally low.[14] That's the argument for the first premise. According to the second premise, one who sees this and also accepts N&E has a defeater for R, a reason to give it up, to cease believing it. The support offered for this premise is by way of analogy from clear cases. Suppose I believe there is a drug—call it XX—that destroys cognitive reliability; I believe 95% of those who ingest XX become cognitively unreliable. Suppose further that I now believe both that I've ingested XX and that P(R | I've ingested XX) is low; taken together, these two beliefs give me a defeater for my initial belief or assumption that my cognitive faculties are reliable. Furthermore, I can't appeal to any of my other beliefs to show or argue that my cognitive faculties are still reliable; any such other belief is also now suspect or compromised, just as R is. Any such other belief B is a product of my cognitive faculties: but then in recognizing this and having a defeater for R, I also have a defeater for B. Of course there will be many other examples: I'll get the same result if I believe that I am a brain in a vat and that P(R | I'm a brain in a vat) is low; the same goes for the classic Cartesian version of the same idea (namely that I've been created by a being who delights in deception) and for other more homely scenarios, for example, the belief that I've gone insane (perhaps by way of contracting mad cow disease). In all of these cases I get a defeater for R.
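As an aside, the binomial estimate cited in support of premise (1) is easy to check numerically; here is a minimal sketch (assuming, as in the text, 1000 independent beliefs, each true with probability 1/2, and reliability requiring at least three-quarters of them true):

```python
from scipy.stats import binom

# P(at least 750 of 1000 independent beliefs are true | each true with prob. 1/2)
p_reliable = binom.sf(749, 1000, 0.5)  # survival function: P(X >= 750)
print(f"P(3/4 or more true) = {p_reliable:.2e}")  # about 7e-59, below the 10^-58 cited above
```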
Now according to the third premise, one who has a defeater for R has a defeater for any belief she takes to be a product of her cognitive faculties—which is, of course, all of her beliefs. She therefore has a defeater for N&E itself; so one who accepts N&E (and sees that P(R | N&E) is low) has a defeater for N&E, a reason to doubt or reject or be agnostic with respect to it. Nor could she get independent evidence for R; the process of doing so would of course presuppose that her faculties are reliable. She'd be relying on the accuracy of her faculties in believing that the alleged evidence is in fact present and that it is in fact evidence for R. Thomas Reid (1785, 276) put it like this: If a man's honesty were called into question, it would be ridiculous to refer to the man's own word, whether he be honest or not. The same absurdity there is in attempting to prove, by any kind of reasoning, probable or demonstrative, that our reason is not fallacious, since the very point in question is, whether reasoning may be trusted. The argument concludes that the conjunction of naturalism with the theory of evolution cannot rationally be accepted—at any rate by someone who is apprised of this argument and sees the connection between N&E and R. As one might expect, this argument has been controversial; a number of objections have been raised against it (Beilby 1997; Ginet 1995, 403; O'Connor 1994, 527; Ross 1997; Fitelson and Sober 1998; Robbins 1994; Fales 1996; Lehrer 1996; Nathan 1997; Levin 1997; Fodor 1998). There have been responses to the objections (Plantinga 2002a; 2003), responses to those responses (Talbott, forthcoming), and so on; there is nothing like consensus regarding the argument. If the argument is correct, however, and N&E can't rationally be accepted, then there is a conflict between naturalism and evolution; one can't rationally accept them both. Hence there is conflict between naturalism and one of the chief pillars of contemporary science. Insofar as naturalism is a quasi-religion by virtue of performing the cognitive function of a religion, there is a sort of religion/science conflict—not between theistic religion and science, but between naturalism and science.
How can NMR spectroscopy help protein biopharmaceutical development?

Nuclear magnetic resonance (NMR) spectroscopy is a technique that most have heard of but many avoid, especially when dealing with larger biological macromolecules. Why? The common wisdom is that NMR can be complicated and expensive, and involves quantum physics and complex equations. In the fast-paced industrial biopharmaceutical environment there may be a disinclination to expend the necessary time and resources to set up the technique, but Jack Bramham and Alexander Golovanov highlight here some reasons why it may be worth the investment.

THE SUBJECT of NMR may bring to mind Schrödinger equations, Hamiltonians and spin states. For non-specialists, these may appear too distant from the more prosaic day-to-day tasks of a bioprocessing or formulation scientist, such as how to make a biopharmaceutical formulation better, more stable, with increased shelf life and decreased aggregation. Since their introduction in 1982, protein-based biopharmaceuticals have become increasingly important therapies in the treatment of a wide range of diseases, including cancer, autoimmune and blood clotting diseases. Recent years have seen the advancement of more complex monoclonal antibody (mAb) products and the emergence of new protein-based engineered modalities, often applied as high concentration formulations or as co-formulations of several proteins together.

Antibody-drug conjugates, mAbs and new modalities of ever-increasing complexity may suffer from issues typical for any protein; that is, under certain conditions (eg, temperature, pH or over time) they may become destabilised and, due to the inherent general ‘stickiness’ of their surface, proteins may form reversible assemblies or clusters, causing an increase in formulation viscosity, or form irreversible aggregates. In addition, protein solutions may undergo liquid-liquid phase separation, which can initially manifest as sample opalescence due to suspended dense liquid droplets, and later as formation of distinct protein-rich and protein-lean layers in the sample. With all these problems to overcome, the quantum physics of NMR may not appear so scary after all.

The complexity of biopharmaceutical entities and their formulations, and the multitude of possible things that can go wrong at the initial research stages, requires new approaches and orthogonal techniques that can deal with this complexity in situ in the biopharmaceutical formulation. This is where the power of NMR comes into play.

There are multiple books and reviews that describe the theory and practice of NMR but, on an over-simplistic level, it can be described as a spectroscopic technique in which a sample is excited by a range of radiofrequencies and the characteristic response recorded. Each NMR‑active nucleus present in the sample produces a ‘signal’ picked up by the NMR spectrometer. The properties of these signals – their position in the spectrum (‘chemical shift’) and their relaxation rates (longitudinal and transverse) – depend on the neighbourhood and immediate environment at each site (ie, each nucleus), as well as on molecular tumbling rates, which are dependent on apparent molecular size and solution viscosity. Thus, it is possible to simultaneously detect changes of environment in the various components in a sample at different reporter sites and hence ‘see’ what is happening even in complex formulations containing multiple components.
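For a rough sense of why tumbling and diffusion report on molecular size and solution viscosity, the standard Stokes–Einstein (and Stokes–Einstein–Debye) relations for an idealised spherical particle of hydrodynamic radius $r$ in a solvent of viscosity $\eta$ at temperature $T$ are

$$\tau_c \approx \frac{4\pi \eta r^3}{3 k_B T}, \qquad D = \frac{k_B T}{6\pi \eta r},$$

where $\tau_c$ is the rotational correlation time (slower tumbling means faster transverse relaxation and hence broader lines) and $D$ is the translational diffusion coefficient probed by the diffusion experiments discussed next. Real proteins are not ideal spheres, so these relations are order-of-magnitude guides rather than exact descriptions.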
NMR experiments can also easily measure translational self-diffusion of both large and small molecules in the sample, providing additional information on protein clustering or aggregation. While NMR is routinely used in small molecule pharmaceutical development for assessment of chemical structure, purity and stability, it is less widely used in biopharmaceutical development. However, as we demonstrate here, NMR is a powerful technique which, by itself or in combination with orthogonal techniques, can be used to address a wide variety of biopharmaceutical issues.

The isotopes 1H, 13C, 15N and 19F are NMR-visible and are the most relevant for biopharmaceutical molecules. The most abundant, 1H, is present in all proteins as well as in most excipients, leading to significant signal overlap in one-dimensional 1H spectra. While these spectra may be complex for large proteins like mAbs, useful information can still be obtained.1 Particular regions of mAb spectra free from excipient signals can be used to study protein behaviour via various NMR-measured parameters.

In NMR, signal intensity reports on the concentration of atoms from which the signal is observed. For example, signal intensity can be used to determine protein concentration once the protein is denatured.2 In non-denatured protein solutions, signal intensity is affected by a number of factors, including viscosity and self-association. Therefore, by accounting for viscosity and protein concentration, 1H NMR signal intensity can report on mAb self-association, for example, in the presence of different excipients.1 The relaxation and translational diffusion measured for the 1H signal of the mAb can also be independently used to characterise protein behaviour in solution, such as self-association.3-4 Alternatively, if the signals from small molecule excipients present in the same sample are of interest, spectral complexity can be reduced by using ‘relaxation filters’ to remove fast-relaxing signals from protein components, leaving only the slower relaxing small molecule signals. These filtered spectra can be used to detect extractables and leachables from bioprocessing5 or track the success of dialysis.6

Incorporating 13C and/or 15N labels in proteins, often in combination with multi-dimensional NMR spectroscopy, is a method routinely used to address complex questions about protein structure and dynamics. Proteins expressed in E. coli or common yeast systems can be easily enriched with these isotopes, permitting such experiments. While for many mAbs produced in mammalian cells this may not be a feasible strategy, advances in NMR hardware and techniques have enabled experiments to be conducted on these isotopes present at natural abundance. Perhaps the most prominent recent application of NMR to biopharmaceuticals is the ‘fingerprinting’ of protein higher order structure (HOS) in mAbs, performed without any isotopic labelling.7-8 Like human fingerprints, these NMR fingerprints are unique to an individual mAb sequence and HOS, so can be used to directly compare protein identities. This approach is particularly relevant for biosimilars – follow-on versions of mAbs, which are on the rise following expiration of early patents.
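As a toy illustration of the kind of comparison involved, the sketch below scores the agreement between two matched 2D 1H–13C peak lists using a combined chemical shift deviation; the peak positions and the 13C scaling factor are invented for illustration and are not taken from the cited studies:

```python
import numpy as np

# Toy comparison of two matched 2D 1H-13C "fingerprint" peak lists.
# ALPHA_C rescales 13C shift differences onto the 1H scale; 0.25 is a
# commonly used choice, adopted here as an assumption.
ALPHA_C = 0.25

def combined_shift_deviation(ref, test, alpha=ALPHA_C):
    """Per-peak combined 1H/13C chemical shift deviation, in ppm."""
    d_h = ref[:, 0] - test[:, 0]   # 1H shift differences
    d_c = ref[:, 1] - test[:, 1]   # 13C shift differences
    return np.sqrt(d_h**2 + (alpha * d_c)**2)

# Hypothetical matched peaks (1H ppm, 13C ppm): reference mAb vs candidate biosimilar
reference = np.array([[0.82, 23.1], [1.05, 21.4], [2.11, 33.0]])
candidate = np.array([[0.83, 23.2], [1.04, 21.5], [2.10, 33.1]])

dev = combined_shift_deviation(reference, candidate)
print(dev.round(3), "max deviation:", float(dev.max()))  # small values suggest similar HOS
```

In practice, validated chemometric comparisons of full 2D spectra are used rather than a three-peak toy like this, but the underlying question (do the two fingerprints superimpose within tolerance?) is the same.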
Although biosimilars may have an identical primary sequence to their reference mAbs, it is important for efficacy and safety to ensure that post-translational modifications and protein folding result in similar mAb HOS. NMR fingerprinting allows biopharmaceutical companies and regulators to confirm this, as well as to check batch‑to-batch variabilities.

NMR can also be used to study the chemical structure of mAbs and proteins. Structures of amino acid residues can potentially be altered during storage, which in turn can affect mAb activity and physical stability. As specific chemical structures give rise to specific NMR signals, NMR can be used to detect the presence of chemical degradation, such as cyclisation of Glu or Gln residues,9 and to study the effects of specific chemical instabilities, such as deamidation and oxidation, on mAb HOS.10-11 NMR can also be used to study the non‑protein components of mAb structures. While the post-translational modification of mAbs with specific sugar glycans is difficult to fully characterise by conventional techniques, NMR can act as an orthogonal method to quantify glycan content and structure.12 NMR signals are also sensitive to even weak transient interactions between molecules, and so this method is increasingly used to characterise protein-excipient interactions, such as those between mAbs and polysorbates,13 a macrocycle14 or amino acids.15

The various types of NMR instruments available make it a versatile technique for addressing a broad range of formulation problems. Solid‑state NMR can be used to study stabilisation by excipients in solid state formulations16 and to quantify residual moisture after freeze drying.17 Alternatively, imaging instruments – similar to magnetic resonance imaging (MRI) scanners found in hospitals – can be used to study dynamic processes that occur differently across a sample, such as the reconstitution of a solid mAb formulation for injection.18 Finally, benchtop NMR spectrometers can be used to indirectly detect a number of biopharmaceutical protein properties, such as protein concentration or aggregation, based on the behaviour of the solvent water NMR signal.19-20 These much smaller instruments can be incorporated into conventional formulation labs or production lines for inline monitoring using ‘flow’ NMR to detect the emergence of changes, alerting to potential problems in the process.21

Recently, the use of 19F NMR has become increasingly popular; although these atoms are not native parts of protein molecules, they can be incorporated either using non-natural amino acids or by sparsely attaching 19F-containing tags to the protein surface via chemical linkage to cysteine or lysine sidechains. One advantage of observing 19F NMR signals from such probes, which represent the behaviour of the whole molecule, is that there are no other signals in the background. We have recently demonstrated the use of 19F tags to characterise the behaviour of individual mAbs or proteins in complex mixtures22 or co-formulations.23 By differentially labelling each mAb with a specific 19F tag with a unique characteristic chemical shift, the behaviour of the observed signal can be unambiguously linked with the behaviour of a specific protein molecule and tracked under a range of conditions.
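To make the readout concrete, here is a minimal simulated sketch of how two 19F reporter signals might be tracked; all shift positions, linewidths and intensities are invented for illustration:

```python
import numpy as np

# Simulated sketch: two mAbs carry different 19F tags with distinct chemical
# shifts, so each protein contributes its own background-free signal.
ppm = np.linspace(-64.0, -60.0, 2000)  # 19F chemical shift axis, ppm

def lorentzian(x, x0, fwhm, area):
    """Lorentzian line at x0 (ppm) with full width at half maximum fwhm and integral area."""
    hwhm = fwhm / 2.0
    return area * hwhm / (np.pi * ((x - x0)**2 + hwhm**2))

# mAb A tagged at -61.5 ppm (narrow line); mAb B at -62.8 ppm, broadened as if
# clustering had slowed its tumbling: same integral, lower peak height.
spectrum = lorentzian(ppm, -61.5, 0.05, 1.0) + lorentzian(ppm, -62.8, 0.15, 1.0)

for label, x0 in [("mAb A", -61.5), ("mAb B", -62.8)]:
    window = (ppm > x0 - 0.5) & (ppm < x0 + 0.5)
    area = np.trapz(spectrum[window], ppm[window])  # per-protein integral
    height = spectrum[window].max()                 # per-protein peak height
    print(f"{label}: integral {area:.2f}, peak height {height:.1f}")
```

Because the two signals do not overlap, the integral and linewidth of each peak can be followed independently as temperature, pH or excipients are varied.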
This technique makes it possible to characterise mAb-specific clustering in co-formulations24 and to optimise formulations to prevent such clustering. Untangling such complexity present in biopharmaceutical formulation samples in situ – from the viewpoint of different components, both large and small – is what NMR does best and is a capability that may be difficult to achieve by any other single biophysical technique.

About the authors

Jack Bramham graduated with a BSc in Biochemistry from the University of Manchester, UK, in 2016. He is currently a PhD student at the University of Manchester investigating liquid-liquid phase separation of proteins using NMR spectroscopy, funded by a BBSRC CASE studentship.

Dr Alexander P Golovanov is a Reader and group leader at the University of Manchester, UK. He is a structural biologist and NMR spectroscopist, conducting research on proteins, their biophysical properties and their interactions. Common frustrations with making proteins behave in NMR samples have fuelled his interest in how to characterise and improve the behaviour of biopharmaceutical formulations, and his group is currently working in this research area.

References

1. Kheddo P, Cliff MJ, Uddin S, van der Walle CF, Golovanov AP. Characterizing monoclonal antibody formulations in arginine glutamate solutions using 1H NMR spectroscopy. mAbs 2016, 8 (7), 1245-1258.
2. Bradley SA, Jackson WC, Mahoney PP. Measuring Protein Concentration by Diffusion-Filtered Quantitative Nuclear Magnetic Resonance Spectroscopy. Analytical Chemistry 2019, 91 (3), 1962-1967.
3. Kheddo P, Bramham JE, Dearman RJ, Uddin S, van der Walle CF, Golovanov AP. Investigating Liquid-Liquid Phase Separation of a Monoclonal Antibody Using Solution-State NMR Spectroscopy: Effect of Arg·Glu and Arg·HCl. Mol. Pharm. 2017, 14 (8), 2852-2860.
4. Falk BT, Liang YK, Bailly M, Raoufi F, Kekec A, Pissarnitski D, Feng D, Yan L, Lin SN, Fayadat-Dilman L, McCoy MA. NMR Assessment of Therapeutic Peptides and Proteins: Correlations That Reveal Interactions and Motions. ChemBioChem 2019, 6.
5. Skidmore K, Hewitt D, Kao Y-H. Quantitation and characterization of process impurities and extractables in protein-containing solutions using proton NMR as a general tool. Biotechnol. Prog. 2012, 28 (6), 1526-1533.
6. Magarian N, Lee K, Nagpal K, Skidmore K, Mahajan E. Clearance of Extractables and Leachables from Single-Use Technologies via Ultrafiltration/Diafiltration Operations. Biotechnol. Prog. 2016, 32 (3), 718-724.
7. Arbogast LW, Brinson RG, Marino JP. Application of Natural Isotopic Abundance 1H–13C- and 1H–15N-Correlated Two-Dimensional NMR for Evaluation of the Structure of Protein Therapeutics. In Methods in Enzymology, Academic Press: 2015.
8. Brinson RG, et al. Enabling adoption of 2D-NMR for the higher order structure assessment of monoclonal antibody therapeutics. mAbs 2019, 11 (1), 94-105.
9. Hinterholzer A, Stanojlovic V, Cabrele C, Schubert M. Unambiguous Identification of Pyroglutamate in Full-Length Biopharmaceutical Monoclonal Antibodies by NMR Spectroscopy. Analytical Chemistry 2019, 91 (22), 14299-14305.
10. Bandi S, Singh SM, Shah DD, Upadhyay V, Mallela KMG. 2D NMR Analysis of the Effect of Asparagine Deamidation Versus Methionine Oxidation on the Structure, Stability, Aggregation, and Function of a Therapeutic Protein. Mol. Pharm. 2019, 16 (11), 4621-4635.
11. Majumder S, Saati A, Philip S, Liu LL, Stephens E, Rouse JC, Ignatius AA. Utility of High Resolution NMR Methods to Probe the Impact of Chemical Modifications on Higher Order Structure of Monoclonal Antibodies in Relation to Antigen Binding. Pharm Res 2019, 36 (9), 13.
12. Peng JN, Patil SM, Keire DA, Chen K. Chemical Structure and Composition of Major Glycans Covalently Linked to Therapeutic Monoclonal Antibodies by Middle-Down Nuclear Magnetic Resonance. Analytical Chemistry 2018, 90 (18), 11016-11024.
13. Singh SM, Bandi S, Jones DNM, Mallela KMG. Effect of Polysorbate 20 and Polysorbate 80 on the Higher-Order Structure of a Monoclonal Antibody and Its Fab and Fc Fragments Probed Using 2D Nuclear Magnetic Resonance Spectroscopy. Journal of Pharmaceutical Sciences 2017, 106 (12), 3486-3498.
14. Morales MM, Zalar M, Sonzini S, Golovanov AP, van der Walle CF, Derrick JP. Interaction of a Macrocycle with an Aggregation-Prone Region of a Monoclonal Antibody. Mol. Pharm. 2019, 16 (7), 3100-3108.
15. Svilenov HL, Kulakova A, Zalar M, Golovanov AP, Harris P, Winter G. Orthogonal Techniques to Study the Effect of pH, Sucrose, and Arginine Salts on Monoclonal Antibody Physical Stability and Aggregation During Long-Term Storage. Journal of Pharmaceutical Sciences 2020, 109 (1), 584-594.
16. Mensink MA, Nethercott MJ, Hinrichs WLJ, van der Voort Maarschalk K, Frijlink HW, Munson EJ, Pikal MJ. Influence of Miscibility of Protein-Sugar Lyophilizates on Their Storage Stability. AAPS J 2016, 18 (5), 1225-1232.
17. Abraham A, Elkassabany O, Krause ME, Ott A. A nondestructive and noninvasive method to determine water content in lyophilized proteins using low-field time-domain NMR. Magnetic Resonance in Chemistry 2019, 57 (10), 873-877.
18. Partridge TA, Ahmed M, Choudhary SB, van der Walle CF, Patel SM, Bishop SM, Mantle MD. Application of Magnetic Resonance to Assess Lyophilized Drug Product Reconstitution. Pharm Res 2019, 36 (5), 71.
19. Taraban MB, Briggs KT, Yu YB. Magnetic Resonance Relaxometry for Determination of Protein Concentration and Aggregation. Current Protocols in Protein Science 2020, 99 (1), e102.
20. Taraban MB, DePaz RA, Lobo B, Yu YB. Water Proton NMR: A Tool for Protein Aggregation Characterization. Analytical Chemistry 2017, 89 (10), 5494-5502.
21. Taraban MB, Briggs KT, Merkel P, Yu YB. Flow Water Proton NMR: In-Line Process Analytical Technology for Continuous Biomanufacturing. Analytical Chemistry 2019, 91 (21), 13538-13546.
22. Edwards JM, Harris P, Bukrinski JT, Golovanov AP. Use of 19F Differential Labelling for the Simultaneous Detection and Monitoring of Three Individual Proteins in a Serum Environment. ChemPlusChem 2019, 84 (5), 443-446.
23. Edwards JM, Derrick JP, van der Walle CF, Golovanov AP. 19F NMR as a Tool for Monitoring Individual Differentially Labeled Proteins in Complex Mixtures. Mol. Pharm. 2018, 15 (7), 2785-2796.
24. Edwards JM, Bramham JE, Podmore A, Bishop SM, van der Walle CF, Golovanov AP. 19F Dark-State Exchange Saturation Transfer NMR Reveals Reversible Formation of Protein-Specific Large Clusters in High-Concentration Protein Mixtures. Analytical Chemistry 2019, 91 (7), 4702-4708.
Friday, August 23, 2013

New Father-and-Son Quantum Textbook

[Image: "Samarkand, Uzbekistan" by Richard-Karl Karlovitch Zommer]

Samarkand, one of the world's oldest inhabited cities, once prospered as a trading post on the Silk Road between China and Europe. During the Islamic Golden Age (750 AD -- 1258 AD) the city became a famous focus of Arab scholarship in astronomy, medicine and mathematics. In more modern times, there graduated from the State University of Samarkand a physicist, Moses Fayngold, who with his son Vadim, also a physicist, has written a new textbook on quantum mechanics, intended for advanced undergraduates and beginning graduate students. I found this book rich and unpredictable and, like the romantic Silk Road metropolis, offering something fresh and exotic around every corner.

Why does the world need yet another book about quantum mechanics? This question was raised by the father: "[The father], who by his own admission used to think of himself as something of an expert in QM, was not initially impressed by the idea, citing a huge number of excellent contemporary presentations of the subject. Gradually, however, as he grew involved in discussing the issues brought up by his younger colleague, he found it hard to explain some of them even to himself. Moreover, to his surprise, in many instances he could not find satisfactory explanations even in those texts he had previously considered to contain authoritative accounts on the subject." (from the Preface)

Unlike most conventional quantum physics texts, which merely explain things, this book also focuses on many of the loopholes, exceptions, imperfections, misunderstandings, man traps and pitfalls that exist in this complex field. When you buy a new car, you will find an Owner's Manual in the glove compartment that tells you how to change the oil and how to replace the light bulbs. But if you are handy with tools you will also want to purchase the Mechanic's Manual to learn how to do things that only professionals should attempt -- and, in particular, to learn things that YOU SHOULD NOT DO. (Never unscrew part A before releasing part B.) This new quantum textbook is the equivalent of a Mechanic's Manual that makes previous textbooks seem mere Owner's Manuals. Most quantum textbooks tell you how to do things, but I have never run across one like Moses and Vadim's which tells you WHAT NOT TO DO. Over and over again in this text I ran across comments to the effect that "The naive way to do this is B, but B will give you the wrong answer. Here's how to do things right." The authors seem to have anticipated many pitfalls that lie in wait for the quantum neophyte and have posted the appropriate warnings. My guess is that these pitfalls are those into which Moses and Vadim have themselves fallen. Niels Bohr once claimed that the definition of an "expert" in a field is a person who has made all the mistakes in that field. In this unusual book Moses and Vadim give you the advantage of that kind of street-smart expertise.

Their book begins by describing some major phenomena that classical physics could not explain (black-body radiation, the photoelectric effect, low-temperature specific heats and atomic spectra), then shows how one simple concept -- the quantization of energy -- could correctly reproduce these results. Moses and Vadim then describe the origin of Louis de Broglie's hypothesis -- that matter possesses a wave-like nature whose wavelength de Broglie could calculate.
Although this textbook confines itself to non-relativistic quantum mechanics, I was surprised (one surprise of many) to discover that de Broglie's calculation was motivated by special relativity, which means that his discovery is deeper than necessary and transcends its non-relativistic buddies such as the Schrödinger equation. Using the de Broglie hypothesis to physically justify energy quantization (similar to the way that resonance modes quantize the notes of stringed instruments), Moses and Vadim then use the Superposition Principle for waves to construct an "embryonic quantum mechanics" from which much more good physics can be derived without yet mentioning the Schrödinger equation.

This book includes in-depth discussions (always accompanied by Moses and Vadim's dependable pitfall warning signs) of most of the conventional topics in quantum theory including Hilbert space, Dirac notation, angular momentum, scattering theory, band structure, quantum tunneling, density matrices, kaon and neutrino oscillations, quantum entanglement, CHSH, POVMs, CNOT and XOR gates, the Bloch sphere, Zeno's paradox, Schrödinger's Cat, and much much more. Moses and Vadim also introduce a novel topic they call "submissive quantum mechanics" in which they show how to manipulate potentials to create customized wave functions never before realized in nature--a useful skill that may prove profitable in the emerging field of nanotechnology. Again and again while reading this book I got the feeling of a wise adviser at my side. The ratio of explanatory text to equations is large--resulting in a lucidity reminiscent of the classic Feynman Lectures as well as Quantum Theory by David Bohm.

Besides the shortest proof of Bell's theorem, Nick Herbert's main claim to physics fame is his FLASH (First Laser-Amplified Superluminal Hookup) proposal, which purported to send signals faster than light using a "laser-like device" to clone single photons. The FLASH proposal was refuted by Wootters and Zurek, who proved that "a single (unknown) photon cannot be cloned", a result which crucially limits what quantum computers can do--for instance, when quantum hard drives or quantum DVDs are built, the no-cloning theorem provides automatic copy protection courtesy of the laws of physics.

Naturally I was curious about how Moses and Vadim would deal with my FLASH proposal in their hyper-informative "Mechanic's Manual" style. In this I was not disappointed. The authors agree that the W&Z "no perfect cloning of unknown states" proof definitively refutes my FLASH proposal. But what about "imperfect cloning"?, they ask. And what about the cloning of states that are not completely unknown but part of a small prearranged set of known states? Moses and Vadim carefully consider these loopholes (and a few more) to the standard FLASH refutation and definitively decide that FLASH won't work. But in the course of their detailed refutation the reader learns a lot about quantum cloning machines.

This book is a wonderful Mechanic's Manual crammed full of intimate details about the operation of one of the most elegant intellectual sports cars we possess--the theory of non-relativistic quantum mechanics. But in addition to this Mechanic's Manual, I urge you to also purchase an Owner's Manual of your choice, a book that you can use to solve everyday problems in simple ways. (My own favorite Owner's Manual is the classic text by Leonard Schiff, from which I learned QM in those bygone days when the world's largest particle accelerator was the Berkeley Bevatron.)
But next to your trusted Owner's Manual, be sure to include this helpful Mechanic's Manual on your bookshelf, both to deepen your knowledge of quantum mechanics and to help you avoid some of its more obvious pitfalls. This book is perfect for those quantum mechanics who know how to fix Volkswagens and now want to go to work on Porsches.
The Born rule is obvious

Philip Ball has just published an excellent article in Quanta Magazine about two recent attempts at understanding the Born rule: one by Masanes, Galley, and Müller, where they derive the Born rule from operational assumptions, and another by Cabello, which derives the set of quantum correlations from assumptions about ideal measurements. I'm happy with how the article turned out (no bullshit, conveys complex concepts in understandable language, quotes me ;), but there is a point about it that I'd like to nitpick: Ball writes that it was not "immediately obvious" whether the probabilities should be given by $\psi$ or $\psi^2$. Well, it might not have been immediately obvious to Born, but this is just because he was not familiar with Schrödinger's theory. Schrödinger, on the other hand, was very familiar with his own theory, and in the very paper where he introduced the Schrödinger equation he discussed at length the meaning of the quantity $|\psi|^2$. He got it wrong, but my point here is that he knew that $|\psi|^2$ was the right quantity to look at. It was obvious to him because the Schrödinger evolution is unitary, and absolute values squared behave well under unitary evolution.

Born's contribution was, therefore, not mathematical, but conceptual. What he introduced was not the $|\psi|^2$ formula, but the idea that this is a probability. And the difficulty we have with the Born rule until today is conceptual, not mathematical. Nobody doubts that the probability must be given by $|\psi|^2$, but people are still puzzled by these high-level, ill-defined concepts of probability and measurement in an otherwise reductionist theory. And I think one cannot hope to understand the Born rule without understanding what probability is.

Which is why I don't think the papers of Masanes et al. and Cabello can explain the Born rule. They refuse to tackle the conceptual difficulties, and focus on the mathematical ones. What they can explain is why quantum theory immediately goes down in flames if we replace the Born rule with anything else. I don't want to minimize this result: it is nontrivial, and solves something that was bothering me for a long time. I've always wanted to find a minimally reasonable alternative to the Born rule for my research, and now I know that there isn't one.

This is what I like, by the way, in the works of Saunders, Deutsch, Wallace, Vaidman, Carroll, and Sebens. They tackle the conceptual difficulties with probability and measurement head on. I'm not satisfied with their answers, for several reasons, but at least they are asking the right questions.

4 Responses to The Born rule is obvious

1. gentzen says:
Nice to have a non-aligned and sceptical commentator like Araújo in the article. Ty Rex is right, your contributions made the article more enjoyable. And this blog post adds further clarity.

2. Curious says:
This is a genuine question out of ignorance. Do you think there is anything to the fact that Cabello derived the Hilbert space framework as the most general probability theory you can apply to idealised measurements? He doesn't derive the fact that it would be a complex Hilbert space, but it seems to show that if you're an agent sitting there doing measurements you should use the Born rule, unless you find statistical properties in the system that allow you to assume the extra steps needed to narrow down to Kolmogorov probability, e.g. that no measurements fundamentally disturb others, etc.
3. Mateus Araújo says:
I'm not very impressed, to be honest. The assumption that the probabilities come from ideal measurements is quite strong – why should one assume a priori that measurements are repeatable, or that joint measurability implies non-disturbance? I think what it shows is that if you have convinced yourself that your measurements behave like this, then you should expect the correlations you produce to be the quantum ones. Also, I wouldn't use the expression "Kolmogorov probability", as it is rather ill-defined. If you mean probabilities that don't have any property other than positivity and normalisation, well, then your statement is false, because the set of quantum correlations is much more restricted than that.

4. Curious says:
Thanks for that. You're right, I should have said Classical Probability Theory.
Modeling and computation of Bose-Einstein condensates: stationary states, nucleation, dynamics, stochasticity. (English) Zbl 1344.35114
Besse, Christophe (ed.) et al., Nonlinear optical and atomic systems. At the interface of physics and mathematics. Based on lecture notes given at the 2013 Painlevé-CEMPI-PhLAM thematic semester. Cham: Springer; Lille: Centre Européen pour les Mathématiques, la Physique et leurs Interactions (CEMPI) (ISBN 978-3-319-19014-3/pbk; 978-3-319-19015-0/ebook). Lecture Notes in Mathematics 2146, 49-145 (2015).

The authors start with a description of the historical background of the discovery of Bose-Einstein condensates (BECs), beginning with the paper of S. N. Bose in 1924, who proposed for photons a new statistics which includes quantum effects, in contrast to the Maxwell-Boltzmann statistics. The generalization of this idea to atoms by A. Einstein and his prediction of a new state of matter led to the now so-called Bose-Einstein condensates. Also mentioned are the most important experimental attempts, and the progress made, toward confirming the existence of the condensates before E. A. Cornell and C. E. Wieman (using rubidium atoms) on the one hand, and W. Ketterle (using sodium atoms) on the other, were awarded the 2001 Nobel Prize for their realization of Bose-Einstein condensates in 1995 in two independent experiments.

The authors choose the Gross-Pitaevskii equations (GPEs), in their various forms, to describe the BECs mathematically. Starting with the Euler-Lagrange equations and the corresponding Hamiltonian equations, which characterize the dynamics in classical mechanics, the authors adopt the Hamiltonian approach to quantum particles, which are realized through a wave function. The wave function, associated to the particle, determines the probability that the particle is located in a given volume at a time \(t\). The particle is described by the de Broglie relations. The total energy is given via the Hamiltonian. Using these facts, an evolution equation for the wave function with the Hamiltonian is deduced; that is the Schrödinger equation, which describes the dynamics of the wave function associated to the particles. The wave function is generalized to a system of \(N\) particles, and the Hamiltonian is formulated in the first instance for an example of \(N\) noninteracting particles subject to an exterior potential.

The theory is applied to BECs, in which the set of condensed particles occupies the same ground state, that is, the lowest quantum energy state. The Hamiltonian of the system is deduced assuming that the condensate consists of \(N\) indistinguishable particles with the same wave function, subject to an exterior potential and a force which depends on the interaction between the particles. The corresponding Schrödinger equation results in the so-called Gross-Pitaevskii equation, using simplifications for the particle interaction force. Some classes of GPEs are deduced, such as those for rotating BECs to describe superfluids, BECs without (e.g. alkali and hydrogen atoms) and including (e.g. chromium atoms) dipolar interactions, multi-component BECs, and BECs with stochastic effects. Furthermore, some details are outlined, such as that the stationary states are the eigenfunctions of the Hamiltonian operator and the corresponding eigenvalues the quantized energies. It is proved that the stationary states are critical points of the energy functional. Various approaches for the potential are discussed.
Dimensionless forms of GPEs and a dimension reduction are treated. The practical realization of a BEC, and especially its imaging, is a very difficult task. Thus, numerical simulations are required to compute the features of a BEC. Stationary states correspond to stable or metastable states of BECs. The stationary states can be computed by solving a nonlinear eigenvalue problem or by minimizing the energy functional under a constraint. The latter is a nonlinear optimization problem and is discussed in this paper using the so-called Conjugate Normalized Gradient Flow (CNGF) method (also known as the imaginary time method), which generates a minimizing sequence of the energy functional. Several time and space discretizations of the corresponding partial differential equation are discussed. The authors consider a semi-implicit backward Euler scheme in time, with the advantage that a minimizing sequence is produced without a Courant-Friedrichs-Lewy condition, and compare it with Crank-Nicolson schemes. Two approaches are presented for the spatial discretization: a second-order finite difference scheme and a pseudo-spectral discretization technique based on the Fast Fourier Transform (FFT). The advantages and disadvantages of the methods are discussed in detail and validated for different BECs.

The next topic consists in the determination of a suitable initial guess for the nonlinear optimization problem in different situations, and in the construction of simple approximations. Using the Thomas-Fermi approximation, based on neglecting the kinetic energy in the strongly interacting regime, simplified minimization problems are deduced for various potentials. Furthermore, it is outlined that Krylov subspace iterative solvers, such as GMRES and BiCGStab, accelerated by preconditioning, are the most robust and effective algorithms for the solution of the linear systems which have to be solved in each iteration step of the minimization problem, using the semi-implicit backward Euler scheme for the FFT-based pseudo-spectral discretization. The presented numerical methods are implemented in a freely available Matlab toolbox named GPELab (Gross-Pitaevskii Equation Laboratory). The authors indicate that not only different kinds of Gross-Pitaevskii equations and systems can be solved, but also nonlinear Schrödinger equations. The effectiveness of the software is demonstrated by means of some examples.

The numerical solution of the dynamics of deterministic or stochastic GPEs is the next topic of the paper. After the formulation of the corresponding GPEs, time-splitting pseudo-spectral schemes and relaxation schemes for rotating GPEs are treated and applied to various BECs. The essential properties of other schemes are also outlined.

For the entire collection see [Zbl 1328.35002].

MSC:
35Q40 PDEs in connection with quantum mechanics
35Q55 NLS equations (nonlinear Schrödinger equations)
82B10 Quantum equilibrium statistical mechanics (general)
82B26 Phase transitions (general) in equilibrium statistical mechanics
82-08 Computational methods (statistical mechanics) (MSC2010)
82-03 History of statistical mechanics
01A60 History of mathematics in the 20th century
65T50 Numerical methods for discrete and fast Fourier transforms
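The gradient-flow idea described above is easy to prototype. The sketch below (Python with NumPy; the grid, parameters and the simple split-step scheme are illustrative choices of this summary, not taken from the reviewed text or from GPELab) computes a 1D ground state by imaginary-time propagation with an FFT-based kinetic term and renormalization after every step, a simpler stand-in for the semi-implicit backward Euler CNGF scheme discussed above.

```python
import numpy as np

# Toy 1D Gross-Pitaevskii ground state via imaginary-time propagation with
# renormalisation after each step (a simple stand-in for the CNGF scheme),
# using an FFT-based pseudo-spectral kinetic term and a harmonic trap.
n, L = 256, 16.0                          # grid points, periodic domain of length L
dx = L / n
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # spectral wavenumbers
V = 0.5 * x**2                            # harmonic trapping potential
g, dt = 50.0, 1e-3                        # interaction strength, imaginary-time step

psi = np.exp(-x**2).astype(complex)       # initial guess
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(20000):
    # Split step in imaginary time: kinetic part in Fourier space,
    # then potential plus nonlinearity in real space.
    psi = np.fft.ifft(np.exp(-0.5 * dt * k**2) * np.fft.fft(psi))
    psi *= np.exp(-dt * (V + g * np.abs(psi)**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # project back onto unit mass

# Chemical potential of the converged state, mu = <psi|H_GP|psi>.
H_psi = np.fft.ifft(0.5 * k**2 * np.fft.fft(psi)) + (V + g * np.abs(psi)**2) * psi
print("chemical potential:", (np.sum(np.conj(psi) * H_psi) * dx).real)
```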
Quantum Computing

First published Sun Dec 3, 2006; substantive revision Mon Sep 30, 2019

Combining physics, mathematics and computer science, quantum computing and its sister discipline of quantum information have developed in the past few decades from visionary ideas into two of the most fascinating areas of quantum theory. General interest and excitement in quantum computing was initially triggered by Peter Shor (1994), who showed how a quantum algorithm could exponentially “speed-up” classical computation and factor large numbers into primes far more efficiently than any (known) classical algorithm. Shor’s algorithm was soon followed by several other algorithms that aimed to solve combinatorial and algebraic problems, and in the years since the theoretical study of quantum systems serving as computational devices has achieved tremendous progress. Common belief has it that the implementation of Shor’s algorithm on a large scale quantum computer would have devastating consequences for current cryptography protocols, which rely on the premise that all known classical worst-case algorithms for factoring take time exponential in the length of their input (see, e.g., Preskill 2005). Consequently, experimentalists around the world are engaged in attempts to tackle the technological difficulties that prevent the realisation of a large scale quantum computer. But regardless of whether these technological problems can be overcome (Unruh 1995; Ekert and Jozsa 1996; Haroche and Raimond 1996), it is noteworthy that no proof exists yet for the general superiority of quantum computers over their classical counterparts.

The philosophical interest in quantum computing is manifold. From a social-historical perspective, quantum computing is a domain where experimentalists find themselves ahead of their fellow theorists. Indeed, quantum mysteries such as entanglement and nonlocality were historically considered a philosophical quibble, until physicists discovered that these mysteries might be harnessed to devise new efficient algorithms. But while the technology for harnessing the power of 50–100 qubits (the basic unit of information in the quantum computer) is now within reach (Preskill 2018), only a handful of quantum algorithms exist, and the question of whether these can truly outperform any conceivable classical alternative is still open. From a more philosophical perspective, advances in quantum computing may yield foundational benefits. For example, it may turn out that the technological capabilities that allow us to isolate quantum systems by shielding them from the effects of decoherence for a period of time long enough to manipulate them will also allow us to make progress in some fundamental problems in the foundations of quantum theory itself. Indeed, the development and the implementation of efficient quantum algorithms may help us understand better the border between classical and quantum physics (Cuffaro 2017, 2018a; cf. Pitowsky 1994, 100), and perhaps even illuminate fundamental concepts such as measurement and causality. Finally, the idea that abstract mathematical concepts such as computability and complexity may not only be translated into physics, but also re-written by physics bears directly on the autonomous character of computer science and the status of its theoretical entities—the so-called “computational kinds”. As such it is also relevant to the long-standing philosophical debate on the relationship between mathematics and the physical world.
1. A Brief History of the Field

1.1 Physical Computational Complexity

The mathematical model for a “universal” computer was defined long before the invention of computers and is called the Turing machine (Turing 1936). A Turing machine consists of an unbounded tape, a head capable of reading from and writing to it which can occupy one of a potentially infinite number of possible states, and an instruction table (i.e. a transition function). This table, given the head’s current state and the input it reads from the tape in that state, determines (a) the symbol that the head will write on the tape, (b) the internal state it will occupy, and (c) the displacement of the head on the tape. In 1936 Turing showed that since one can encode the instruction table of a Turing machine \(T\) and express it as a binary number \(\#(T)\), there exists a universal Turing machine \(U\) that can simulate the instruction table of any Turing machine on any given input. That the Turing machine model captures the concept of computability in its entirety is the essence of the Church-Turing thesis, according to which any effectively calculable function can be computed using a Turing machine. Admittedly, no counterexample to this thesis (which is the result of convergent ideas of Turing, Post, Kleene and Church) has yet been found. But since it identifies the class of computable functions with the class of those functions which are computable using a Turing machine, this thesis involves both a precise mathematical notion and an informal and intuitive notion, hence cannot be proved or disproved. Simple cardinality considerations show, however, that not all functions are Turing-computable (the set of all Turing machines is countable, while the set of all functions from the natural numbers to the natural numbers is not), and the discovery of this fact came as a complete surprise in the 1930s (Davis 1958).

Computability, or the question whether a function can be computed, is not the only question that interests computer scientists. Beginning especially in the 1960s (Cobham 1965; Edmonds 1965; Hartmanis and Stearns 1965), the question of the cost of computing a function (which was to some extent already anticipated in 1956 by Gödel) also came to be of great importance. This cost, also known as computational complexity, is measured naturally in the physical resources (e.g., time, space, energy) invested in order to solve the computational problem at hand. Computer scientists classify computational problems according to the way their cost function behaves as a function of their input size, \(n\) (the number of bits required to store the input), and in particular, whether it increases exponentially or polynomially with \(n\). Tractable problems are those which can be solved in polynomial cost, while intractable problems are those which can only be solved with exponential cost (the former solutions are commonly regarded as efficient, although an exponential-time algorithm could turn out to be more efficient than a polynomial-time algorithm for some range of input sizes).

So far, the Turing machines we have been discussing have been deterministic; for such machines, their behaviour at any given time is wholly determined by their state plus whatever their input happens to be. In other words, such machines have a unique “instruction table” (i.e. transition function); a deterministic machine is sketched concretely below. We can generalise the Turing model, however, by allowing a machine to instantiate more than one transition function simultaneously.
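Before turning to that generalisation, it may help to see how little machinery a deterministic Turing machine actually needs. The following sketch (in Python; the machine and its encoding are illustrative choices of this entry’s editor, not anything from the literature cited above) implements a transition table that computes binary increment.

```python
# A minimal deterministic Turing machine: transition table, tape, head.
# This toy machine computes binary increment (e.g. "1011" -> "1100").
def run_tm(tape_str):
    tape = dict(enumerate(tape_str))      # unbounded tape: missing cells read as blank "_"
    head, state = 0, "right"
    # (state, symbol) -> (write, move, next_state); move is +1 (R) or -1 (L)
    delta = {
        ("right", "0"): ("0", +1, "right"),
        ("right", "1"): ("1", +1, "right"),
        ("right", "_"): ("_", -1, "carry"),   # hit the right end: start adding 1
        ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry propagates left
        ("carry", "0"): ("1", -1, "done"),    # 0 + carry = 1, halt
        ("carry", "_"): ("1", -1, "done"),    # overflow: write a new leading 1
    }
    while state != "done":
        write, move, state = delta[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

print(run_tm("1011"))  # -> 1100
print(run_tm("111"))   # -> 1000
```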
A nondeterministic Turing machine (NTM), upon being presented with a given input in a given state, is allowed to ‘choose’ which of its transition functions to follow, and we say that it solves a given problem whenever, given some input, there exists at least one path through its state space leading to a solution. Exactly how an NTM “chooses” whether to follow one transition function rather than another is left undefined (in his 1936 paper, Turing originally conceived these choices as those of an external operator). In particular, we do not assume that any probabilities are attached to these choices. In a probabilistic Turing machine (PTM), on the other hand, we characterise the computer’s choices by associating a particular probability with each of its possible transitions.

Probabilistic and deterministic Turing machines (DTMs) have different success criteria. A successful deterministic algorithm for a given problem is guaranteed to yield the correct answer given its input. Of a successful probabilistic algorithm, on the other hand, we only demand that it yield a correct answer with “high” probability (minimally, we demand that it be strictly greater than 1/2). It was believed, until relatively recently, that for some problems (see, e.g. Rabin 1976) probabilistic algorithms are dramatically more efficient than any deterministic alternatives; in other words, that the set or “class” of problems efficiently solvable by PTM is larger than the class of problems efficiently solvable by DTM. Fascinatingly, evidence has been mounting in recent years (e.g. Agrawal, Kayal, and Saxena 2004) that this is not the case, and it is now believed that the PTM model in fact does not offer a computational advantage in this sense over the DTM model (Arora and Barak 2009 Ch. 20). Probabilistic (Turing) computation is nevertheless interesting to consider, because abstractly a quantum computer is just a variation on the PTM which does appear to offer computational advantages over deterministic computation, although as already mentioned this conjecture still awaits a proof. See Hagar (2007) and Cuffaro (2018b) for divergent opinions over what this purported quantum computational advantage tells us about the theory of computational complexity as a whole.

The class \(\mathbf{P}\) (for Polynomial) is the class containing all the computational decision problems that can be solved by a DTM in polynomial time. The class \(\mathbf{NP}\) (for Non-deterministic Polynomial) is the class containing all the computational decision problems that can be solved by an NTM in polynomial time. The most famous problems in \(\mathbf{NP}\) are called “NP-complete”, where “complete” designates the fact that these problems stand or fall together: either they are all tractable, or none of them is! If we knew how to solve an NP-complete problem efficiently (i.e., with polynomial cost) we could use it to efficiently solve any other problem in \(\mathbf{NP}\) (Cook 1971). Today we know of hundreds of examples of NP-complete problems (Garey and Johnson 1979), all of which are reducible one to another with polynomial slowdown, and since the best known algorithm for any of these problems is exponential, the widely believed conjecture is that there is no polynomial algorithm that can solve them. Clearly \(\mathbf{P} \subseteq \mathbf{NP}\). Proving or disproving the conjecture that \(\mathbf{P} \ne \mathbf{NP}\), however, remains perhaps one of the most important open questions in computer science and complexity theory.
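To make the notion of exponential cost concrete, here is a deliberately naive satisfiability checker (a hedged sketch in Python; the clause encoding is an illustrative choice, not a standard library interface). It decides SAT by enumerating all \(2^n\) truth-value assignments, which is exactly the brute-force behaviour that an efficient algorithm for an NP-complete problem would have to avoid.

```python
from itertools import product

# Brute-force SAT: a CNF formula is a list of clauses, each clause a list of
# literals (+i for variable i, -i for its negation). The naive decision
# procedure below inspects up to 2**n assignments: exponential cost in n.
def brute_force_sat(n_vars, clauses):
    for bits in product([False, True], repeat=n_vars):
        # a clause is satisfied if at least one of its literals is true
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits          # a satisfying assignment
    return None                  # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat(3, [[1, -2], [2, 3], [-1, -3]]))  # e.g. (False, False, True)
```

Note that checking a *given* assignment takes only polynomial time; it is the search over assignments that blows up, which is one intuitive way of seeing what membership in \(\mathbf{NP}\) does and does not guarantee.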
Although the original Church-Turing thesis involves the abstract mathematical notion of computability, physicists as well as computer scientists often interpret it as saying something about the scope and limitations of physical computing machines. Wolfram (1985) claims that any physical system can be simulated (to any degree of approximation) by a universal Turing machine, and that complexity bounds on Turing machine simulations have physical significance. For example, if the computation of the minimum energy of some system of \(n\) particles requires at least an exponentially increasing number of steps in \(n\), then the actual relaxation of this system to its minimum energy state will also take exponential time. Aharonov (1999) strengthens this thesis (in the context of showing its putative incompatibility with quantum mechanics) when she says that a PTM can simulate any reasonable physical device at polynomial cost. In order for the physical Church-Turing thesis to make sense we have to relate physical space and time parameters to their computational counterparts: memory capacity and number of computation steps, respectively. There are various ways to do that, leading to different formulations of the thesis (Pitowsky 1990). For example, one can encode the set of instructions of a universal Turing machine and the state of its infinite tape in the binary development of the position coordinates of a single particle. Consequently, one can physically ‘realise’ a universal Turing machine as a billiard-ball computer with hyperbolic mirrors (Moore 1990; Pitowsky 1996). For the most intuitive connection between abstract Turing machines and physical devices see the pioneering work of Gandy (1980), simplified later by Sieg and Byrnes (1999), and discussed, for example, in Copeland (2018). It should be stressed that there is no relation between the original Church-Turing thesis and its physical version (Pitowsky and Shagrir 2003). While the former concerns the concept of computation that is relevant to logic (since it is strongly tied to the notion of proof, which requires validation), it does not analytically entail that all computations should be subject to validation. Indeed, there is a long historical tradition of analog computations which use continuous physical processes (Dewdney 1984), and the output of these computations is validated either by repetitive “runs” or by validating the physical theory that presumably governs the behaviour of the analog computer.

1.2 Physical “Short-cuts” of Computation

Do physical processes exist which contradict the physical Church-Turing thesis? Apart from analog computation, there exist at least two main kinds of example purporting to show that the notion of recursion, or Turing-computability, is not a natural physical property (Pour-el and Richards 1981; Pitowsky 1990; Hogarth 1994). Although the physical systems involved (a specific initial condition for the wave equation in three dimensions and an exotic solution to Einstein’s field equations, respectively) are somewhat contrived, a thriving school of “hypercomputation” that aspires to extend the limited examples of physical “hypercomputers” and in so doing to physically “compute” the non-Turing-computable has nevertheless emerged (for a review see Copeland (2002); for a criticism: Davis (2003); for a recent proposal and response to criticisms see Andréka et al. (2018)).
Quantum hypercomputation is rarely discussed in the literature (see, e.g., Adamyan, Calude, and Pavlov 2004), but the most concrete attempt to harness quantum theory to compute the non-computable is the suggestion to use the quantum adiabatic algorithm (see below) to solve Hilbert’s Tenth Problem (Kieu 2002, 2004)—a Turing-undecidable problem equivalent to the halting problem. Criticism, however, has exposed the unphysical character of the alleged quantum adiabatic hypercomputer (see Hodges 2005; Hagar and Korolev 2007).

Setting aside “hypercomputers”, even if we restrict ourselves only to Turing-computable functions, one can still find many proposals in the literature that purport to display “short-cuts” in computational resources. Consider, e.g., the DNA model of computation that was claimed (Adleman 1994; Lipton 1995) to solve NP-complete problems in polynomial time. A closer inspection shows that the cost of the computation in this model is still exponential, since the number of molecules in the physical system grows exponentially with the size of the problem. Or take an allegedly instantaneous solution to another NP-complete problem using a construction of rods and balls (Vergis, Steiglitz, and Dickinson 1986) that unfortunately ignores the accumulating time-delays in the rigid rods that result in an exponential overall slowdown. It appears that these and other similar models cannot serve as counter-examples to the physical Church-Turing thesis (as far as complexity is concerned) since they all require some exponential physical resource. Note, however, that all these models are based on classical physics, hence the unavoidable question: can the shift to quantum physics allow us to find “short-cuts” in computational resources? The quest for the quantum computer began with the possibility of giving a positive answer to this question.

1.3 Milestones

The idea of a computational device based on quantum mechanics was explored already in the 1970s by physicists and computer scientists. As early as 1969 Stephen Wiesner suggested quantum information processing as a possible way to better accomplish cryptologic tasks. But the first four published papers on quantum information (Wiesner published his only in 1983) belong to Alexander Holevo (1973), R. P. Poplavskii (1975), Roman Ingarden (1976), and Yuri Manin (1980). Better known are contributions made in the early 1980s by Charles H. Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne National Laboratory in Illinois, David Deutsch of the University of Oxford, and Richard P. Feynman of the California Institute of Technology. The idea emerged when scientists were investigating the fundamental physical limits of computation. If technology continued to abide by “Moore’s Law” (the observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every 18 months since the integrated circuit was invented), then the continually shrinking size of circuitry packed onto silicon chips would eventually reach a point where individual elements would be no larger than a few atoms. But since the physical laws that govern the behaviour and properties of the putative circuit at the atomic scale are inherently quantum mechanical in nature, not classical, the natural question arose whether a new kind of computer could be devised based on the principles of quantum physics.
Inspired by Ed Fredkin’s ideas on reversible computation (see Hagar 2016), Feynman was among the first to attempt to provide an answer to this question by producing an abstract model in 1982 that showed how a quantum system could be used to do computations. He also explained how such a machine would be able to act as a simulator for quantum physics, conjecturing that any classical computer could do the same task only inefficiently. In 1985 David Deutsch proposed the first universal quantum Turing machine and paved the way to the quantum circuit model (Deutsch 1989). The young and thriving domain also attracted philosophers’ attention. In 1983 David Albert showed how a quantum mechanical automaton behaves remarkably differently from a classical automaton, and in 1990 Itamar Pitowsky raised the question of whether the superposition principle may allow quantum computers to efficiently solve NP-complete problems. He also stressed that although one could in principle ‘squeeze’ information of exponential complexity into polynomially many quantum states, the real problem lay in the efficient retrieval of this information.

Progress in quantum algorithms began in the 1990s, with the discovery of the Deutsch-Jozsa algorithm (1992) and of Simon’s algorithm (1994). The latter supplied the basis for Shor’s algorithm for factoring. Published in 1994, this algorithm marked a ‘phase transition’ in the development of quantum computing and sparked a tremendous interest even outside the physics community. In that year the first experimental realisation of the quantum CNOT gate with trapped ions was proposed by Cirac and Zoller (1995). In 1995, Peter Shor and Andrew Steane proposed (independently) the first schemes for quantum error-correction. In that same year the first realisation of a quantum logic gate was done in Boulder, Colorado, following Cirac and Zoller’s proposal. In 1996, Lov Grover from Bell Labs invented a quantum search algorithm which yields a provable (though only quadratic) “speed-up” compared to its classical counterparts. A year later the first model for quantum computation based on nuclear magnetic resonance (NMR) techniques was proposed. This technique was realised in 1998 with a 2-qubit register, and was scaled up to 7 qubits in the Los Alamos National Lab in 2000.

Since 2000 the field has seen tremendous growth. New paradigms of quantum algorithms have appeared, such as adiabatic algorithms, measurement-based algorithms, and topological-quantum-field-theory-based algorithms, as well as new physical models for realising a large scale quantum computer with cold ion traps, quantum optics (using photons and optical cavity), condensed matter systems and solid state physics (meanwhile, the first NMR model had turned out to be a dead-end with respect to scaling; see DiVincenzo (2000)). The basic questions, however, remain open even today: (1) theoretically, can quantum algorithms efficiently solve classically intractable problems? (2) operationally, can we actually realise a large scale quantum computer to run these algorithms?

2. Basics

In this section we review the basic paradigm for quantum algorithms, namely the quantum circuit model, which is composed of the basic quantum units of information (qubits) and the basic logical manipulations thereof (quantum gates). For more detailed introductions see Nielsen and Chuang (2000) and Mermin (2007).

2.1 The Qubit

The qubit is the quantum analogue of the bit, the classical fundamental unit of information.
It is a mathematical object with specific properties that can be realised in an actual physical system in many different ways. Just as the classical bit has a state (either 0 or 1), a qubit also has a state. Yet contrary to the classical bit, \(\lvert 0\rangle\) and \(\lvert 1\rangle\) are but two possible states of the qubit, and any linear combination (superposition) thereof is also physically possible. In general, thus, the physical state of a qubit is the superposition \(\lvert\psi \rangle = \alpha \lvert 0\rangle + \beta \lvert 1\rangle\) (where \(\alpha\) and \(\beta\) are complex numbers). The state of a qubit can be described as a vector in a two-dimensional Hilbert space, a complex vector space (see the entry on quantum mechanics). The special states \(\lvert 0\rangle\) and \(\lvert 1\rangle\) are known as the computational basis states, and form an orthonormal basis for this vector space. According to quantum theory, when we try to measure the qubit in this basis in order to determine its state, we get either \(\lvert 0\rangle\) with probability \(\lvert \alpha\rvert^2\) or \(\lvert 1\rangle\) with probability \(\lvert \beta\rvert^2\). Since \(\lvert \alpha\rvert^2 + \lvert\beta\rvert^2 = 1\) (i.e., the qubit is a unit vector in the aforementioned two-dimensional Hilbert space), we may (ignoring the overall phase factor) effectively write its state as \(\lvert \psi \rangle = \cos(\tfrac{\theta}{2})\lvert 0\rangle + e^{i\phi}\sin(\tfrac{\theta}{2})\lvert 1\rangle\), where the numbers \(\theta\) and \(\phi\) define a point on the unit three-dimensional sphere, as shown in the figure below. This sphere is often called the Bloch sphere, and it provides a useful means to visualise the state of a single qubit.

[Figure: the Bloch sphere, with the computational basis states \(\lvert 0\rangle\) and \(\lvert 1\rangle\) at its poles]

Since \(\alpha\) and \(\beta\) are complex and therefore continuous variables one might think that a single qubit is capable of storing an infinite amount of information. When measured, however, it yields only the classical result (0 or 1) with certain probabilities specified by the quantum state. In other words, the measurement changes the state of the qubit, “collapsing” it from a superposition to one of its terms. In fact one can prove (Holevo 1973) that the amount of information actually retrievable from a single qubit (what Timpson (2013, 47ff.) calls its “accessible information”) is no more than one bit. If the qubit is not measured, however, the amount of “hidden” information it “stores” (what Timpson calls its “specification information”) is conserved under its (unitary) dynamical evolution. This feature of quantum mechanics allows one to manipulate the information stored in unmeasured qubits with quantum gates (i.e. unitary transformations), and is one of the sources for the putative power of quantum computers.

To see why, let us suppose we have two qubits at our disposal. If these were classical bits, then they could be in four possible states (00, 01, 10, 11). Correspondingly, a pair of qubits has four computational basis states (\(\lvert 00\rangle\), \(\lvert 01\rangle\), \(\lvert 10\rangle\), \(\lvert 11\rangle\)). But while a single classical two-bit register can store these numbers only one at a time, a pair of qubits can also exist in a superposition of these four basis states, each with its own complex coefficient (whose mod square, being interpreted as a probability, is normalised).
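These state-vector rules are easy to experiment with numerically. A minimal sketch (Python with NumPy; the amplitudes below are arbitrary illustrative choices) represents a qubit as a unit vector and samples measurement outcomes with the probabilities \(\lvert\alpha\rvert^2\) and \(\lvert\beta\rvert^2\):

```python
import numpy as np

# A qubit as a unit vector in C^2; measuring in the computational basis
# yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
alpha, beta = 1/np.sqrt(3), np.sqrt(2/3) * np.exp(1j * np.pi/4)
psi = np.array([alpha, beta])
assert np.isclose(np.linalg.norm(psi), 1.0)      # normalisation

probs = np.abs(psi)**2
rng = np.random.default_rng(seed=1)
outcomes = rng.choice([0, 1], size=100_000, p=probs)
print(probs)                                     # [0.333..., 0.666...]
print(np.bincount(outcomes) / outcomes.size)     # empirical frequencies agree

# Two qubits: a 4-dimensional state vector over |00>, |01>, |10>, |11>;
# here, the uniform superposition with equal weight on all four basis states.
psi2 = np.full(4, 0.5)
print(np.abs(psi2)**2)                           # each outcome has probability 1/4
```

The same representation scales to \(n\) qubits as vectors of length \(2^n\), which is how the gate constructions in the next subsection can be simulated.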
For example, using a “Hadamard gate”—which unitarily transforms a single qubit to the state \(\frac{\lvert 0\rangle + \lvert 1\rangle}{\sqrt 2}\) whenever it is in the state \(\lvert 0\rangle\), and to the state \(\frac{\lvert 0\rangle - \lvert 1\rangle}{\sqrt 2}\) whenever it is in the state \(\lvert 1\rangle\)—we can transform the \((n+1)\)-qubit state \(\lvert 0...01 \rangle\) as follows:

\[\tag{1} \lvert 0^n \rangle \lvert 1 \rangle \xrightarrow{H^{\otimes(n+1)}} \sum_{x \in \{0,1\}^n}\frac{1}{2^{n/2}}~\lvert x \rangle~\lvert - \rangle,\]

where \(\lvert - \rangle =_{df} \frac{\lvert 0 \rangle - \lvert 1 \rangle}{\sqrt 2}\). The resulting state is a superposition of \(2^n\) terms and can be imagined to “store” that many bits of (specification) information. The difficult task, however, is to use this information efficiently in light of the bound on the state’s accessible information.

2.2 Quantum Gates

Classical computational gates are Boolean logic gates that manipulate information stored in bits. In quantum computing such gates are represented by matrices, and can be visualised as rotations over the Bloch sphere. This visualisation represents the fact that quantum gates are unitary operators, i.e., they preserve the norm of the quantum state (if \(U\) is a matrix describing a single qubit gate, then \(U^{\dagger}U=I\), where \(U^{\dagger}\) is the adjoint of \(U\), obtained by transposing and then complex-conjugating \(U\)). In classical computing some gates are “universal”. For example the NAND gate is a gate that evaluates the function “not both A and B” over its two inputs. By stringing together a number of NAND gates it is possible to compute any computable function. Another universal gate is the NOR gate, which evaluates the function “not (A or B)”. In the context of quantum computing it was shown (DiVincenzo 1995) that two-qubit gates (i.e. which transform two qubits) are sufficient to realise a general quantum circuit, in the sense that a circuit composed exclusively from a small set of one- and two-qubit gates can approximate to arbitrary accuracy any unitary transformation of \(n\) qubits. Barenco et al. (1995) showed in particular that any multiple qubit logic gate may be composed in this sense from a combination of single-qubit gates and the two-qubit controlled-not (CNOT) gate, which either flips or preserves its “target” input bit depending on the state of its “control” input bit (specifically: in a CNOT gate the output state of the target qubit is the result of an operation analogous to the classical exclusive-OR (XOR) gate on the inputs). One general feature of quantum gates that distinguishes them from classical gates is that they are always reversible: the inverse of a unitary matrix is also a unitary matrix, and thus a quantum gate can always be inverted by another quantum gate.

[Figure: the CNOT gate]

Unitary gates manipulate information stored in the “quantum register”—a quantum system—and in this sense ordinary (unitary) quantum evolution can be regarded as a computation. In order to read the result of this computation, however, the quantum register must be measured. The measurement gate is a non-unitary gate that “collapses” the quantum superposition in the register onto one of its terms, with a probability given by the squared modulus of its complex coefficient.
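Both claims just made (that quantum gates are unitary, and that Hadamard gates generate the superposition of equation (1)) can be checked directly on small matrices. The following is an illustrative sketch in Python with NumPy, not part of the entry itself:

```python
import numpy as np

# Gates as unitary matrices: H is the Hadamard gate; CNOT flips the target
# qubit conditional on the control qubit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

assert np.allclose(H.conj().T @ H, np.eye(2))       # unitarity: H†H = I
assert np.allclose(CNOT.conj().T @ CNOT, np.eye(4)) # unitarity of CNOT

# Equation (1) for n = 2: apply H to each of the three qubits of |0>|0>|1>.
n = 2
state = np.zeros(2**(n + 1)); state[1] = 1.0        # |001>, first qubit most significant
H_all = H
for _ in range(n):
    H_all = np.kron(H_all, H)                       # H tensored (n+1) times
out = H_all @ state
# Every row below is 2**(-n/2) times (1, -1)/sqrt(2): the |x>|-> pattern of eq. (1).
print(out.reshape(2**n, 2))
```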
Usually this measurement is done in the computational basis (see the previous section), but since quantum mechanics allows one to express an arbitrary state as a linear combination of basis states, provided that the states are orthonormal (a condition that ensures normalisation) one can in principle measure the register in any arbitrary orthonormal basis. This, however, doesn’t mean that measurements in different bases are equivalent complexity-wise. Indeed, one of the difficulties in constructing efficient quantum algorithms stems exactly from the fact that measurement collapses the state, and some measurements are much more complicated than others.

2.3 Quantum Circuits

Quantum circuits are similar to classical computer circuits in that they consist of wires and logical gates. The wires are used to carry the information, while the gates manipulate it (note that the wires are abstract and do not necessarily correspond to physical wires; they may correspond to a physical particle, e.g. a photon, moving from one location to another in space, or even to time-evolution). Conventionally, the input of the quantum circuit is assumed to be a number of qubits each initialised to a computational basis state (typically \(\lvert 0\rangle\)). The output state of the circuit is then measured in the computational basis, or in any other arbitrary orthonormal basis. The first quantum algorithms (i.e. Deutsch-Jozsa, Simon, Shor and Grover) were constructed in this paradigm. Additional paradigms for quantum computing exist today that differ from the quantum circuit model in many interesting ways. So far, however, they all have been demonstrated to be computationally equivalent to the circuit model (see below), in the sense that any computational problem that can be solved by the circuit model can be solved by these new models with only a polynomial overhead in computational resources. This is analogous to the fact that in classical computation every “reasonable” model can be efficiently simulated by any other. For discussion see Cuffaro (2018b, 274).

3. Quantum Algorithms

Algorithm design is a highly complicated task, and in quantum computing, delicately leveraging the features of quantum mechanics in order to make our algorithms more efficient makes the task even more complicated. But before discussing this aspect of quantum algorithm design, let us first convince ourselves that quantum computers can be harnessed to perform standard, classical, computation without any computational speed-up. In some sense this is obvious, given the belief in the universal character of quantum mechanics, and the observation that any quantum computation that is diagonal in the computational basis, i.e., that involves no interference between the qubits, is effectively classical. Yet the demonstration that quantum circuits can be used to simulate classical circuits is not straightforward (recall that the former are always reversible while the latter use gates which are in general irreversible). Indeed, quantum circuits cannot be used directly to simulate classical computation, but the latter can still be simulated on a quantum computer using an intermediate gate, namely the Toffoli gate. This universal classical gate has three input bits and three output bits. Two of the input bits are control bits, unaffected by the action of the gate. The third input bit is a target bit that is flipped if both control bits are set to 1, and otherwise is left alone.
This gate is reversible (its inverse is itself), and by stringing a number of such gates together one can simulate any classical irreversible circuit. Consequently, using the quantum version of the Toffoli gate (which by definition permutes the computational basis states similarly to the classical Toffoli gate) one can simulate, although rather tediously, irreversible classical logic gates with quantum reversible ones. Quantum computers are thus capable of performing any computation which a classical deterministic computer can do.

What about probabilistic computation? Not surprisingly, a quantum computer can also simulate this type of computation by using another famous quantum gate, namely the Hadamard gate, a single-qubit gate which receives as input the state \(\lvert 0\rangle\) and produces the state \(\frac{\lvert 0\rangle + \lvert 1\rangle}{\sqrt{2}}\). Measuring this output state yields \(\lvert 0\rangle\) or \(\lvert 1\rangle\) with 50/50 probability, which can be used to simulate a fair coin toss.

[Figure: the Hadamard gate]

Obviously, if quantum algorithms could be used only to simulate classical algorithms, then the technological advancement in information storage and manipulation, encapsulated in “Moore’s law”, would have only trivial consequences on computational complexity theory, leaving the latter unaffected by the physical world. But while some computational problems will always resist quantum “speed-up” (in these problems the computation time depends on the input, and this feature will lead to a violation of unitarity, hence to an effectively classical computation even on a quantum computer—see Myers (1997) and Linden and Popescu (1998)), the hope is, nonetheless, that quantum algorithms may not only simulate classical ones, but that they will actually outperform the latter in some cases, and in so doing help to re-define the abstract notions of tractability and intractability and violate the physical Church-Turing thesis, at least as far as computational complexity is concerned.

3.1 Quantum-Circuit-Based Algorithms

3.1.1 Oracles

The first quantum algorithms were designed to solve problems which essentially involve the use of an “oracle”, so let us begin by explaining this term. Oracles are used by computer scientists as conceptual aids in the complexity-theoretic analysis of algorithms. We can think of an oracle as a kind of imaginary magic black box (Arora and Barak (2009, 72–73); Aaronson (2013a, 29ff.)) to which, like the famous oracle at Delphi, one poses (yes or no) questions. Unlike that ancient oracle, the oracles considered in computer science always return an answer in a single time step. For example, we can imagine an oracle to determine whether a given Boolean formula is satisfiable or not: given as input the description of a particular propositional formula, the oracle outputs—in a single time step—a single bit indicating whether or not there is a truth-value assignment satisfying that formula. Obviously such a machine does not really exist—SAT is an NP-complete problem—but that is not the point. The point of using such imaginary devices is to abstract away from certain “implementational details” which are for whatever reason deemed unimportant for the complexity-theoretic analysis of a given problem. For example, Simon’s problem (Simon 1994, see below) is that of determining the period of a given function \(f\) that is periodic under bit-wise modulo-2 addition.
Relative to Simon’s problem, we judge the internal complexity of \(f\) to be unimportant, and so abstract away from it by imagining that we have an oracle to evaluate it in a single step. As useful as these conceptual devices are, however, their usefulness has limitations. To take one example, there are oracles relative to which P = NP, as well as oracles relative to which P \(\ne\) NP. Such questions (and many others) are not clarified by oracles (see Fortnow 1994).

3.1.2 Deutsch’s Algorithm

Deutsch (1989) asks the following question: suppose we have a function \(f\) which can be either constant—i.e. such that it produces the same output value for each of its possible inputs—or balanced—i.e. such that the output of one half of its possible inputs is the opposite of the output of the other half. The particular example considered is the function \(f : \{0,1\} \rightarrow \{0,1\}\), which is constant if \(f(0) = f(1)\) and balanced if \(f(0) \ne f(1)\). Classically it would take two evaluations of the function to tell whether it is one or the other. Quantumly, we can answer this question in one evaluation. For Deutsch, the explanation for this complexity reduction involves an appeal to “many computational worlds” (see section 5.1.1). Arguably, however, a fully satisfactory answer appeals only to the superposition principle and entanglement (Bub 2010).

After initially preparing the first and second qubits of the computer in the state \(\lvert 0\rangle\lvert 0\rangle\), one then “flips” the second qubit using a “NOT” gate (i.e. a Pauli X operation) to \(\lvert 1 \rangle\), and then subjects each qubit to a Hadamard gate. We now send the two qubits through an oracle or ‘black box’ which we imagine as a unitary gate, \(\mathbf{U}_f\), representative of the function whose character (of being either constant or balanced) we wish to determine. We define \(\mathbf{U}_f\) so that it takes inputs like \(\lvert x,y\rangle\) to \(\lvert x, y\oplus f (x)\rangle\), where \(\oplus\) is addition modulo two (i.e. exclusive-or). The first qubit is then fed into a further Hadamard gate, and the final output of the algorithm (prior to measurement) is the state:

\[\pm\lvert f(0)\oplus f(1)\rangle~\lvert - \rangle,\]

where \(\lvert - \rangle =_{df} \frac{\lvert 0 \rangle - \lvert 1 \rangle}{\sqrt 2}\). Since \(f(0)\oplus f(1)\) is 0 if the function is constant and 1 if the function is balanced, a single measurement of the first qubit suffices to retrieve the answer to our original question regarding the function’s nature. And since there are two possible constant functions and two possible balanced functions from \(\{0,1\}\) to \(\{0,1\}\), we can characterise the algorithm as distinguishing, using only one oracle call, between two quantum disjunctions without finding out the truth values of the disjuncts themselves, i.e. without determining which balanced or which constant function \(f\) is (Bub 2010).

A generalisation of Deutsch’s problem, called the Deutsch-Jozsa problem (Deutsch and Jozsa 1992), enlarges the class of functions under consideration so as to include all of the functions \(f:\{0,1\}^n\to\{0,1\}\), i.e. rather than only considering \(n = 1\). The best deterministic classical algorithm for determining whether a given such function is constant or balanced requires \(\frac{2^{n}}{2}+1\) queries to an oracle in order to solve this problem. In a quantum computer, however, we can answer the question using one oracle call.
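The whole procedure fits in a few lines of linear algebra, which makes the one-oracle-call claim easy to verify numerically. In the sketch below (Python with NumPy; an illustrative simulation, with the qubit ordering and helper names this editor’s own choices), \(\mathbf{U}_f\) is built as a permutation matrix taking \(\lvert x,y\rangle\) to \(\lvert x, y\oplus f(x)\rangle\):

```python
import numpy as np

# Deutsch's algorithm for f: {0,1} -> {0,1}, simulated with state vectors.
# Basis ordering: index = 2*x + y, with the first qubit most significant.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def U_f(f):
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2*x + (y ^ f(x)), 2*x + y] = 1      # |x,y> -> |x, y xor f(x)>
    return U

def deutsch(f):
    state = np.zeros(4); state[0b01] = 1.0        # |0>|1>: second qubit already flipped
    state = np.kron(H, H) @ state                 # Hadamard on both qubits
    state = U_f(f) @ state                        # the single oracle query
    state = np.kron(H, I2) @ state                # Hadamard on the first qubit
    p_first_is_1 = abs(state[0b10])**2 + abs(state[0b11])**2
    return "balanced" if p_first_is_1 > 0.5 else "constant"

for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    print(deutsch(f))   # constant, constant, balanced, balanced
```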
Generalising our conclusion regarding the Deutsch algorithm, we may say that the Deutsch-Jozsa algorithm allows one to evaluate a global property of the function in one measurement because the output state is a superposition of balanced and constant states such that the balanced states all lie in a subspace orthogonal to the constant states, and can therefore be distinguished from the latter in a single measurement (Bub 2006a).

3.1.3 Simon’s Algorithm

Suppose we have a Boolean function \(f\) on \(n\) bits that is 2-to-1, i.e. that takes \(n\) bits to \(n-1\) bits in such a way that for every \(n\)-bit integer \(x_1\) there is an \(n\)-bit integer \(x_2\) for which \(f (x_{1}) = f (x_{2})\). The function is moreover periodic in the sense that \(f(x_1) = f(x_2)\) if and only if \(x_1 = x_2 \oplus a\), where \(\oplus\) designates bit-wise modulo 2 addition and \(a\) is an \(n\)-bit nonzero number called the period of \(f\). Simon’s problem is the problem of finding \(a\) given \(f\). Relative to an oracle \(U_f\) which evaluates \(f\) in a single step, Simon’s quantum algorithm (Simon 1994) finds the period of \(f\) in a number of oracle calls that grows only linearly with \(n\), while the best known classical algorithm requires an exponentially greater number of oracle calls. Simon’s algorithm reduces to Deutsch’s algorithm when \(n=2\), and can be regarded as an extension of the latter, in the sense that in both cases a global property of a function is evaluated in no more than a (sub-)polynomial number of oracle invocations, owing to the fact that the output state of the computer just before the final measurement is decomposed into orthogonal subspaces, only one of which contains the problem’s solution. Note that one important difference between Deutsch’s and Simon’s algorithms is that the former yields a solution with certainty, whereas the latter only yields a solution with probability very close to 1. For more on the logical analysis of these first quantum-circuit-based algorithms see Bub (2006a) and Bub (2010).

3.1.4 Shor’s Algorithm

The algorithms just described, although demonstrating the potential superiority of quantum computers over their classical counterparts, nevertheless deal with apparently unimportant computational problems. Moreover, the speed-ups in each of them are only relative to their respective oracles. It is doubtful whether research into quantum computing would have attracted so much attention and evolved to its current status if its merit could be demonstrated only with these problems. But in 1994 Peter Shor realised that Simon’s algorithm could be harnessed to solve a much more interesting and crucial problem, namely factoring, which lies at the heart of current cryptographic protocols such as RSA (Rivest, Shamir, and Adleman 1978). Shor’s algorithm has turned quantum computing into one of the most exciting research domains in quantum mechanics.

Shor’s algorithm exploits the ingenious number-theoretic argument that two prime factors \(p,q\) of a positive integer \(N=pq\) can be found by determining the period of a function \(f(x) = y^x \bmod N\), for any \(y < N\) which has no common factors with \(N\) other than 1 (Nielsen and Chuang 2000, App. 4). The period \(r\) of \(f(x)\) depends on \(y\) and \(N\).
Once one knows it, one can factor \(N\) if \(r\) is even and \(y^{r/2} \not\equiv -1 \pmod N\), which will jointly be the case with probability greater than \(\frac{1}{2}\) for any \(y\) chosen randomly (if not, one chooses another value of \(y\) and tries again). The factors of \(N\) are the greatest common divisors of \(y^{r/2} \pm 1\) and \(N\), which can be found in polynomial time using the well-known Euclidean algorithm. In other words, Shor's remarkable result rests on the discovery that the problem of factoring reduces to the problem of finding the period of a certain periodic function \(f: Z_{n} \rightarrow Z_{N}\), where \(Z_{n}\) is the additive group of integers mod \(n\). (Note that \(f(x) = y^{x} \bmod N\), so that \(f(x+r) = f(x)\) if \(x+r \le n\). The function is periodic if \(r\) divides \(n\) exactly, otherwise it is almost periodic.) That this problem can be solved efficiently by a quantum computer is hinted at by Simon's algorithm, which considers the more restricted case of functions periodic under bit-wise modulo-2 addition, as opposed to the functions periodic under ordinary addition considered here. Shor's result is the most dramatic example so far of quantum "speed-up" of computation, notwithstanding the fact that factoring is believed to be in NP but not NP-complete (see Aaronson 2013a, 64–66). To verify whether \(n\) is prime takes a number of steps which is polynomial in \(\log_{2}n\) (the binary encoding of a natural number \(n\) requires \(\log_{2}n\) resources). But nobody knows how to factor numbers into primes in polynomial time, and the best classical algorithms we have for this problem are sub-exponential. This is yet another open problem in the theory of computational complexity. Modern cryptography and Internet security protocols are based on these facts (Giblin 1993): it is easy to find large prime numbers fast, and it is hard to factor large composite numbers in any reasonable amount of time. The discovery that quantum computers can solve factoring in polynomial time has had, therefore, a dramatic effect. The implementation of the algorithm on a physical machine would have economic, as well as scientific, consequences (Alléaume et al. 2014).

3.1.5 Grover's Algorithm

In a brilliant undercover operation, Agent 13 has managed to secure two crucial bits of information concerning the whereabouts of the arch-villain Siegfried: the phone number of the secret hideout from which he intends to begin carrying out KAOS's plans for world domination, and the fact that the number is a listed one (apparently an oversight on Siegfried's part). Unfortunately you and your colleagues at CONTROL have no other information besides this. Can you find Siegfried's hideout using only this number and a phone directory? In theoretical computer science this task is known as an unstructured search. In the worst case, if there are \(n\) entries in the directory, the computational resources required to find the entry will be linear in \(n\). Grover (1996) showed how this task could be done with a quantum algorithm using computational resources on the order of only \(\sqrt{n}\). Agreed, this "speed-up" is more modest than Shor's, since unstructured search belongs to the class \(\mathbf{P}\); but contrary to Shor's case, where the classical complexity of factoring is still unknown, here the superiority of the quantum algorithm, however modest, is definitely provable.
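A small state-vector sketch of the Grover iteration (our own illustrative code; the eight-entry "directory" and the target index are arbitrary choices) shows the amplitude concentrating on the sought entry after roughly \((\pi/4)\sqrt{n}\) steps:

```python
# Minimal sketch of Grover iteration on an 8-entry unstructured search.
import numpy as np

n = 8                      # 3 qubits, i.e. n = 8 "directory entries"
target = 5                 # the entry we are searching for (arbitrary)

psi = np.ones(n) / np.sqrt(n)                        # uniform superposition
oracle = np.eye(n); oracle[target, target] = -1      # phase-flip the target
diffusion = 2 * np.full((n, n), 1 / n) - np.eye(n)   # inversion about the mean

# About (pi/4) * sqrt(n) iterations are optimal for a single target.
for _ in range(int(round(np.pi / 4 * np.sqrt(n)))):
    psi = diffusion @ (oracle @ psi)

print(np.abs(psi) ** 2)    # probability now concentrated on index 5 (~0.95)
```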
That this quadratic "speed-up" is also the optimal quantum "speed-up" possible for this problem was proved by Bennett, Bernstein, Brassard, and Vazirani (1997). Although the purpose of Grover's algorithm is usually described as "searching a database", it may be more accurate to describe it as "inverting a function". Roughly speaking, if we have a function \(y=f(x)\) that can be evaluated on a quantum computer, Grover's algorithm allows us to calculate \(x\) given \(y\). Inverting a function is related to searching a database because we could come up with a function that produces a particular value of \(y\) if \(x\) matches a desired entry in a database, and another value of \(y\) for other values of \(x\). The applications of this algorithm are far-reaching (even more so than foiling Siegfried's plans for world domination). For example, it can be used to determine efficiently the number of solutions to an \(N\)-item search problem, hence to perform exhaustive searches on a class of solutions to an NP-complete problem and substantially reduce the computational resources required for solving it.

3.2 Adiabatic Algorithms

Many decades have passed since the discovery of the first quantum algorithm, but so far little progress has been made with respect to the "Holy Grail" of solving an NP-complete problem with a quantum circuit. In 2000 a group of physicists from MIT and Northeastern University (Farhi et al. 2000) proposed a novel paradigm for quantum computing that differs from the circuit model in several interesting ways. Their goal was to use this algorithm to solve an instance of the satisfiability problem (see above), one of the most famous NP-complete problems (Cook 1971). According to the adiabatic theorem (e.g. Messiah 1961), and given certain specific conditions, a quantum system remains in its lowest energy state, known as the ground state, along an adiabatic transformation in which the system is deformed slowly and smoothly from an initial Hamiltonian to a final Hamiltonian (as an illustration, think of moving a sleeping baby in a cradle from the living room to the bedroom: if the transition is done slowly and smoothly enough, and if the baby is a sound sleeper, then it will remain asleep during the whole transition). The most important condition in this theorem concerns the energy gap between the ground state and the next excited state (in our analogy, this gap reflects how sound asleep the baby is). This gap controls the required evolution time \(T\), to which it is inversely related: the smaller the gap, the longer the evolution must be. If this gap exists during the entire evolution (i.e., there is no level crossing between the energy states of the system), the theorem dictates that in the adiabatic limit (when \(T\rightarrow \infty\)) the system will remain in its ground state. In practice, of course, \(T\) is always finite, but the longer it is, the less likely it is that the system will deviate from its ground state during the time evolution. The crux of the quantum adiabatic algorithm, which rests on this theorem, lies in the possibility of encoding a specific instance of a given decision problem in a certain Hamiltonian (this can be done by capitalising on the well-known fact that any decision problem can be derived from an optimisation problem by incorporating into it a numerical bound as an additional parameter). One then starts the system in a ground state of another Hamiltonian which is easy to construct, and slowly evolves the system in time, deforming it towards the desired Hamiltonian.
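The interpolation just described can be written as \(H(s) = (1-s)H_0 + sH_1\), with \(s\) running from 0 to 1. The following toy sketch is our own (the two-qubit Hamiltonians are arbitrary illustrative choices); it computes the spectral gap along the interpolation, the quantity on which, as explained below, the algorithm's runtime depends:

```python
# Toy illustration of the adiabatic interpolation H(s) = (1-s)*H0 + s*H1
# and its spectral gap (our own sketch, with arbitrary 2-qubit Hamiltonians).
import numpy as np

X = np.array([[0, 1], [1, 0]]); I = np.eye(2)

# Beginning Hamiltonian: transverse field; its ground state (the uniform
# superposition) is easy to prepare.
H0 = -(np.kron(X, I) + np.kron(I, X))
# Final Hamiltonian: a diagonal cost function whose minimum encodes the answer.
H1 = np.diag([3.0, 1.0, 2.0, 0.0])     # ground state |11> is the "solution"

gaps = []
for s in np.linspace(0, 1, 101):
    evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)   # ascending eigenvalues
    gaps.append(evals[1] - evals[0])

print(min(gaps))   # the minimum gap governs the required evolution time
```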
According to the quantum adiabatic theorem, and given the gap condition, the result of such a physical process is another energy ground state that encodes the solution to the desired decision problem. The adiabatic algorithm is thus a rather "laid back" algorithm: one needs only to start the system in its ground state, deform it adiabatically, and measure its final ground state in order to retrieve the desired result. But whether or not this algorithm yields the desired "speed-up" depends crucially on the behaviour of the energy gap as the number of degrees of freedom in the system increases. If this gap decreases exponentially with the size of the input, then the evolution time of the algorithm will increase exponentially; if the gap decreases polynomially, the decision problem so encoded could be solved efficiently in polynomial time. Although physicists have been studying spectral gaps for almost a century, they have never done so with quantum computing in mind. How this gap behaves in general thus remains an open empirical question. The quantum adiabatic algorithm holds much promise (Farhi et al. 2001). It has been shown (Aharonov et al. 2008) to be polynomially equivalent to the circuit model (that is, each model can simulate the other with only polynomial overhead in the number of qubits and computational steps), but the caveat that is sometimes left unmentioned is that its application to an intractable computational problem may sometimes require solving another, equally intractable task (this general worry was first raised by a philosopher; see Pitowsky (1990)). Indeed, Reichardt (2004) has shown that there are simple problems for which the algorithm will get stuck in a local minimum, in which there are exponentially many eigenvalues all exponentially close to the ground state energy, so applying the adiabatic theorem, even for these simple problems, will take exponential time, and we are back to square one.

3.3 Measurement-Based Algorithms

Measurement-based algorithms differ from circuit algorithms in that instead of employing unitary evolution as the basic mechanism for the manipulation of information, these algorithms essentially make use of non-unitary measurements in the course of a computation. They are especially interesting from a foundational perspective because they have no evident classical analogues and because they offer new insight on the role of entanglement in quantum computing (Jozsa 2006). They may also have interesting engineering-related consequences, suggesting a different kind of computer architecture which is more fault tolerant (Nielsen and Dawson 2005). Measurement-based algorithms fall into two categories. The first is teleportation quantum computing (based on an idea of Gottesman and Chuang (1999), and developed into a computational model by Nielsen (2003) and Leung (2004)). The second is the "one way quantum computer", also known as the "cluster state" model (Raussendorf and Briegel 2002). The interesting feature of these models is that they are able to simulate arbitrary quantum dynamics, including unitary dynamics, using basic non-unitary measurements. The measurements are performed on a pool of highly entangled states and are adaptive, i.e., each measurement is done in a different basis which is calculated classically, given the results of earlier measurements.
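The simplest instance of this idea is the one-qubit "wire" of the cluster-state model: entangling the input with an ancilla and measuring adaptively implements a unitary (here a Hadamard) on the surviving qubit. The following is our own minimal numpy sketch, not code from any of the cited papers:

```python
# One-qubit "wire" of the cluster-state model (our own minimal sketch):
# entangle |psi>|+> with CZ, measure qubit 1 in the X basis, and the
# (corrected) post-measurement state of qubit 2 is H|psi>.
import numpy as np

X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
plus, minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                     # an arbitrary input state

CZ = np.diag([1, 1, 1, -1])
state = CZ @ np.kron(psi, plus)                # entangle |psi> with |+>

for s, basis in enumerate([plus, minus]):      # the two X-basis outcomes
    out = np.kron(basis.conj(), np.eye(2)) @ state  # unnormalised qubit-2 state
    out /= np.linalg.norm(out)
    corrected = np.linalg.matrix_power(X, s) @ out  # adaptive correction X^s
    print(s, round(abs(np.vdot(H @ psi, corrected)), 6))  # overlap 1.0 both times
```

Note how the classically computed correction \(X^s\), conditioned on the measurement outcome \(s\), is exactly the adaptivity described above.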
Exotic models such as these might seem redundant, especially since they have been shown to be polynomially equivalent to the standard circuit model in terms of computational complexity (Raussendorf, Browne, and Briegel 2003). Their merit, however, lies in the foundational lessons they drive home: with these models the separation between the classical (i.e., the calculation of the next measurement basis) and quantum (i.e., measurements on the entangled qubits) parts of the computation becomes evident, hence it may be easier to pinpoint the quantum resources that are responsible for the putative "speed-up".

3.4 Topological-Quantum-Field-Theory (TQFT) Algorithms

Another exotic model for quantum computing which has attracted a lot of attention, especially from Microsoft Inc. (Freedman 1998), is the Topological Quantum Field Theory model. In contrast to the easily visualisable circuit model, this model resides in the most abstract reaches of theoretical physics. The exotic physical systems TQFT describes are topological states of matter. That the formalism of TQFT can be applied to computational problems was shown by Witten (1989), and the idea was later developed by others. The model has been proved to be efficiently simulatable on a standard quantum computer (Freedman, Kitaev, and Wang 2002; Aharonov, Jones, and Landau 2009). Its main merit lies in its high tolerance to the errors which are inevitably introduced in the implementation of a large scale quantum computer (see below). Topology is especially helpful here because many global topological properties are, by definition, invariant under deformation, and given that most errors are local, information encoded in topological properties is robust against them.

4. Realisations

The quantum computer might be the theoretician's dream, but as far as experimentalists are concerned, its realisation is a nightmare. The problem is that while some prototypes of the simplest elements needed to build a quantum computer have already been implemented in the laboratory, it is still an open question how to combine these elements into scalable systems (see Van Meter and Horsman 2013). Shor's algorithm may break RSA encryption, but it will remain an anecdote if the largest number that it can factor is 15. In the circuit-based model the problem is to achieve a scalable quantum system that at the same time will allow one to (1) robustly represent quantum information with (2) a time to decoherence significantly longer than the length of the computation, (3) implement a universal family of unitary transformations, (4) prepare a fiducial initial state, and (5) measure the output result (these are DiVincenzo's (2000) five criteria). Alternative paradigms may trade some of these requirements for others, but the gist will remain the same, i.e., one would have to achieve control of one's quantum system in such a way that the system will remain "quantum" albeit macroscopic, or at least mesoscopic, in its dimensions. In order to deal with these challenges, several ingenious solutions have been devised, including quantum error correction codes and fault tolerant computation (Shor 1995; Shor and DiVincenzo 1996; Aharonov and Ben-Or 1997; Raussendorf, Harrington, and Goyal 2008; Horsman et al. 2012; De Beaudrap and Horsman 2019), which can dramatically reduce the spread of errors during a "noisy" quantum computation.
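The basic idea behind quantum error correction can be conveyed by its simplest instance, the three-qubit bit-flip repetition code. The sketch below (our own illustration, not drawn from the cited papers) encodes a qubit redundantly, applies a random single bit-flip error, and diagnoses and undoes it using two parity checks:

```python
# The 3-qubit bit-flip repetition code (our own sketch): a|000> + b|111>
# protects against a single X error, diagnosed by two parity checks.
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.normal(size=2); norm = np.hypot(a, b); a, b = a / norm, b / norm

state = np.zeros(8); state[0b000] = a; state[0b111] = b   # encoded state

# A bit-flip (X) error on one randomly chosen qubit permutes basis states.
k = rng.integers(3)
state = state[np.array([i ^ (1 << (2 - k)) for i in range(8)])]

# Syndrome: parities of (qubit0, qubit1) and (qubit1, qubit2). Both branches
# of the superposition give the same syndrome, so reading it off any basis
# state in the support does not disturb the encoded amplitudes.
i = int(np.flatnonzero(state)[0])
bits = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
s01, s12 = bits[0] ^ bits[1], bits[1] ^ bits[2]
flipped = {(1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]

state = state[np.array([j ^ (1 << (2 - flipped)) for j in range(8)])]  # undo it
print(flipped == k, state[0b000], state[0b111])   # True, a, b: error corrected
```

Full fault tolerance requires far more than this, e.g. codes that also handle phase errors and noisy syndrome measurements; but the sketch shows the core mechanism: errors are diagnosed and undone without ever measuring, and hence collapsing, the encoded amplitudes.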
An important criticism of these active error correction schemes, however, is that they are devised for a very unrealistic noise model, one which treats the computer as quantum and the environment as classical (Alicki, Lidar, and Zanardi 2006). Once a more realistic noise model is allowed, the feasibility of large scale, fault tolerant and computationally superior quantum computers is less clear (Hagar 2009; Tabakin 2017). In the near term, a promising avenue for realising a quantum advantage in a limited number of problem domains is the Noisy Intermediate-Scale Quantum (NISQ) paradigm (Preskill 2018). The NISQ paradigm does not employ any error correction mechanisms (postponing the problem of implementing scalable versions of these to the future) but rather focuses on building computational components, and on tackling computational problems, which are inherently more resilient to noise. These include, for example, certain classes of optimisation problems, quantum semidefinite programming, and digital quantum simulation (Tacchino et al. 2019). A caveat here is that the more resilient to noise a circuit is, the more classically it behaves. Nevertheless, research into NISQ computing is believed to be on track to realise a 50–100 qubit machine, large enough to achieve a quantum advantage over known classical alternatives for the envisioned applications, within the next 5–10 years. As mentioned, one of the envisioned applications of NISQ computing is digital quantum simulation (i.e. simulation using a gate-based programmable quantum computer). There is an older tradition of analog quantum simulation, however, wherein one utilises a quantum system whose dynamics resemble the dynamics of a particular target system of interest. Although it is believed that digital quantum simulation will eventually supersede it, the field of analog quantum simulation has progressed substantially in the years since it was first proposed, and analog quantum simulators have already been used to study quantum dynamics in regimes thought to be beyond the reach of classical simulators (see, e.g., Bernien et al. (2017); for further discussion of the philosophical issues involved, see Hangleiter, Carolan, and Thébault (2017)).

5. Philosophical Questions

5.1 What is Quantum in Quantum Computing?

Notwithstanding the excitement around the discovery of Shor's algorithm, and putting aside the presently insurmountable problem of practically realising and implementing a large scale quantum computer, a crucial theoretical question remains open: What physical resources are responsible for quantum computing's putative power? Put another way, what are the essential features of quantum mechanics that would in principle allow one to solve problems or simulate certain systems more efficiently than on a classical computer? A number of candidates have been put forward. Fortnow (2003) posits interference as the key, though it has been suggested that this is not truly a quantum phenomenon (Spekkens 2007). Jozsa (1997) and many others point to entanglement, although there are purported counter-examples to this thesis (see, e.g., Linden and Popescu (1999), Gottesman (1999), Biham et al. (2004), and finally see Cuffaro (2017) for a philosophical discussion). Howard et al. (2014) appeal to quantum contextuality. For Bub (2010) the answer lies in the logical structure of quantum mechanics (cf. Pitowsky 1989).
Duwell (2018) argues for quantum parallelism, and for Deutsch (1997) and Hewitt-Horsman (2009) it is "parallel worlds" which are the resource. Speculative as it may seem, the question "what is quantum in quantum computing?" has significant practical consequences. One of the embarrassments of quantum computing is the paucity of quantum algorithms which have actually been discovered. It is almost certain that one of the reasons for this is the lack of a full understanding of what makes a quantum computer quantum (see also Preskill (1998) and Shor (2004)). As an ultimate answer to this question one would like to have something similar to Bell's famous theorem, i.e., a succinct, crisp statement of the fundamental difference between quantum and classical systems. Quantum computers, unfortunately, do not seem to allow such a simple characterisation (see Cuffaro 2017, 2018a). Quantum computing skeptics (Levin 2003) happily capitalise on this puzzle: if no one knows why quantum computers are superior to classical ones, how can we be sure that they are, indeed, superior?

5.1.1 The Debate over Parallelism and Many Worlds

The answer that has tended to dominate the popular literature on quantum computing is motivated by evolutions such as: \[\tag{2} \Sigma_{x} \lvert x\rangle \lvert 0\rangle \rightarrow \Sigma_{x} \lvert x\rangle \lvert f(x)\rangle,\] which were common to many early quantum algorithms. Note the appearance that \(f\) is evaluated for each of its possible inputs simultaneously. The idea that we should take this at face value, i.e. that quantum computers actually do compute a function for many different input values simultaneously, is what Duwell (2018) calls the Quantum Parallelism Thesis (QPT). For Deutsch, who accepts it as true, the only reasonable explanation for the QPT is that the many worlds interpretation (MWI) of quantum mechanics is also true. For Deutsch, a quantum computer in superposition, like any other quantum system, exists in some sense in many classical universes simultaneously. These provide the physical arena within which the computer effects its parallel computations. This conclusion is defended by Hewitt-Horsman (2009) and by Wallace (2012). Wallace notes, however, that the QPT (and hence the explanatory need for many worlds) may not be true of all or even most quantum algorithms. For Steane (2003), in contrast, quantum computers are not well described in terms of many worlds or even quantum parallelism. Among other things, Steane argues that the motivation for the QPT is at least partly due to misleading aspects of the standard quantum formalism. Additionally, comparing the information actually produced by quantum and classical algorithms (state collapse entails that only one evaluation instance in (2) is ever accessible, while a classical computer must actually produce every instance) suggests that quantum algorithms perform not more but fewer, cleverer, computations than classical algorithms (see also section 5.1.2 below). Another critic is Duwell, who (contra Steane) accepts the QPT (Duwell 2018), but nevertheless denies that it uniquely supports the MWI (Duwell 2007). Considering the phase relations between the terms in a superposition such as (2) is crucially important when evaluating a quantum algorithm's computational efficiency. Phase relations, however, are global properties of a state. Thus a quantum computation, Duwell argues, does not consist solely of local parallel computations.
But in this case, the QPT does not uniquely support the MWI over other explanations. Defending the MWI, Hewitt-Horsman (2009) argues (contra Steane) that, on the MWI, it is false to say that quantum computers do not actually generate each of the evaluation instances represented in (2): such information could in principle be extracted, given sufficiently advanced technology. Further, Hewitt-Horsman emphasises that the MWI is not motivated simply by a suggestive mathematical representation. Worlds on the MWI are defined according to their explanatory usefulness, manifested in particular by their stability and independence over the time scales relevant to the computation. Wallace (2012) argues similarly. Cuffaro (2012) and Aaronson (2013b) point out that the Many Worlds Explanation of Quantum Computing (MWQC) and the MWI are not actually identical. The latter employs decoherence as a criterion for distinguishing macroscopic worlds from one another. Quantum circuit model algorithms, however, utilise coherent superpositions. To distinguish computational worlds, therefore, one must weaken the decoherence criterion, but Cuffaro argues that this move is ad hoc. Further, Cuffaro argues that the MWQC is for all practical purposes incompatible with measurement-based computation, for even granting a weakened world identification criterion, there is no natural way in this model to identify worlds that are stable and independent in the way required.

5.1.2 The Elusive Nature of Speed-Up

Even if we could rule out the MWQC, the problem of finding the physical resource(s) responsible for quantum "speed-up" would remain a difficult one. Consider a solution of a decision problem, say satisfiability, with a quantum algorithm based on the circuit model. What we are given here as input is a proposition in the propositional calculus and we have to decide whether it has a satisfying truth assignment. As Pitowsky (2002) shows, the quantum algorithm appears to solve this problem by testing all \(2^{n}\) assignments "at once", as suggested by (2), yet this quantum "miracle" helps us very little since, as previously mentioned, any measurement performed on the output state collapses it, and if there is one possible truth assignment that solves this decision problem, the probability of retrieving it is \(2^{-n}\), just as in the case of a classical probabilistic Turing machine which guesses the solution and then checks it. Pitowsky's conclusion (echoed, as we saw, by Steane (2003) and Duwell (2007)) is that in order to enhance computation with quantum mechanics we must construct "clever" superpositions that increase the probability of successfully retrieving the result far more than that of a pure guess. Shor's algorithm and the class of algorithms that evaluate a global property of a function (this class is known as the hidden subgroup class of algorithms) are (so far) the only examples of both a construction of such "clever" superpositions and a retrieval of the solution in polynomial time. The quantum adiabatic algorithm may give us similar results, contingent upon the existence of an energy gap that decreases polynomially with the input. This question also raises important issues about how to measure the complexity of a given quantum algorithm. The answer differs, of course, according to the particular model at hand.
In the adiabatic model, for example, one needs only to estimate the behaviour of the energy gap and its relation to the input size (encoded in the number of degrees of freedom of the Hamiltonian of the system). In the measurement-based model, one counts the number of measurements needed to reveal the solution that is hidden in the input cluster state (since the preparation of the cluster state is a polynomial process, it does not add to the complexity of the computation). But in the circuit model things are not as straightforward. After all, the whole of a quantum-circuit-based computation can simply be represented as a single unitary transformation from the input state to the output state. This feature of the quantum circuit model supports the conjecture that the power of quantum computers, if any, lies not in quantum dynamics (i.e., in the Schrödinger equation), but rather in the quantum state, or the wave function. Another argument in favour of this conjecture is that the Hilbert subspace "visited" during a quantum computational process is, at any moment, a linear space spanned by all of the vectors in the total Hilbert space which have been created by the computational process up to that moment. This subspace is spanned by a polynomial number of vectors, and is thus at most a polynomial subspace of the total Hilbert space. A classical simulation of a quantum evolution on a Hilbert space with a polynomial number of dimensions (that is, a Hilbert space spanned by a number of basis vectors which is polynomial in the number of qubits involved in the computation), however, can be carried out in a polynomial number of classical computations. Were quantum dynamics the sole ingredient responsible for the efficiency of quantum computing, the latter could thus be mimicked in a polynomial number of steps with a classical computer (see, e.g., Vidal 2003). This is not to say that quantum computation is no more powerful than classical computation. The key point, of course, is that one does not end a quantum computation with an arbitrary superposition, but aims for a very special, "clever" state, to use Pitowsky's term. Quantum computations cannot always be mimicked with a classical computer because the characterisation of the computational subspace of certain quantum states is difficult, and it seems that these special, "clever", quantum states cannot be classically represented as vectors derivable via a quantum computation in an optimal basis, or at least that one cannot do so in such a way that would allow one to calculate the outcome of the final measurement made on these states. Consequently, in the quantum circuit model one should count the number of computational steps in the computation not by counting the number of transformations of the state, but by counting the number of one- or two-qubit local transformations that are required to create the "clever" superposition that ensures the desired "speed-up". (Note that Shor's algorithm, for example, involves three major steps in this context: first, one creates the "clever" entangled state with a set of unitary transformations, so that the result of the computation, a global property of a function, is now "hidden" in this state; second, in order to retrieve this result, one projects it on a subspace of the Hilbert space; and finally one performs another set of unitary transformations in order to make the result measurable in the original computational basis.
All these steps count as computational steps as far as the efficiency of the algorithm is concerned. See also Bub (2006b).) The trick is to perform these local one- or two-qubit transformations in polynomial time, and it is likely that it is here that the physical power of quantum computing is to be found.

5.2 Experimental Metaphysics?

The quantum information revolution has prompted several physicists and philosophers to claim that new insights can be gained from the rising new science into conceptual problems in the foundations of quantum mechanics (see, e.g., Bub (2016), Chiribella and Spekkens (2016); for responses and commentaries, see, e.g., Myrvold (2010), Timpson (2013), Felline (2016), Cuffaro (forthcoming), Duwell (forthcoming), Felline (forthcoming-a), Henderson (forthcoming), Koberinski and Müller (2018)). Yet while one of the most famous foundational problems in quantum mechanics, namely the quantum measurement problem, remains unsolved even within quantum information theory (see Hagar (2003), Hagar and Hemmo (2006), and Felline (forthcoming-b) for a critique of the quantum information theoretic approach to the foundations of quantum mechanics and the role of the quantum measurement problem in this context), some quantum information theorists dismiss it as a philosophical quibble (Fuchs 2002). Indeed, in quantum information theory the concept of "measurement" is taken as a primitive, a "black box" which remains unanalysed. The measurement problem itself, furthermore, is regarded as a misunderstanding of quantum theory. But recent advances in the realisation of a large scale quantum computer may eventually prove quantum information theorists wrong: rather than supporting the dismissal of the quantum measurement problem, these advances may surprisingly lead to its empirical solution. The speculative idea is the following. As it turns out, collapse theories (one class of alternatives to quantum theory which aim to solve the measurement problem) modify Schrödinger's equation and give different predictions from quantum theory in certain specific circumstances. These circumstances can be realised, moreover, if decoherence effects can be suppressed (Bassi, Adler, and Ippoliti 2004). Now one of the most difficult obstacles that await the construction of a large scale quantum computer is its robustness against decoherence effects (Unruh 1995). It thus appears that the technological capabilities required for the realisation of a large scale quantum computer are potentially related to those upon which the distinction between "true" and "false" collapse (Pearle 1997), i.e., between collapse theories and environmentally induced decoherence, is contingent. Consequently the physical realisation of a large scale quantum computer, if it were of the right architecture, could potentially shed light on one of the long-standing conceptual problems in the foundations of the theory, and if so this would serve as yet another example of experimental metaphysics (the term was coined by Abner Shimony to designate the chain of events that led from the EPR argument via Bell's theorem to Aspect's experiments). Note, however, that as just mentioned, one would need to consider the computer's architecture before drawing any metaphysical conclusions. The computer architecture is important because while dynamical collapse theories tend to collapse superpositions involving the positions of macroscopic quantities of mass, they tend not to collapse large complicated superpositions of photon polarisation or spin.
5.3 Quantum Causality

Is quantum mechanics compatible with the principle of causality? This is an old question, indeed one of the very first interpretational questions confronted by the early commentators on the theory (Hermann 2017; Schlick 1961, 1962). The contemporary literature continues to exhibit considerable skepticism regarding the prospects of explaining quantum phenomena causally (Hausman and Woodward 1999; Van Fraassen 1982; Woodward 2007), or at any rate locally causally, especially in the wake of Bell's theorem (Myrvold 2016). As a result of some fascinating theoretical work (Allen, Barrett, Horsman, Lee, and Spekkens 2017; Costa and Shrapnel 2016; Shrapnel 2017), however, it seems that the prospects for a locally causal explanation of quantum phenomena are not quite as hopeless as they may initially have seemed, at least in the context of an interventionist theory of causation. This is not to say that decades of physical and philosophical investigations into the consequences of Bell's theorem have all been mistaken, of course. For one thing, the interventionist frameworks utilised in this new work are operationalist, thus the relevance of this work to so-called hidden variables theories of quantum mechanics is unclear. Second, the interventionist frameworks utilised are not classical, and neither is the kind of causality they explicate. Indeed, in regard to the latter point, it is arguably the key insight emerging from this work that the frameworks previously utilised for analysing interventionist causation in the quantum context are inappropriate to that context. In contrast to a classical interventionist framework in which events are thought of as primitive (i.e. as not further analysable), events in these generalised frameworks are characterised as processes with associated inputs and outputs. Specifically, one characterises quantum events using a concept from quantum computation and information theory called a quantum channel. And within this generalised interventionist framework, causal models of quantum phenomena can be given which do not need to posit non-local causal influences, and which satisfy certain other desiderata typically required in a causal model (in particular, that such a model respect the causal Markov condition and that it not require "fine-tuning"; see Shrapnel (2017)).

5.4 (Quantum) Computational Perspectives on Physical Science

Physics is traditionally conceived as a primarily "theoretical" activity, in the sense that it is generally thought to be the goal of physics to tell us, even if only indirectly (Fuchs (2002), pp. 5–6; Fuchs (2010), pp. 22–3), what the world is like independently of ourselves. This is not the case with every science. Chemistry, for example, is arguably best thought of as a "practically" oriented discipline concerned with the ways in which systems can be manipulated for particular purposes (Bensaude-Vincent 2009). Even within physics, there are sub-disciplines which are best construed in this way (Myrvold 2011; Wallace 2014; Ladyman 2018), and indeed some (though at present these are still a minority) have even sought to (re-)characterise physics as a whole in something like this way, i.e. as a science of possible, as opposed to impossible, transformations (Deutsch 2013).
Elaborating upon ideas which one can glean from Pitowsky's work (1990, 1996, 2002), Cuffaro argues at length that quantum computation and information theory (QCIT) are practical sciences in this sense, as opposed to the "theoretical sciences" exemplified by physics under its traditional characterisation, and further that recognising this distinction illuminates both areas of activity. On the one hand (Cuffaro 2017), practical investigators attempting to isolate and/or quantify the computational resources made available by quantum computers are in danger of conceptual confusion if they are not cognisant of the differences between practical and traditional sciences. On the other hand (Cuffaro 2018a), one should be wary of the significance of classical computer simulations of quantum mechanical phenomena for the purposes of a foundational analysis of the latter. For example, certain mathematical results can legitimately be thought of as no-go theorems for the purposes of foundational analysis, and yet are not really relevant for the purpose of characterising the class of efficiently simulable quantum phenomena.

5.5 The Church-Turing Thesis and Deutsch's Principle

The Church-Turing thesis, which asserts that every function naturally regarded as computable is Turing-computable, is argued by Deutsch to presuppose a physical principle, namely that:

[DP]: Every finitely realisable physical system can be perfectly simulated by a universal model computing machine operating by finite means. (Deutsch 1985)

Since no machine operating by finite means can simulate classical physics' continuity of states and dynamics, Deutsch argues that DP is false in a classical world. He argues that it is true for quantum physics, however, owing to the existence of the universal quantum Turing machine he introduces in the same paper, which thus proves both DP and the Church-Turing thesis it underlies to be sound. This idea, that the Church-Turing thesis requires a physical grounding, is set into historical context by Lupacchini (2018), who traces its roots in the thought of Gödel, Post, and Gandy. It is criticised by Timpson (2013), who views it as methodologically fruitful but as nevertheless resting on a confusion regarding the meaning of the Church-Turing thesis, which in itself has nothing to do with physics.

5.6 (Quantum) Computation and Scientific Explanation

In the general philosophy of science literature on scientific explanation there is a distinction between so-called "how-actually" and "how-possibly" explanation, where the former aims to convey how a particular outcome actually came about, and the latter aims to convey how the occurrence of an event can have been possible. That how-actually explanation actually explains is uncontroversial, but the merit (if any) of how-possibly explanation has been debated. While some view how-possibly explanation as genuinely explanatory, others have argued that how-possibly "explanation" is better thought of as, at best, a merely heuristically useful exercise. It turns out that the science of quantum computation is able to illuminate this debate. Cuffaro (2015) argues that when one examines the question of the source of quantum "speed-up", one sees that to answer this question is to compare algorithmic processes of various kinds, and in so doing to describe the possibility spaces associated with these processes. By doing so one explains how it is possible for one process to outperform its rival.
Further, Cuffaro argues that in examples like this, once one has answered the how-possibly question, nothing is actually gained by subsequently asking a how-actually question.

5.7 Are There Computational Kinds?

Finally, another philosophical implication of the realisation of a large scale quantum computer regards the long-standing debate in the philosophy of mind on the autonomy of computational theories of the mind (Fodor 1974). In the shift from strong to weak artificial intelligence, the advocates of this view tried to impose constraints on computer programs before they could qualify as theories of cognitive science (Pylyshyn 1984). These constraints include, for example, the nature of physical realisations of symbols and the relations between abstract symbolic computations and the physical causal processes that execute them. The search for the computational feature of these theories, i.e., for what makes them computational theories of the mind, involved isolating some features of the computer as such. In other words, the advocates of weak AI were looking for computational properties, or kinds, that would be machine-independent, at least in the sense that they would not be associated with the physical constitution of the computer, nor with the specific machine model that was being used. These features were thought to be instrumental in debates within cognitive science, e.g., the debate between functionalism and connectionism (Fodor and Pylyshyn 1988). Note, however, that once the physical Church-Turing thesis is violated, arguably some computational notions cease to be autonomous. In other words, given that quantum computers may be able to efficiently solve classically intractable problems, and hence re-describe the abstract space of computational complexity (Bernstein and Vazirani 1997), computational concepts and even computational kinds such as "an efficient algorithm" or "the class NP" become machine-dependent, and recourse to "hardware" becomes inevitable in any analysis thereof (Hagar 2007). Advances in quantum computing may thus militate against the functionalist view about the unphysical character of the types and properties that are used in computer science. In fact, these types and categories may become physical as a result of this natural development in physics (e.g., quantum computing, chaos theory). Consequently, efficient quantum algorithms may also serve as counterexamples to a priori arguments against reductionism (Pitowsky 1996).

Bibliography

• Aaronson, S., 2013a. Quantum Computing Since Democritus, New York: Cambridge University Press.
• –––, 2013b. "Why Philosophers Should Care About Computational Complexity," in Computability: Turing, Gödel, Church, and Beyond, edited by B. J. Copeland, C. J. Posy, and O. Shagrir, 261–327, Cambridge, MA: MIT Press.
• Adamyan, V. A., C. S. Calude, and B. S. Pavlov, 2004. "Transcending the Limits of Turing Computability," in Quantum Information and Complexity, 119–37.
• Adleman, L. M., 1994. "Molecular Computation of Solutions to Combinatorial Problems," Science, 266: 1021–4.
• Agrawal, M., N. Kayal, and N. Saxena, 2004. "PRIMES Is in P," Annals of Mathematics, 160: 781–93.
• Aharonov, D., 1999. "Quantum Computation," in Annual Reviews of Computational Physics VI, 259–346, Singapore: World Scientific.
• Aharonov, D., and M. Ben-Or, 1997. "Fault-Tolerant Computation with Constant Error," in Proceedings of the Twenty-Ninth ACM Symposium on the Theory of Computing, Vol. 176.
• Aharonov, D., V. Jones, and Z. Landau, 2009. "A Polynomial Quantum Algorithm for Approximating the Jones Polynomial," Algorithmica, 55: 395–421.
• Aharonov, D., W. Van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev, 2008. "Adiabatic Quantum Computation Is Equivalent to Standard Quantum Computation," SIAM Review, 50: 755–87.
• Albert, D., 1983. "On Quantum Mechanical Automata," Phys. Lett. A, 98: 249.
• Alicki, R., D. Lidar, and P. Zanardi, 2006. "Internal Consistency of Fault Tolerant Quantum Error Correction," Phys. Rev. A, 73: 052311.
• Alléaume, R., C. Branciard, J. Bouda, T. Debuisschert, M. Dianati, N. Gisin, M. Godfrey, et al., 2014. "Using Quantum Key Distribution for Cryptographic Purposes: A Survey," Theoretical Computer Science, 560: 62–81.
• Allen, J. A., J. Barrett, D. C. Horsman, C. M. Lee, and R. W. Spekkens, 2017. "Quantum Common Causes and Quantum Causal Models," Physical Review X, 7: 031021.
• Andréka, H., J. X. Madarász, I. Németi, P. Németi, and G. Székely, 2018. "Relativistic Computation," in Cuffaro and Fletcher 2018, 195–218.
• Arora, S., and B. Barak, 2009. Computational Complexity: A Modern Approach, Cambridge: Cambridge University Press.
• Barenco, A., C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin, and H. Weinfurter, 1995. "Elementary Gates for Quantum Computation," Phys. Rev. A, 52: 3457–67.
• Bassi, A., S. L. Adler, and E. Ippoliti, 2004. "Towards Quantum Superpositions of a Mirror: Stochastic Collapse Analysis," Phys. Rev. Lett., 94: 030401.
• Bennett, C. H., E. Bernstein, G. Brassard, and U. Vazirani, 1997. "Strengths and Weaknesses of Quantum Computing," SIAM Journal on Computing, 26: 1510–23.
• Bensaude-Vincent, B., 2009. "The Chemists' Style of Thinking," Berichte zur Wissenschaftsgeschichte, 32: 365–78.
• Bernien, H., S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, et al., 2017. "Probing Many-Body Dynamics on a 51-Atom Quantum Simulator," Nature, 551: 579.
• Bernstein, E., and U. Vazirani, 1997. "Quantum Complexity Theory," SIAM Journal on Computing, 26: 1411–73.
• Biham, E., G. Brassard, D. Kenigsberg, and T. Mor, 2004. "Quantum Computing Without Entanglement," Theoretical Computer Science, 320: 15–33.
• Bub, J., 2006a. "Quantum Computation from a Quantum Logical Perspective," arXiv:quant-ph/0605243.
• –––, 2006b. "Quantum Information and Computing," in Handbook of the Philosophy of Science, Philosophy of Physics, Part A, edited by J. Butterfield and J. Earman, 555–660, Amsterdam: Elsevier.
• –––, 2010. "Quantum Computation: Where Does the Speed-up Come from?" in Philosophy of Quantum Information and Entanglement, edited by A. Bokulich and G. Jaeger, 231–46, Cambridge: Cambridge University Press.
• –––, 2016. Bananaworld: Quantum Mechanics for Primates, Oxford: Oxford University Press.
• Chiribella, G., and R. W. Spekkens, 2016. Quantum Theory: Informational Foundations and Foils, Dordrecht: Springer.
• Cirac, J. I., and P. Zoller, 1995. "Quantum Computations with Cold Trapped Ions," Phys. Rev. Lett., 74: 4091–4.
• Cobham, A., 1965. "The Intrinsic Computational Difficulty of Functions," in Logic, Methodology and Philosophy of Science: Proceedings of the 1964 International Congress, edited by Y. Bar-Hillel, 24–30, Amsterdam: North-Holland.
• Cook, S. A., 1971. "The Complexity of Theorem-Proving Procedures," in Proceedings of the Third Annual ACM Symposium on Theory of Computing, 151–58, New York: ACM.
• Copeland, B. J., 2002. "Hypercomputation," Minds and Machines, 12: 461–502.
• –––, 2018. "Zuse's Thesis, Gandy's Thesis, and Penrose's Thesis," in Cuffaro and Fletcher 2018, 39–59.
• Costa, F., and S. Shrapnel, 2016. "Quantum Causal Modelling," New Journal of Physics, 18: 063032.
• Cuffaro, M. E., 2012. "Many Worlds, the Cluster-State Quantum Computer, and the Problem of the Preferred Basis," Studies in History and Philosophy of Modern Physics, 43: 35–42.
• –––, 2015. "How-Possibly Explanations in (Quantum) Computer Science," Philosophy of Science, 82: 737–48.
• –––, 2017. "On the Significance of the Gottesman-Knill Theorem," The British Journal for the Philosophy of Science, 68: 91–121.
• –––, 2018a. "Reconsidering No-Go-Theorems from a Practical Perspective," The British Journal for the Philosophy of Science, 69: 633–55.
• –––, 2018b. "Universality, Invariance, and the Foundations of Computational Complexity in the Light of the Quantum Computer," in Technology and Mathematics: Philosophical and Historical Investigations, edited by S. O. Hansson, 253–82, Cham: Springer.
• –––, forthcoming. "Information Causality, the Tsirelson Bound, and the 'Being-Thus' of Things," Studies in History and Philosophy of Modern Physics, first online 13 November 2018; doi:10.1016/j.shpsb.2018.05.001
• Cuffaro, M. E., and S. C. Fletcher (eds.), 2018. Physical Perspectives on Computation, Computational Perspectives on Physics, Cambridge: Cambridge University Press.
• Davis, M., 1958. The Undecidable, New York: Dover.
• –––, 2003. "The Myth of Hypercomputation," in Alan Turing, Life and Legacy of a Great Thinker, edited by C. Teuscher, 195–212, New York: Springer.
• Deutsch, D., 1985. "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer," Proc. Roy. Soc. Lond. A, 400: 97–117.
• –––, 1989. "Quantum Computational Networks," Proc. Roy. Soc. Lond. A, 425: 73–90.
• –––, 1997. The Fabric of Reality, New York: Penguin.
• –––, 2013. "The Philosophy of Constructor Theory," Synthese, 190: 4331–59.
• Deutsch, D., and R. Jozsa, 1992. "Rapid Solution of Problems by Quantum Computation," Proc. Roy. Soc. Lond. A, 439: 553–58.
• Dewdney, A. K., 1984. "On the Spaghetti Computer and Other Analog Gadgets for Problem Solving," Scientific American, 250: 19–26.
• DiVincenzo, D., 1995. "Two-Bit Gates Are Universal for Quantum Computation," Phys. Rev. A, 51: 1015–22.
• –––, 2000. "The Physical Implementation of Quantum Computation," Fortschritte der Physik, 48: 771–83.
• Duwell, A., 2007. "The Many-Worlds Interpretation and Quantum Computation," Philosophy of Science, 74: 1007–18.
• –––, 2018. "How to Make Orthogonal Positions Parallel: Revisiting the Quantum Parallelism Thesis," in Cuffaro and Fletcher 2018, 83–102.
• –––, forthcoming. "Understanding Quantum Phenomena and Quantum Theories," Studies in History and Philosophy of Modern Physics, first online 27 July 2018; doi:10.1016/j.shpsb.2018.06.002
• Edmonds, J., 1965. "Paths, Trees, and Flowers," Canadian Journal of Mathematics, 17: 449–67.
• Ekert, A., and R. Jozsa, 1996. "Quantum Computation and Shor's Factoring Algorithm," Reviews of Modern Physics, 68: 733–53.
• Farhi, E., J. Goldstone, S. Gutmann, and M. Sipser, 2000. "Quantum Computation by Adiabatic Evolution," arXiv:quant-ph/0001106.
• Farhi, E., J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, 2001. "A Quantum Adiabatic Evolution Algorithm Applied to Random Instances of an NP-Complete Problem," Science, 292: 472–75.
• Felline, L., 2016. "It's a Matter of Principle: Scientific Explanation in Information-Theoretic Reconstructions of Quantum Theory," Dialectica, 70: 549–75.
• –––, forthcoming-a. "Quantum Theory Is Not Only About Information," Studies in History and Philosophy of Modern Physics, first online 22 June 2018; doi:10.1016/j.shpsb.2018.03.003
• –––, forthcoming-b. "The Measurement Problem and Two Dogmas About Quantum Mechanics," in Quantum, Probability, Logic: Itamar Pitowsky's Work and Influence, edited by M. Hemmo and O. Shenker, Dordrecht: Springer.
• Feynman, R. P., 1982. "Simulating Physics with Computers," International Journal of Theoretical Physics, 21: 467–88.
• Fodor, J., 1974. "Special Sciences," Synthese, 28: 97–115.
• Fodor, J., and Z. Pylyshyn, 1988. "Connectionism and Cognitive Architecture: A Critical Analysis," Cognition, 28: 3–71.
• Fortnow, L., 1994. "The Role of Relativization in Complexity Theory," Bulletin of the European Association for Theoretical Computer Science, 52: 229–44.
• –––, 2003. "One Complexity Theorist's View of Quantum Computing," Theoretical Computer Science, 292: 597–610.
• Freedman, M. H., 1998. "P/NP and the Quantum Field Computer," Proc. Natl. Acad. Sci., 95: 98–101.
• Freedman, M. H., A. Kitaev, and Z. Wang, 2002. "Simulation of Topological Field Theories by Quantum Computers," Communications in Mathematical Physics, 227: 587–603.
• Gandy, R., 1980. "Church's Thesis and Principles for Mechanisms," in The Kleene Symposium (Studies in Logic and the Foundations of Mathematics), edited by J. Barwise, H. J. Keisler, and K. Kunen, 123–48, Amsterdam: Elsevier.
• Garey, M. R., and D. S. Johnson, 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness, New York: W. H. Freeman.
• Giblin, P., 1993. Primes and Programming, Cambridge: Cambridge University Press.
• Gödel, K., 1956. "Private Letter to John von Neumann, 20 March 1956," translated by A. S. Wensinger in M. Sipser, "The History and Status of the P Versus NP Question," in Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing, New York: ACM, 1992, 603–618.
• Gottesman, D., 1999. "The Heisenberg Representation of Quantum Computers," in Group22: Proceedings of the XXII International Colloquium on Group Theoretical Methods in Physics, edited by S. P. Corney, R. Delbourgo, and P. D. Jarvis, 32–43, Cambridge, MA: International Press.
• Gottesman, D., and I. Chuang, 1999. "Demonstrating the Viability of Universal Quantum Computation Using Teleportation and Single-Qubit Operations," Nature, 402: 390–93.
• Grover, L. K., 1996. "A Fast Quantum Mechanical Algorithm for Database Search," in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing (STOC '96), 212–19, New York: Association for Computing Machinery.
• Hagar, A., 2003. "A Philosopher Looks at Quantum Information Theory," Philosophy of Science, 70: 752–75.
• –––, 2007. "Quantum Algorithms: Philosophical Lessons," Minds and Machines, 17: 233–47.
• –––, 2009. "Active Fault-Tolerant Quantum Error Correction: The Curse of the Open System," Philosophy of Science, 76: 506–35.
• –––, 2016. "Ed Fredkin and the Physics of Information: An Inside Story of an Outsider Scientist," Information and Culture, 51: 419–43.
• Hagar, A., and M. Hemmo, 2006. "Explaining the Unobserved: Why Quantum Mechanics Ain't Only About Information," Foundations of Physics, 36: 1295–1324.
• Hagar, A., and A. Korolev, 2007. "Quantum Hypercomputation – Hype or Computation?" Philosophy of Science, 74: 347–63.
• Haroche, S., and J. M. Raimond, 1996. "Quantum Computing: Dream or Nightmare?" Physics Today, 49(8): 51–52.
• Hartmanis, J., and R. E. Stearns, 1965. "On the Computational Complexity of Algorithms," Transactions of the American Mathematical Society, 117: 285–306.
• Hausman, D. M., and J. Woodward, 1999. "Independence, Invariance, and the Causal Markov Condition," The British Journal for the Philosophy of Science, 50: 521–83.
• Henderson, L., forthcoming. "Quantum Reaxiomatisations and Information-Theoretic Interpretations of Quantum Theory," Studies in History and Philosophy of Modern Physics, first online 9 July 2018; doi:10.1016/j.shpsb.2018.06.003
• Hermann, G., 2017. "Natural-Philosophical Foundations of Quantum Mechanics (1935)," in Grete Hermann: Between Physics and Philosophy, edited by E. Crull and G. Bacciagaluppi, translated by E. Crull, 239–78, Dordrecht: Springer.
• Hewitt-Horsman, C., 2009. "An Introduction to Many Worlds in Quantum Computation," Foundations of Physics, 39: 869–902.
• Hogarth, M., 1994. "Non-Turing Computers and Non-Turing Computability," in PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 126–38, Philosophy of Science Association.
• Holevo, A. S., 1973. "Bounds for the Quantity of Information Transmitted by a Quantum Communication Channel," Problemy Peredachi Informatsii, 9: 3–11; English translation in Problems of Information Transmission, 9: 177–83, 1973.
• Horsman, C., A. G. Fowler, S. Devitt, and R. Van Meter, 2012. "Surface Code Quantum Computing by Lattice Surgery," New Journal of Physics, 14: 123011.
• Howard, M., J. Wallman, V. Veitch, and J. Emerson, 2014. "Contextuality Supplies the 'Magic' for Quantum Computation," Nature, 510: 351–55.
• Ingarden, R. S., 1976. "Quantum Information Theory," Rep. Math. Phys., 10: 43–72.
• Jozsa, R., 1997. "Entanglement and Quantum Computation," in The Geometric Universe, edited by S. A. Huggett, L. J. Mason, K. P. Tod, S. T. Tsou, and N. M. J. Woodhouse, Ch. 27, Oxford: Oxford University Press.
• –––, 2006. "An Introduction to Measurement Based Quantum Computation," NATO Science Series, III: Computer and Systems Sciences, Quantum Information Processing – from Theory to Experiment, 199: 137–58.
• Kieu, T. D., 2002. "Quantum Hypercomputability," Minds and Machines, 12: 541–61.
• –––, 2004. "A Reformulation of Hilbert's Tenth Problem Through Quantum Mechanics," Proc. Royal Soc. A, 460: 1535–45.
• Koberinski, A., and M. Müller, 2018. "Quantum Theory as a Principle Theory: Insights from an Information Theoretic Reconstruction," in Cuffaro and Fletcher 2018, 257–80.
• Ladyman, J., 2018. "Intension in the Physics of Computation: Lessons from the Debate About Landauer's Principle," in Cuffaro and Fletcher 2018, 219–39.
• Leung, D. W., 2004. "Quantum Computation by Measurements," International Journal of Quantum Information, 2: 33–43.
• Levin, L., 2003. "Polynomial Time and Extravagant Models," Problems of Information Transmission, 39: 2594–7.
• Linden, N., and S. Popescu, 1999. "Good Dynamics Versus Bad Kinematics: Is Entanglement Needed for Quantum Computation?" Phys. Rev. Lett., 87: 047901.
• Lipton, R., 1995. "Using DNA to Solve NP-Complete Problems," Science, 268: 542–45.
• Lupacchini, R., 2018. "Church's Thesis, Turing's Limits, and Deutsch's Principle," in Cuffaro and Fletcher 2018, 60–82.
• Manin, Y., 1980. Computable and Uncomputable, Moscow: Sovetskoye Radio.
• Mermin, N. D., 2007. Quantum Computer Science: An Introduction, Cambridge: Cambridge University Press.
• Messiah, A., 1961. Quantum Mechanics, Vol. II, New York: Interscience Publishers.
• Moore, C., 1990. "Unpredictability and Undecidability in Dynamical Systems," Phys. Rev. Lett., 64: 2354–7.
• Myers, J., 1997. "Can a Universal Quantum Computer Be Fully Quantum?" Phys. Rev. Lett., 78: 1823–4.
• Myrvold, W. C., 2010. "From Physics to Information Theory and Back," in Philosophy of Quantum Information and Entanglement, edited by A. Bokulich and G. Jaeger, 181–207, Cambridge: Cambridge University Press.
• –––, 2011. "Statistical Mechanics and Thermodynamics: A Maxwellian View," Studies in History and Philosophy of Modern Physics, 42: 237–43.
• –––, 2016. "Lessons of Bell's Theorem: Nonlocality, Yes; Action at a Distance, Not Necessarily," in Quantum Nonlocality and Reality: 50 Years of Bell's Theorem, edited by M. Bell and S. Gao, 238–260, Cambridge: Cambridge University Press.
• Nielsen, M., 2003. "Quantum Computation by Measurement and Quantum Memory," Phys. Lett. A, 308: 96–100.
• Nielsen, M. A., and I. L. Chuang, 2000. Quantum Computation and Quantum Information, Cambridge: Cambridge University Press.
• Nielsen, M. A., and C. M. Dawson, 2005. "Fault-Tolerant Quantum Computation with Cluster States," Physical Review A, 71: 042323.
• Pearle, P., 1997. "True Collapse and False Collapse," in Quantum Classical Correspondence: Proceedings of the 4th Drexel Symposium on Quantum Nonintegrability, Philadelphia, PA, USA, September 8–11, 1994, edited by D. H. Feng and B. L. Hu, 51–68, Cambridge: International Press.
• Pitowsky, I., 1989. Quantum Probability – Quantum Logic, Hemsbach: Springer.
• –––, 1990. "The Physical Church Thesis and Physical Computational Complexity," Iyyun: The Jerusalem Philosophical Quarterly, 39: 81–99.
• –––, 1994. "George Boole's 'Conditions of Possible Experience' and the Quantum Puzzle," British Journal for the Philosophy of Science, 45: 99–125.
• –––, 1996. "Laplace's Demon Consults an Oracle: The Computational Complexity of Prediction," Studies in History and Philosophy of Modern Physics, 27: 161–80.
• –––, 2002. "Quantum Speed-up of Computations," Philosophy of Science, 69: S168–S177.
• Pitowsky, I., and O. Shagrir, 2003. "Physical Hypercomputation and the Church-Turing Thesis," Minds and Machines, 13: 87–101.
• Poplavskii, R. P., 1975. "Thermodynamical Models of Information Processing (in Russian)," Uspekhi Fizicheskikh Nauk, 115: 465–501.
• Pour-El, M., and I. Richards, 1981. "The Wave Equation with Computable Initial Data Such That Its Unique Solution Is Not Computable," Advances in Mathematics, 29: 215–39.
• Preskill, J., 1998. "Quantum Computing: Pro and Con," Proc. Roy. Soc. Lond. A, 454: 469–86.
• –––, 2018. "Quantum Computing in the NISQ Era and Beyond," Quantum, 2: 79.
• Pylyshyn, Z., 1984. Computation and Cognition: Toward a Foundation for Cognitive Science, Cambridge, MA: MIT Press.
• Rabin, M., 1976. "Probabilistic Algorithms," in Algorithms and Complexity: New Directions and Recent Results, edited by J. Traub, 23–39, New York: Academic Press.
• Raussendorf, R., and H. J. Briegel, 2002. "Computational Model Underlying the One-Way Quantum Computer," Quantum Information and Computation, 2: 443–86.
• Raussendorf, R., D. E. Browne, and H. J. Briegel, 2003. "Measurement-Based Quantum Computation on Cluster States," Physical Review A, 68: 022312.
• Raussendorf, R., J. Harrington, and K. Goyal, 2008. "Topological Fault-Tolerance in Cluster State Quantum Computation," New Journal of Physics, 9: 1–24.
• Reichardt, B. W., 2004. "The Quantum Adiabatic Optimization Algorithm and Local Minima," in Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, 502–10.
• Rivest, R. L., A. Shamir, and L. Adleman, 1978. "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems," Communications of the ACM, 21: 120–26.
• Schlick, M., 1961. "Causality in Contemporary Physics I (1931)," translated by D. Rynin, The British Journal for the Philosophy of Science, 12: 177–193.
• –––, 1962. "Causality in Contemporary Physics II (1931)," translated by D. Rynin, The British Journal for the Philosophy of Science, 12: 281–298.
• Shor, P. W., 1994. "Algorithms for Quantum Computation: Discrete Logarithms and Factoring," in Proceedings of the 35th Annual Symposium on Foundations of Computer Science (SFCS '94), 124–34, Washington, D.C.: IEEE Computer Society.
• –––, 1995. "Scheme for Reducing Decoherence in Quantum Computer Memory," Phys. Rev. A, 52: 2493–6.
• –––, 2004. "Progress in Quantum Algorithms," Quantum Information Processing, 3: 5–13.
• Shor, P., and D. DiVincenzo, 1996. "Fault Tolerant Error Correction with Efficient Quantum Codes," Phys. Rev. Lett., 77: 3260–3.
• Shrapnel, S., 2017. "Discovering Quantum Causal Models," The British Journal for the Philosophy of Science, 70: 1–25.
• Sieg, W., and J. Byrnes, 1999. "An Abstract Model for Parallel Computations," The Monist, 82: 150–64.
• Simon, D. R., 1994. "On the Power of Quantum Computation," in Proceedings of the 35th Annual Symposium on Foundations of Computer Science, 116–23, Los Alamitos, CA: IEEE Press.
• Spekkens, R. W., 2007. "Evidence for the Epistemic View of Quantum States: A Toy Theory," Phys. Rev. A, 75: 032110.
• Steane, A. M., 1996. "Multiple Particle Interference and Quantum Error Correction," Proc. Roy. Soc. Lond. A, 452: 2551–77.
• –––, 2003. "A Quantum Computer Only Needs One Universe," Studies in History and Philosophy of Modern Physics, 34: 469–78.
• Tabakin, F., 2017. "Model Dynamics for Quantum Computing," Annals of Physics, 383: 33–78.
• Timpson, C. G., 2013. Quantum Information Theory and the Foundations of Quantum Mechanics, Oxford: Oxford University Press.
• Turing, A. M., 1936. "On Computable Numbers, with an Application to the Entscheidungsproblem," Proceedings of the London Mathematical Society (Second Series), s2-42: 230–65.
• Unruh, W. G., 1995. "Maintaining Coherence in Quantum Computers," Phys. Rev. A, 51: 992–97.
• Van Fraassen, B. C., 1982. "The Charybdis of Realism: Epistemological Implications of Bell's Inequality," Synthese, 52: 25–38.
• Van Meter, R., and C. Horsman, 2013. "A Blueprint for Building a Quantum Computer," Communications of the ACM, 56: 16–25.
• Vergis, A., K. Steiglitz, and B. Dickinson, 1986. "The Complexity of Analog Computation," Mathematics and Computers in Simulation, 28: 91–113.
• Vidal, G., 2003. "Efficient Classical Simulation of Slightly Entangled Quantum Computations," Phys. Rev. Lett., 91: 147902.
• Wallace, D., 2012. The Emergent Multiverse, Oxford: Oxford University Press.
• –––, 2014. "Thermodynamics as Control Theory," Entropy, 16: 699–725.
• Wiesner, S., 1983. "Conjugate Coding," Sigact News, 18: 78–88.
• Witten, E., 1989. "Quantum Field Theory and the Jones Polynomial," Comm. Math. Phys., 121: 351–99.
• Wolfram, S., 1985. "Undecidability and Intractability in Theoretical Physics," Phys. Rev. Lett., 54: 735.
• Woodward, J., 2007. "Causation with a Human Face," in Causation, Physics, and the Constitution of Reality: Russell's Republic Revisited, edited by H. Price and R. Corry, Oxford: Oxford University Press.

Copyright © 2019 by Amit Hagar and Michael Cuffaro <mike@michaelcuffaro.com>
Radioactive Decay in the Causal Interpretation of Quantum Theory

According to classical physics, a particle can never overcome a potential greater than its kinetic energy; this is not the case in quantum theory. For unstable isotopes there is a finite probability for a quantum particle (an α particle) to tunnel through the potential barrier in a nucleus. Such isotopes are called radioactive isotopes. The behavior of such isotopes can be described by a square wave packet that is a solution of the Schrödinger equation with a potential term V(x). The time evolution leads to a wave packet that bounces back and forth. Each time it strikes the potential barrier, a part of the packet tunnels through and there is a chance of some transmission. In orthodox quantum theory it is impossible to predict the decay of a single isotope; a statistical conclusion can be made only for an ensemble of isotopes (e.g., the half-life period). In the causal interpretation of quantum theory of David Bohm, it is in principle possible to predict the decay of a single isotope (a single event). Radioactive decay is described deterministically in terms of well-defined particle trajectories. In practice, it is impossible to predict or control the quantum trajectories with complete precision. Single particles are placed in a wave packet inside a nuclear potential. Whether an isotope decays depends only on the initial position of the particle inside the wave packet. If the position is at the front of the wave, the particle trajectory leads to an escape of the particle from the nucleus. In the region of the potential, the quantum potential becomes large, and the resulting particle acceleration together with the reduction of the nuclear potential via the quantum potential accounts for tunneling.

For this Demonstration a simple model of radioactive decay is chosen. The nuclear potential is proportional to sech²(x) (a Pöschl–Teller potential) with potential height V₀. The initial unnormalized wave is a square packet of finite width carrying a wave number k, with its peak placed inside the potential well. The initial positions of the particles are linearly distributed around the peak of the packet inside the wave. If V₀ = 0, the packet evolves like a free packet. For sufficiently large V₀, none of the particles inside the wave packet leave the nuclear potential (a stable isotope). The trajectories are plotted in position–time space. The graphic on the left shows the particles' positions, the wavefunction amplitude (black), the Pöschl–Teller potential (blue), and the quantum potential (red). The graphic on the right shows the amplitude of the wavefunction, the potential for the complete time period, and the trajectories at the actual time step. The wavefunction amplitude, the potential, and the quantum potential are scaled to fit.

The guidance equation for the particle velocity is v = ∇S/m, which is calculated from the gradient of the phase S of the total wavefunction in the eikonal form ψ = R exp(iS/ħ). The quantum potential is given by Q = −(ħ²/2m) ∇²R/R. The effective potential is the sum of the quantum potential and the nuclear potential, which leads to the time-dependent quantum force F = −∇(V + Q). The numerical methods used to calculate the velocity and the quantum potential from a discrete wavefunction are, in general, not very stable, but the applied interpolation functions lead to an accurate approximation of the physical event; due to the numerical errors produced by the limited mesh of 120 mesh points, the velocity term must be adjusted (here using 41/100 instead of 0.5).
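A minimal numerical sketch may make the scheme concrete. The sketch below assumes ħ = m = 1, a sech²-shaped barrier, and a Gaussian-enveloped packet (rather than the Demonstration's square packet) for numerical smoothness; the barrier height, grid, and packet parameters are illustrative choices, not the Demonstration's values. It propagates the wavefunction with a split-operator method and integrates the guidance equation v = Im(ψ′/ψ)/m for an ensemble of initial positions distributed inside the packet.

```python
import numpy as np

# Grid and barrier (hbar = m = 1); all parameters here are illustrative.
N, L = 1024, 80.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)          # angular wave numbers
V0 = 0.7
V = V0/np.cosh(x - 5.0)**2                    # Poschl-Teller-type barrier

# Gaussian-enveloped packet launched toward the barrier
sigma, k0, x0 = 2.0, 1.0, -5.0
psi = np.exp(-(x - x0)**2/(4*sigma**2) + 1j*k0*x)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

dt, steps = 0.01, 3000
expV = np.exp(-0.5j*dt*V)                     # half-step potential propagator
expT = np.exp(-0.5j*dt*k**2)                  # full-step kinetic propagator

# Bohmian trajectories start distributed inside the packet
xp = np.linspace(x0 - sigma, x0 + sigma, 15)

def velocity(psi, xq):
    """Guidance equation v = Im(psi'/psi), interpolated to the particles.
    The small regularizer crudely avoids division by zero at wave nodes."""
    dpsi = np.gradient(psi, dx)
    v = np.imag(dpsi/(psi + 1e-30))
    return np.interp(xq, x, v)

for n in range(steps):
    psi = expV*np.fft.ifft(expT*np.fft.fft(expV*psi))   # split-operator step
    xp = xp + dt*velocity(psi, xp)                      # guide the particles

print("fraction of trajectories past the barrier:", np.mean(xp > 5.0))
```

Consistent with the description above, Bohmian trajectories in one dimension cannot cross, so only the trajectories that start near the front of the packet escape; whether a given particle "decays" is fixed entirely by its initial position.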
Caulfield, "What Determines Alpha Decay?," Portsmouth Polytechnic (England), student research project, unpublished, 1991. A. Goldberg, H. Schey, and J. L. Schwartz, "Computer-Generated Motion Pictures of One-Dimensional Quantum Mechanical Transmission and Reflection Phenomena," Am. J. Phys., 35(3), 1967 pp. 177–186. • Share: Embed Interactive Demonstration New! Files require Wolfram CDF Player or Mathematica. Mathematica » The #1 tool for creating Demonstrations and anything technical. Wolfram|Alpha » Explore anything with the first computational knowledge engine. MathWorld » The web's most extensive mathematics resource. Course Assistant Apps » An app for every course— right in the palm of your hand. Wolfram Blog » Read our views on math, science, and technology. Computable Document Format » The format that makes Demonstrations (and any information) easy to share and interact with. STEM Initiative » Programs & resources for educators, schools & students. Computerbasedmath.org » Join the initiative for modernizing math education. Step-by-Step Solutions » Wolfram Problem Generator » Wolfram Language » Knowledge-based programming for everyone. Download or upgrade to Mathematica Player 7EX I already have Mathematica Player or Mathematica 7+
Toward fully quantum modelling of ultrafast photodissociation imaging experiments. Treating tunnelling in the ab initio multiple cloning approach

Dmitry V. Makhov (a), Todd J. Martinez (b) and Dmitrii V. Shalashilin (a)
(a) School of Chemistry, University of Leeds, Leeds, LS2 9JT, UK. E-mail: D.Makhov@leeds.ac.uk; D.Shalashilin@leeds.ac.uk
(b) Department of Chemistry, Stanford University, Stanford, CA 94305, USA

Received 11th April 2016, Accepted 1st May 2016. First published on the web 2nd May 2016.

We present an account of our recent effort to improve the simulation of the photodissociation of small heteroaromatic molecules using the Ab Initio Multiple Cloning (AIMC) algorithm. The ultimate goal is to create a quantitative and converged technique that treats both electrons and nuclei on a fully quantum level. We calculate and analyse the total kinetic energy release (TKER) spectra and velocity map images (VMI), and compare the results directly with experimental measurements. In this work, we perform new extensive calculations using an improved AIMC algorithm that now takes into account the tunnelling of hydrogen atoms, which can play an extremely important role in photodissociation dynamics.

I. Introduction

Quantum non-adiabatic molecular dynamics is a powerful tool for understanding the details of the mechanisms of important photo-induced processes, such as the photodissociation of pyrrole and other heteroaromatic molecules. In these processes, quantum effects such as electronically non-adiabatic transitions and tunnelling are important, and an approach that goes beyond surface hopping, such as multiconfigurational time-dependent Hartree (MCTDH),1 is often required. MCTDH can be very accurate, and was recently used to simulate the dissociation of pyrrole.2 However, it needs a parameterized potential energy surface as a starting point, which significantly restricts its practicality. A good alternative is represented by a variety of methods3–11 based on trajectory-guided Gaussian basis functions (TBFs). Although such approaches use classical trajectories, they are still fully quantum mechanical, because the trajectories are employed only for propagating the basis, while the evolution of the amplitudes, and thus of the total nuclear wave-function, is determined by the time-dependent Schrödinger equation. An important advantage of trajectory-guided quantum dynamics methods is that they are fully compatible with direct or ab initio molecular dynamics, where excited-state energies, gradients, and non-adiabatic coupling terms are evaluated on the fly, simultaneously with the nuclear propagation. The disadvantage is that trajectory-based direct dynamics is very expensive, owing to the high cost of electronic structure calculations, and typically can afford only a limited number of trajectories, which can be an obstacle to full convergence. Recently, we introduced the ab initio multiple cloning (AIMC)10 method, in which TBFs move along Ehrenfest trajectories, as in the multiconfigurational Ehrenfest (MCE)8,9 approach, with bifurcation of the wave-function taken into account via basis function cloning. While leading to growth in the number of trajectories, the use of cloning helps to adapt the basis set to the quantum dynamics significantly better than in the original MCE approach.
AIMC also uses a number of tricks to efficiently sample the trajectory basis and to use the information obtained on the fly: (1) like previously developed trajectory-based methods, AIMC relies on importance sampling of initial conditions; (2) AIMC uses so-called time-displaced or train basis sets,10,12,13 which increase the basis set size almost without any extra cost by reusing ab initio data that has already been obtained; (3) the method calculates quantum amplitudes in a "post-processing" technique after the trajectories of the basis set functions have been found. As a result, the trajectories can be calculated one by one in parallel and good statistics can be accumulated. In this work, we present a new implementation of the AIMC approach that is improved to take into account the tunnelling of hydrogen atoms by identifying possible tunnelling points and placing additional TBFs on the other side of the barrier. We use this new implementation to simulate the dynamics of the photodissociation of pyrrole, a process where tunnelling can play a very important role. We calculate the TKER spectrum and velocity map image (VMI), and directly compare the results of our calculations with experimental observations.14 The paper is organized as follows. In Section II we describe the proposed implementation of the AIMC approach. Section III contains the computational details of our simulations. In Section IV, we present and discuss the results. Conclusions are given in Section V.

II. Theory

II.1 Working equations

The AIMC method10 is based on the same ansatz as the multiconfigurational Ehrenfest (MCE) approach,8,9 in which the total wave-function |Ψ(t)⟩ is represented in a trajectory-guided basis |ψn(t)⟩:

|Ψ(t)⟩ = Σn cn(t) |ψn(t)⟩   (1)

The basis functions |ψn(t)⟩ are composed of nuclear and electronic parts:

|ψn(t)⟩ = |χn(t)⟩ ΣI aI(n)(t) |ϕI⟩   (2)

The nuclear part |χn(t)⟩ is a Gaussian coherent state moving along an Ehrenfest trajectory:

⟨R|χn(t)⟩ = (2α/π)^(Ndof/4) exp[ −α(R − R̄n(t))² + (i/ħ) P̄n(t)·(R − R̄n(t)) + (i/ħ) γn(t) ]   (3)

where R̄n(t) and P̄n(t) are the coordinate and momentum vectors of the basis function centre in phase space, γn(t) is a phase, and the parameter α determines the width of the Gaussians. The electronic part of the basis functions |ψn(t)⟩ is represented as a superposition of several adiabatic eigenstates |ϕI⟩ with quantum amplitudes aI(n). The time dependence of the Ehrenfest amplitudes aI(n) is given by the equations

iħ ȧI(n) = ΣJ Hel(n)IJ aJ(n)   (4)

where the matrix elements of the electronic Hamiltonian Hel(n)IJ are expressed as

Hel(n)IJ = VI(R̄n) δIJ − iħ Ṙ̄n·dIJ(R̄n)   (5)

here VI(R̄n) is the Ith potential energy surface and dIJ(R̄n) = ⟨ϕI|∇R|ϕJ⟩ is the non-adiabatic coupling matrix element (NACME). The motion of the centres of the Gaussians follows standard Newton's equations:

Ṙ̄n = P̄n/M,   Ṗ̄n = F̄n   (6)

where the force F̄n is an Ehrenfest force that includes both the usual gradient term and an additional term related to the change of the quantum amplitudes as a result of non-adiabatic coupling:

F̄n = −ΣI |aI(n)|² ∇VI(R̄n) + ΣI≠J aI(n)* aJ(n) (VJ(R̄n) − VI(R̄n)) dIJ(R̄n)   (7)

Finally, the phase γn evolves as

γ̇n = P̄n²/(2M)   (8)

Eqn (3)–(8) form a complete set, determining the basis and its time evolution. The evolution of the total wave-function |Ψ(t)⟩ (eqn (1)) is defined by both the evolution of the basis functions |ψn(t)⟩ and the evolution of the relevant amplitudes cn(t).
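To make the propagation loop of eqns (4)–(7) concrete, here is a minimal sketch for a single Ehrenfest trajectory on an illustrative one-dimensional, two-state model (hyperbolic-tangent surfaces with a Gaussian NACME; ħ = 1). The model and all parameters are stand-ins for the example only, not the pyrrole surfaces used in this work.

```python
import numpy as np

# Illustrative 1D model: two surfaces and a Gaussian nonadiabatic coupling.
M = 2000.0                                    # nuclear mass (a.u.)
V  = lambda R: np.array([ 0.01*np.tanh(R), -0.01*np.tanh(R)])
dV = lambda R: np.array([ 0.01/np.cosh(R)**2, -0.01/np.cosh(R)**2])
d12 = lambda R: 0.6*np.exp(-R**2)             # NACME between states 1 and 2

R, P = -6.0, 15.0                             # initial centre (R, P)
a = np.array([1.0 + 0j, 0.0 + 0j])            # Ehrenfest amplitudes, eqn (4)
dt = 1.0

for step in range(4000):
    v1, v2 = V(R)
    d = d12(R)
    # eqn (5): electronic Hamiltonian in the adiabatic basis,
    # coupled through Rdot * d12 (d21 = -d12)
    Hel = np.array([[v1,            -1j*(P/M)*d],
                    [1j*(P/M)*d,     v2       ]])
    a = a - 1j*dt*Hel.dot(a)                  # eqn (4), simple Euler step
    a /= np.linalg.norm(a)
    # eqn (7): population-weighted gradients plus the NAC force term
    g1, g2 = dV(R)
    F = -(abs(a[0])**2*g1 + abs(a[1])**2*g2) \
        + 2*np.real(np.conj(a[0])*a[1])*(v2 - v1)*d
    R += dt*P/M                               # eqn (6)
    P += dt*F

print("final populations:", np.abs(a)**2, " R =", round(R, 2))
```

The amplitudes a and the phase-space centre (R, P) evolve together, which is all that eqns (3)–(8) require; eqn (9) for the amplitudes cn is then solved separately, after the trajectories have been stored.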
The time dependence of the amplitudes cn(t) is given by the equation

iħ Σm ⟨ψn|ψm⟩ ċm = Σm [ Hnm − iħ ⟨ψn|(d/dt)|ψm⟩ ] cm   (9)

which can easily be obtained by substituting (1) into the time-dependent Schrödinger equation. The Hamiltonian matrix elements Hmn can be written as:

Hmn = ΣI,J aI(m)* aJ(n) ⟨χm ϕI | T̂n + Ĥel | ϕJ χn⟩   (10)

Assuming that the second derivative of the electronic wave-function |ϕI⟩ with respect to R can be disregarded, we get:

Hmn = ΣI,J aI(m)* aJ(n) [ δIJ ⟨χm|T̂n|χn⟩ + δIJ ⟨χm|VI(R)|χn⟩ − (ħ²/M) dIJ(R)·⟨χm|∇R|χn⟩ ]   (11)

The matrix elements of the kinetic energy operator T̂n can be calculated analytically. For the potential energy and non-adiabatic coupling matrix elements, we use a simple approximation:10

⟨χm|VI(R)|χn⟩ ≈ ½ [ VI(R̄m) + VI(R̄n) ] ⟨χm|χn⟩   (12)

⟨χm| dIJ(R)·∇R |χn⟩ ≈ ½ [ dIJ(R̄m) + dIJ(R̄n) ]·⟨χm|∇R|χn⟩   (13)

The approximation (12) represents a linear interpolation of the potential energy between the two points and can be improved further at the cost of calculating higher derivatives of the potential energy along the trajectories. It has been tested previously,10 and no visible change of the results was found when this approximation was applied compared to the saddle point approximation, which expands around a distinct centroid for each pair of TBFs.4 The term ⟨ψn|(d/dt)|ψm⟩ in eqn (9), which originates from the time dependence of the basis, can be expressed as:

⟨ψn|(d/dt)|ψm⟩ = ΣI aI(n)* aI(m) ⟨χn|(d/dt)|χm⟩ + ⟨χn|χm⟩ ΣI aI(n)* ȧI(m)   (14)

⟨χn|(d/dt)|χm⟩ = [ Ṙ̄m·(∂/∂R̄m) + Ṗ̄m·(∂/∂P̄m) + γ̇m (∂/∂γm) ] ⟨χn|χm⟩   (15)

Notice that in the AIMC approach, all off-diagonal matrix elements entering eqn (9) are calculated from the electronic structure data at the TBF centres, which is needed in any case for the propagation of the basis. Thus, quantum coupling between the configurations comes at almost no extra cost. Moreover, eqn (9) can be solved after the trajectories have been calculated, provided the appropriate electronic structure information has been saved. The detailed derivation of the MCE equations together with the expressions for the relevant matrix elements can be found in our previous works.10,11

II.2 Basis set sampling and cloning

The Ehrenfest basis set is guided by an average potential, which can be advantageous when quantum transitions are frequent. However, it becomes unphysical in regions of low non-adiabatic coupling when two or more electronic states have significant amplitudes: in this case, the difference in the shapes of the potential energy surfaces for different electronic states should lead to branching of the wavepacket. In order to reproduce the bifurcation of the wave-function after leaving the non-adiabatic coupling region, AIMC adopts the cloning procedure,10 in which the appropriate basis function is replaced by two basis functions, each guided (mostly) by a single potential energy surface. After the cloning event, an Ehrenfest configuration |ψn⟩ = |χn⟩ ΣI aI(n)|ϕI⟩ yields two configurations:

|ψn′⟩ = |χn⟩ (aI(n)/|aI(n)|) |ϕI⟩   (16)

|ψn″⟩ = |χn⟩ (1 − |aI(n)|²)^(−1/2) ΣJ≠I aJ(n) |ϕJ⟩   (17)

The first clone configuration has a non-zero amplitude for only one electronic state, and the second clone contains the contributions of all other electronic states.
The amplitudes of the two new configurations become:

cn′ = cn |aI(n)|,   cn″ = cn (1 − |aI(n)|²)^(1/2)   (18)

so that the contribution of the two clones |ψn′⟩ and |ψn″⟩ to the whole wave-function (1) remains the same as the contribution of the original function:

cn′|ψn′⟩ + cn″|ψn″⟩ = cn|ψn⟩   (19)

We apply the cloning procedure shortly after a trajectory passes near a conical intersection, when the non-adiabatic coupling is lower than a threshold and, at the same time, the so-called breaking force

F̄br(I) = |aI(n)|² [ −∇VI(R̄n) + ΣJ |aJ(n)|² ∇VJ(R̄n) ]   (20)

which is the force pulling the Ith state away from the remaining states, is sufficiently strong. The cloning procedure is very much in the spirit of the spawning used in the Ab Initio Multiple Spawning (AIMS) approach. Cloning does not require any back-propagation of spawned/cloned basis functions, unlike many4 (but not all15,16) implementations of spawning. As described in our previous work,7 we rely on importance sampling when generating the initial conditions. Using the linearity of the Schrödinger equation, we first represent the initial wave-function as a superposition of Gaussians and then propagate each of them independently, "bit-by-bit".7 We use a time-displaced basis set (coherent state trains), where several Gaussian basis functions move along the same trajectory but with a time shift Δt, allowing us to reuse the same electronic structure data for each of the basis functions in the "train". Fig. 1 shows a time-displaced basis guided by a trajectory and its bifurcation via cloning. The best possible result with AIMC is achieved when a swarm of trains is used to propagate each "bit" of the initial wave-function.

Fig. 1 A sketch of the AIMC propagation scheme. The wave-function is represented as a superposition of Gaussian coherent states, which form a train moving along the trajectory. After passing the intersection, the train branches in the process of cloning. The figure shows a single train with cloning. In the most detailed AIMC calculations, a basis of several cloning trains interacting with each other is used.

II.3 Tunnelling

The tunnelling of hydrogen atoms can play an important role in photodissociation processes. As mentioned above, MCE, AIMC and AIMS are fully quantum methods because classical trajectories are used only to propagate the basis, while the amplitudes cn(t) are found by solving the time-dependent Schrödinger equation. When Gaussian basis functions are present on the two sides of a potential barrier, the interaction between them can provide quantum tunnelling through the barrier. However, in the case of direct ab initio dynamics, the basis is usually very small, far from complete. As a result, normally no basis functions would be present on the other side, and they must be placed there by hand in order to take tunnelling into account. In this paper we adopt the ideas17,18 previously used in the AIMS method to describe tunnelling for use with the AIMC technique. Fig. 2 illustrates the algorithm that we apply. First, we calculate the usual AIMC trajectories and find turning points, where the distance between the hydrogen atom and the radical reaches a local maximum. Then, for each of these turning points, we calculate the shape of the potential barrier: we manually increase the length of the N–H bond keeping all other degrees of freedom frozen, calculate the potential energies, and find the point on the other side of the barrier with the same energy as at the turning point.
If this point lies further than a set threshold from the turning point, we assume that tunnelling is not possible there, as the potential barrier is too wide. Otherwise, we use it as a starting point for an additional AIMC trajectory. The new trajectory is calculated both forward and backward in time, and the initial momenta are taken to be the same as at the turning point, ensuring that the new trajectories have the same total classical energies as their parent trajectories. This is exactly the procedure used in the multiple spawning approach; thus our method combines cloning for non-adiabatic events and spawning for tunnelling events. The forward propagation of new trajectories often involves branching as a result of cloning; backward propagation is performed without cloning and for a sufficiently short time, until the new and parent trajectories separate in phase space.

Fig. 2 Illustration of the algorithm used to treat tunnelling in our approach. (A) Identify a turning point; (B) find a point with the same potential energy on the opposite side of the barrier; (C) run an additional trajectory through this point; (D) solve the time-dependent Schrödinger equation in the basis of coherent state trains10 moving along the trajectories on both sides of the barrier.

When all the trajectories have been calculated, we solve eqn (9) for the quantum amplitudes cn(t) in a time-displaced basis set (coherent state trains). This is similar to our previous approach,10,11 but with the difference that now the basis is better adapted to treat tunnelling. The train basis on the new trajectory is placed in such a way that it reaches the tunnelling point at the same time as the train basis on the parent trajectory. Because the new trajectory differs from its parent by only one coordinate at a tunnelling point, namely the length of the N–H bond, there is a significant overlap between Gaussian basis functions belonging to these two trajectories. This interaction is retained for a significant time while the coherent state trains are passing the tunnelling point, ensuring the transfer of quantum amplitude across the barrier.

III. Computational details

Using our AIMC approach, we have simulated the dynamics of pyrrole following excitation to the first excited state. Trajectories were calculated using the AIMS-MOLPRO19 computational package, which has been modified to incorporate Ehrenfest dynamics. Electronic structure calculations were performed with the complete active space self-consistent field (CASSCF) method using the cc-pVDZ basis set. As in our previous works,9,11 we used an active space of eight electrons in seven orbitals (three ring π orbitals and two corresponding π* orbitals, one σ orbital and a corresponding σ* orbital). State averaging was performed over four singlet states using equal weights, i.e. the electronic wave-function is SA4-CAS(8,7)/cc-pVDZ. The width parameter α of the Gaussian functions was taken as 4.7 bohr⁻² for hydrogen, 22.7 bohr⁻² for carbon, and 19.0 bohr⁻² for nitrogen atoms, as suggested in ref. 20. Three electronic states were taken into consideration during the dynamics: the ground state and the two lowest singlet excited states. The initial positions and momenta were randomly sampled from the ground-state vibrational Wigner distribution in the harmonic approximation, using vibrational frequencies and normal modes calculated at the same CASSCF level of theory.
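The turning-point test of Fig. 2 (steps A and B) reduces to a short piece of logic. The sketch below uses an illustrative one-dimensional potential standing in for the frozen-geometry N–H scan; the 0.5 bohr width threshold follows the computational details of this work, while the potential shape and the turning point are invented for the example.

```python
import numpy as np

# Illustrative 1D stand-in for the frozen-geometry N-H bond-length scan.
def V(r):                                   # a barrier on top of a well
    return 0.05*np.exp(-((r - 2.2)/0.25)**2) - 0.04*np.exp(-(r - 1.0)**2)

def tunnelling_exit(r_turn, dr=1e-3, r_max=6.0, width_max=0.5):
    """Step B of Fig. 2: walk outward from the turning point and return the
    first point on the far side of the barrier with the same potential
    energy, or None if the barrier is wider than width_max (bohr)."""
    E = V(r_turn)
    r = r_turn + dr
    while r < r_max:
        if V(r) <= E:                       # emerged on the far side
            return r if (r - r_turn) <= width_max else None
        r += dr
    return None

# Step A: a turning point found along some trajectory (assumed here)
r_turn = 2.05
r_exit = tunnelling_exit(r_turn)
if r_exit is not None:
    # Step C: this point seeds an extra trajectory with the parent momenta,
    # so parent and child share the same total classical energy.
    print(f"spawn tunnelling trajectory at r = {r_exit:.3f} bohr")
else:
    print("barrier too wide; no tunnelling trajectory placed")
```

In the full method this test runs for every N–H turning point of every branch, and each accepted exit point seeds one additional trajectory (step C), whose train basis then exchanges amplitude with the parent's across the barrier (step D).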
We approximate the photoexcitation by simply lifting the ground-state wavepacket to the excited state, as would be appropriate for an instantaneous excitation pulse within the Condon approximation. Of course, the fine details of the initial photoexcited wavepacket are lost in this approximation; however, we do not expect these details to have much effect on the observables shown in this paper. We have run 900 initial Ehrenfest trajectories, each propagated with a time step of ∼0.06 fs (2.5 a.u.) for 200 fs or until dissociation occurred, defined as an N–H distance exceeding 4.0 Å. For a small number of trajectories, simulations exhibiting N–H dissociation were carried out to the full 200 fs in order to investigate the dynamics of the radical. Cloning was applied to TBFs when the breaking acceleration of eqn (20) exceeded a threshold of 5 × 10⁻⁶ a.u. and the norm of the non-adiabatic coupling vector was simultaneously less than 2 × 10⁻³ a.u. For all initial trajectories, as well as for their branches resulting from cloning, we identified turning points for the N–H bond length and calculated the width of the potential barrier. Additional trajectories on the other side of the barrier were placed if the width of the barrier did not exceed 0.5 bohr, which corresponds to an overlap of ∼0.3 between Gaussian basis functions. The new trajectories were propagated backward for 20 fs to accommodate the train basis set, and forward until dissociation or until the trajectory time exceeded 200 fs. For each initial trajectory with all its branches and tunnelling sub-trajectories, we solved eqn (9) using a train basis set of N = 21 Gaussians per branch, separated by 10 time steps, which corresponds to an average overlap of ∼0.6 between the nearest Gaussians in the train. The total size of the basis changes constantly because of the inclusion of new branches. The final amplitudes cn give statistical weights for each of the branches, which are used in the analysis that follows.

IV. Results

As a result of cloning, the 900 initial configurations give rise to 1131 trajectory branches. This corresponds to an average of ∼0.25 cloning events per initial trajectory. For these branches, we found 7702 local maxima of the N–H bond length, of which 2376 were identified as possible tunnelling points. For all these points, we ran sub-trajectories, which finally gives 3203 additional branches, or 4334 branches in total. The majority of these branches undergo N–H dissociation within our computational time of 200 fs: the total statistical weight of the dissociative trajectories is 92%, of which 53% is the contribution of tunnelling sub-trajectories. The kinetic energy distribution of the ejected hydrogen atom is presented in Fig. 3 together with the experimental TKER spectrum.14 Both distributions clearly exhibit two contributions: a large peak at higher energies and a small contribution at lower energies. It is important to note that adapting the basis set to tunnelling shifts the high-energy peak of the TKER spectrum toward lower energies by about 1000 cm⁻¹ and makes the low-energy peak slightly more pronounced. While the calculated energies are still, on average, about 1.5 times higher than the experimental values, this difference can be ascribed to the lack of dynamic electron correlation in the CASSCF potential energy surfaces.
We previously showed11 that a more accurate MS-CASPT2 PES would lead to a shift of the kinetic energy peak by approximately 1800–1900 cm⁻¹ towards lower energies, significantly improving the agreement with experiment.

Fig. 3 Total kinetic energy release (TKER) spectrum of hydrogen atoms after dissociation, calculated with (solid) and without (dashed) taking tunnelling into account. Both spectra are averaged over the same ensemble of initial configurations. The curves are smoothed by replacing delta-functions with Gaussian functions (σ = 200 cm⁻¹). The inset shows the experimentally measured spectrum.14

Analysis of the electronic state amplitudes in the Ehrenfest configurations (eqn (2)) shows that the bifurcation of the wave-function while passing through a conical intersection plays an important role in the formation of the two-peak spectrum: the high kinetic energy product is predominantly in the ground state, while the low-energy peak is formed mostly by low-weight branches with a substantial contribution from excited electronic states. Fig. 4 presents an example of such a bifurcating trajectory. At about 55 fs after photoexcitation, this trajectory reaches an intersection for the first time. After passing the intersection, the ground and first excited states of the original TBF are approximately equally populated, so the cloning procedure is applied, creating two TBFs in its place, one in the ground state and one in the excited state. At this point, the potential energy surfaces for the ground and excited states have opposite gradients. This leads to the acceleration of the hydrogen atom for the TBF associated with the ground state and, at the same time, slows it down for the excited-state TBF. As a result, although both branches lead to dissociation, the kinetic energies of the ejected atoms are significantly different: the ground-state branch contributes to the high-energy peak of the distribution in Fig. 3, while the excited-state branch contributes to the low-energy peak. For the ground-state branch, the remaining vibrational energy of the radical is low, so it remains in the ground state for the rest of the run and does not reach the intersection again. For the excited-state branch, the energy taken away by the hydrogen atom is lower, leaving the pyrrolyl radical with sufficient energy to pass through numerous intersections with population transfer between the ground and both excited states. Naturally, quenching to the ground state will eventually happen for this branch, but the time scale of this process is much longer than that of the dissociation, while the TKER spectrum is only affected by the radical dynamics until the H atom is lost.

Fig. 4 An example of trajectory bifurcation at a conical intersection. Electronic state populations (a), the kinetic energy of the H atom (b) and the N–H distance (c) as a function of time. Fast and slow branches are referred to as (1) and (2), respectively. The black vertical line indicates the moment when cloning was applied.
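The amplitude bookkeeping in a cloning event like the one in Fig. 4 follows eqns (16)–(19) directly; the small sketch below (with illustrative two-state amplitudes, not data from the calculation) verifies that the two clones reproduce the parent configuration's contribution to the total wave-function.

```python
import numpy as np

def clone(c_n, a, I):
    """Split configuration (c_n, a) on electronic state I per eqns (16)-(18).
    Returns (c', a') for the pure-state clone and (c'', a'') for the rest."""
    p = abs(a[I])                                       # |a_I|
    a1 = np.zeros_like(a); a1[I] = a[I]/p               # eqn (16)
    a2 = a.copy(); a2[I] = 0.0
    a2 /= np.sqrt(1.0 - p**2)                           # eqn (17)
    return (c_n*p, a1), (c_n*np.sqrt(1.0 - p**2), a2)   # eqn (18)

# Example: a configuration with 60/40 ground/excited populations
a = np.array([np.sqrt(0.6), np.sqrt(0.4) + 0j])
(c1, a1), (c2, a2) = clone(1.0 + 0j, a, I=0)

# eqn (19): the two clones reproduce the parent contribution exactly,
# since both share the same nuclear Gaussian at the cloning time
assert np.allclose(c1*a1 + c2*a2, 1.0*a)
print("clone weights:", abs(c1)**2, abs(c2)**2)
```

The clone weights (here 0.6 and 0.4) are exactly the electronic populations at the cloning time, which is what makes the final amplitudes cn usable as statistical weights for the branches.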
In order to calculate the velocity map image with respect to the laser pulse polarization, we must average the velocity distribution of the hydrogen atoms relative to the axes of the molecule, given by the calculations, over all possible orientations of the molecule:

[eqn (21): the orientation-averaging integral over the Euler angles]

where α, β and γ are Euler angles, θ is the angle between the atom velocity vector v and the transition dipole of the molecule, ξ(α,β,γ) is the angle between the transition dipole and light polarization vectors, and ϕ(θ,α,β,γ) is the angle between the light polarization vector and the atom velocity. Here we take into account that the probability of excitation is proportional to cos²(ξ). Integrating over the Euler angles and replacing, as usual, the δ-function for |v| with a narrow Gaussian function, we obtain the working expression for the image

[eqn (22): the orientation-averaged image with the δ-function in |v| replaced by a narrow Gaussian]

Fig. 5 shows the simulated velocity map with respect to the laser pulse polarization, assuming that the transition dipole is normal to the molecular plane. The simulations reproduce well the main feature of the velocity map image, which is the anisotropy of the intense high-energy part. Our results are also consistent with experiment14 in the low-energy region, showing an isotropic distribution, although admittedly the statistics of both experiment and simulation are poorer in the region of low energy.

Fig. 5 Simulated velocity map image with respect to the laser pulse polarization, assuming that the transition dipole moment is normal to the molecular plane. The experimental VMI14 is shown in the inset.

V. Conclusion

We simulated the photodissociation dynamics of pyrrole excited to the lowest singlet excited state (1¹A₂) using a new implementation of the AIMC approach, now modified to take into account the tunnelling of hydrogen atoms more accurately. AIMC is a fully quantum technique, but its computational cost in our implementation is comparable with that of classical "on the fly" molecular dynamics, which allows the accumulation of sufficient statistics to clarify the details of photo-induced processes in pyrrole. The treatment of tunnelling in our implementation provides a promising starting point for the further development of fully quantum methods for non-adiabatic dynamics and tunnelling, with the ultimate goal of reaching well-converged quantitative results. The current version of AIMC is already accurate enough to reproduce features of the experimentally observed TKER spectrum and velocity map images. DM and DS acknowledge support from the EPSRC through grants EP/J001481/1 and EP/N007549/1.

1. G. A. Worth, H.-D. Meyer, H. Köppel, L. S. Cederbaum and I. Burghardt, Using the MCTDH wavepacket propagation method to describe multimode non-adiabatic dynamics, Int. Rev. Phys. Chem., 2008, 27, 569–606. 2. G. Wu, S. P. Neville, O. Schalk, T. Sekikawa, M. N. R. Ashfold, G. A. Worth and A. Stolow, Excited state non-adiabatic dynamics of pyrrole: a time-resolved photoelectron spectroscopy and quantum dynamics study, J. Chem. Phys., 2015, 142, 074302. 3. T. J. Martinez, M. Ben-Nun and G. Ashkenazi, Classical/quantal method for multistate dynamics: a computational study, J. Chem. Phys., 1996, 104, 2847. 4. M. Ben-Nun and T. J. Martínez, Ab Initio Quantum Molecular Dynamics, Adv. Chem. Phys., 2002, 121, 439. 5. D. V. Shalashilin, Quantum mechanics with the basis set guided by Ehrenfest trajectories: theory and application to spin-boson model, J. Chem.
Phys., 2009, 130(24), 244101. 6. S. L. Fiedler and J. Eloranta, Nonadiabatic dynamics by mean-field and surface hopping approaches: energy conservation considerations, Mol. Phys., 2010, 108(11), 1471–1479. 7. D. V. Shalashilin, Nonadiabatic dynamics with the help of multiconfigurational Ehrenfest method: improved theory and fully quantum 24D simulation of pyrazine, J. Chem. Phys., 2010, 132(24), 244111. 8. D. V. Shalashilin, Multiconfigurational Ehrenfest approach to quantum coherent dynamics in large molecular systems, Faraday Discuss., 2011, 153, 105. 9. K. Saita and D. V. Shalashilin, On-the-fly ab initio molecular dynamics with multiconfigurational Ehrenfest method, J. Chem. Phys., 2012, 137, 8. 10. D. V. Makhov, W. J. Glover, T. J. Martinez and D. V. Shalashilin, Ab initio multiple cloning algorithm for quantum nonadiabatic molecular dynamics, J. Chem. Phys., 2014, 141(5), 054110. 11. D. V. Makhov, K. Saita, T. J. Martinez and D. V. Shalashilin, Ab initio multiple cloning simulations of pyrrole photodissociation: TKER spectra and velocity map imaging, Phys. Chem. Chem. Phys., 2015, 17, 3316. 12. D. V. Shalashilin and M. S. Child, Basis set sampling in the method of coupled coherent states: coherent state swarms, trains and pancakes, J. Chem. Phys., 2008, 128, 054102. 13. M. Ben-Nun and T. J. Martinez, Exploiting Temporal Non-Locality to Remove Scaling Bottlenecks in Nonadiabatic Quantum Dynamics, J. Chem. Phys., 1999, 110, 4134–4140. 14. G. M. Roberts, C. A. Williams, H. Yu, A. S. Chatterley, J. D. Young, S. Ullrich and V. G. Stavros, Probing ultrafast dynamics in photoexcited pyrrole: timescales for ¹πσ*-mediated H-atom elimination, Faraday Discuss., 2013, 163, 95–116. 15. M. Ben-Nun and T. J. Martínez, A Continuous Spawning Method for Nonadiabatic Dynamics and Validation for the Zero-Temperature Spin-Boson Problem, Isr. J. Chem., 2007, 47, 75–88. 16. S. Yang, J. D. Coe, B. Kaduk and T. J. Martínez, An "Optimal" Spawning Algorithm for Adaptive Basis Set Expansion in Nonadiabatic Dynamics, J. Chem. Phys., 2009, 130, 134113. 17. M. Ben-Nun and T. J. Martinez, Semiclassical tunneling rates from ab initio molecular dynamics, J. Phys. Chem. A, 1999, 103(31), 6055–6059. 18. M. Ben-Nun and T. J. Martínez, A Multiple Spawning Approach to Tunneling Dynamics, J. Chem. Phys., 2000, 112, 6113–6121. 19. B. G. Levine, J. D. Coe, A. M. Virshup and T. J. Martinez, Implementation of ab initio multiple spawning in the Molpro quantum chemistry package, Chem. Phys., 2008, 347(1), 3–16. 20. A. L. Thompson, C. Punwong and T. J. Martinez, Optimization of width parameters for quantum dynamics with frozen Gaussian basis sets, Chem. Phys., 2010, 370, 70–77.
49cbfb6a3fb530a1
Lessons from the quantum control landscape: Robust optimal control of quantum systems and optimal control of nonlinear Schrödinger equations

Authors: Hocker, David Lance
Advisors: Rabitz, Herschel A
Contributors: Chemistry Department
Keywords: Bose-Einstein condensates; Nonlinear Schrödinger equations; Quantum computing; Quantum control; Quantum information processing; Robust control
Subjects: Physical chemistry; Quantum physics
Issue Date: 2016
Publisher: Princeton, NJ: Princeton University
Type of Material: Academic dissertations (Ph.D.)
Language: en

Abstract: The control of quantum systems occurs across a broad range of length and energy scales in modern science, and locating suitable controls to perform a range of objectives has proven widely successful. The justification for this success arises from a favorable topology of the quantum control landscape, defined as a mapping of the controls to a cost function measuring the success of the operation. This is summarized in the landscape principle that no suboptimal extrema exist on the landscape for well-suited control problems, explaining a trend of successful optimizations in both theory and experiment. This dissertation explores what additional lessons may be gleaned from the quantum control landscape through numerical and theoretical studies. The first topic examines the experimentally relevant problem of assessing and reducing disturbances due to noise. The local curvature of the landscape is found to play an important role in noise effects on the control of targeted quantum unitary operations, and provides a conceptual framework for assessing robustness to noise. Software for assessing noise effects in quantum computing architectures was also developed and applied to survey the performance of current quantum control techniques for quantum computing. A lack of competition between robustness and perfect unitary control operation was discovered to fundamentally limit noise effects, highlighting a renewed focus upon system engineering for reducing noise. This convergent behavior generally arises for any secondary objective in the situation of high primary objective fidelity. The other dissertation topic examines the utility of quantum control for a class of nonlinear Hamiltonians not previously considered under the landscape principle. Nonlinear Schrödinger equations are commonly used to model the dynamics of Bose-Einstein condensates (BECs), one of the largest known quantum objects. Optimizations of BEC dynamics were performed in which the nonlinearity itself was harnessed as a control, leading to successful optimization of coherent mode-to-mode transformations. Such success strengthens further extension of the landscape principle to wider classes of control.
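As a toy illustration of the landscape principle summarized in this abstract (not the dissertation's machinery), one can hill-climb a piecewise-constant control for a two-level system toward a target gate; for such controllable problems even a naive local search typically reaches near-unit fidelity, consistent with a trap-free landscape. All model choices below (Hamiltonian, pulse discretization, search strategy) are illustrative assumptions.

```python
import numpy as np

# Pauli matrices; the target gate is a bit flip (sigma_x).
sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
U_target = sx

def propagate(controls, dt=0.1):
    """Evolve under H(t) = sz + u(t)*sx with piecewise-constant u(t)."""
    U = np.eye(2, dtype=complex)
    for u in controls:
        w, V = np.linalg.eigh(sz + u*sx)
        U = V @ np.diag(np.exp(-1j*w*dt)) @ V.conj().T @ U
    return U

def fidelity(controls):
    """Normalized gate fidelity |Tr(U_target^dag U)|^2 / d^2, in [0, 1]."""
    return abs(np.trace(U_target.conj().T @ propagate(controls)))**2 / 4.0

rng = np.random.default_rng(0)
u = rng.normal(size=40)                  # initial random pulse
f = fidelity(u)
for it in range(5000):                   # crude stochastic hill climb
    trial = u + 0.05*rng.normal(size=u.size)
    ft = fidelity(trial)
    if ft > f:
        u, f = trial, ft
print("final gate fidelity:", round(f, 4))   # typically climbs toward 1
```

That a purely local search suffices here is the point of the landscape picture: for well-suited control problems there are no suboptimal traps to get stuck in.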
aac37e09e9d79a3b
Born–Oppenheimer approximation

In quantum chemistry and molecular physics, the Born–Oppenheimer (BO) approximation is the assumption that the motion of atomic nuclei and electrons in a molecule can be separated. The approach is named after Max Born and J. Robert Oppenheimer. In mathematical terms, it allows the wavefunction of a molecule to be broken into its electronic and nuclear (vibrational, rotational) components. Computation of the energy and the wavefunction of an average-size molecule is simplified by the approximation. For example, the benzene molecule consists of 12 nuclei and 42 electrons. The time-independent Schrödinger equation, which must be solved to obtain the energy and wavefunction of this molecule, is a partial differential eigenvalue equation in 162 variables, the spatial coordinates of the electrons and the nuclei. The BO approximation makes it possible to compute the wavefunction in two less complicated consecutive steps. This approximation was proposed in 1927, in the early period of quantum mechanics, by Born and Oppenheimer and is still indispensable in quantum chemistry.

In the first step of the BO approximation the electronic Schrödinger equation is solved, yielding a wavefunction depending on electrons only. For benzene this wavefunction depends on 126 electronic coordinates. During this solution the nuclei are fixed in a certain configuration, very often the equilibrium configuration. If the effects of the quantum mechanical nuclear motion are to be studied, for instance because a vibrational spectrum is required, this electronic computation must be repeated for many different nuclear configurations. In the second step of the BO approximation this function of the nuclear coordinates serves as a potential in a Schrödinger equation containing only the nuclei, which for benzene is an equation in 36 variables.

The success of the BO approximation is due to the difference between nuclear and electronic masses. The approximation is an important tool of quantum chemistry; without it only the lightest molecule, H2, could be handled, and all computations of molecular wavefunctions for larger molecules make use of it. Even in the cases where the BO approximation breaks down, it is used as a point of departure for the computations. The electronic energies consist of kinetic energies, interelectronic repulsions, internuclear repulsions, and electron–nuclear attractions. In accord with the Hellmann–Feynman theorem, the nuclear potential is taken to be an average over electron configurations of the sum of the electron–nuclear and internuclear electric potentials. In molecular spectroscopy, because the periods of the electronic, vibrational and rotational motions are separated from one another by scales on the order of a thousand, the Born–Oppenheimer name has also been attached to the approximation where the energy components are treated separately. The nuclear spin energy is so small that it is normally omitted.

Short description

The Born–Oppenheimer (BO) approximation is ubiquitous in quantum chemical calculations of molecular wavefunctions. It consists of two steps. In the first step the nuclear kinetic energy is neglected,[1] that is, the corresponding operator Tn is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian He the nuclear positions enter as parameters.
The electron–nucleus interactions are not removed, and the electrons still "feel" the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the clamped nuclei approximation.) The electronic Schrödinger equation

He(r; R) χ(r; R) = Ee(R) χ(r; R)

is solved (out of necessity, approximately). The quantity r stands for all electronic coordinates and R for all nuclear coordinates. The electronic energy eigenvalue Ee depends on the chosen positions R of the nuclei. Varying these positions R in small steps and repeatedly solving the electronic Schrödinger equation, one obtains Ee as a function of R. This is the potential energy surface (PES): Ee(R). Because this procedure of recomputing the electronic wave functions as a function of an infinitesimally changing nuclear geometry is reminiscent of the conditions for the adiabatic theorem, this manner of obtaining a PES is often referred to as the adiabatic approximation, and the PES itself is called an adiabatic surface.[2]

In the second step of the BO approximation the nuclear kinetic energy Tn (containing partial derivatives with respect to the components of R) is reintroduced, and the Schrödinger equation for the nuclear motion[3]

[Tn + Ee(R)] φ(R) = E φ(R)

is solved. This second step of the BO approximation involves separation of vibrational, translational, and rotational motions. This can be achieved by application of the Eckart conditions. The eigenvalue E is the total energy of the molecule, including contributions from electrons, nuclear vibrations, and overall rotation and translation of the molecule.

Derivation of the Born–Oppenheimer approximation

It will be discussed how the BO approximation may be derived and under which conditions it is applicable. At the same time we will show how the BO approximation may be improved by including vibronic coupling. To that end the second step of the BO approximation is generalized to a set of coupled eigenvalue equations depending on nuclear coordinates only. Off-diagonal elements in these equations are shown to be nuclear kinetic energy terms. It will be shown that the BO approximation can be trusted whenever the PESs, obtained from the solution of the electronic Schrödinger equation, are well separated:

E0(R) ≪ E1(R) ≪ E2(R) ≪ ⋯ for all R.

We start from the exact non-relativistic, time-independent molecular Hamiltonian H = He + Tn, with (in atomic units)

He = −Σi ½∇i² − Σi,A ZA/riA + Σi>j 1/rij + ΣB>A ZAZB/RAB   and   Tn = −ΣA (1/(2MA)) ∇A²

The position vectors ri of the electrons and the position vectors RA of the nuclei are with respect to a Cartesian inertial frame. Distances between particles are written as riA ≡ |ri − RA| (the distance between electron i and nucleus A), and similar definitions hold for rij and RAB. We assume that the molecule is in a homogeneous (no external force) and isotropic (no external torque) space. The only interactions are the two-body Coulomb interactions among the electrons and nuclei. Because the Hamiltonian is expressed in atomic units, we do not see Planck's constant, the dielectric constant of the vacuum, the electronic charge, or the electronic mass in this formula. The only constants explicitly entering the formula are ZA and MA, the atomic number and mass of nucleus A. Suppose we have K electronic eigenfunctions χk(r; R) of He, that is, we have solved

He χk(r; R) = Ek(R) χk(r; R),   k = 1, …, K.

The electronic wave functions χk will be taken to be real, which is possible when there are no magnetic or spin interactions. The parametric dependence of the functions χk on the nuclear coordinates is indicated by the symbol after the semicolon. This indicates that, although χk is a real-valued function of r, its functional form depends on R.
For example, in the molecular-orbital-linear-combination-of-atomic-orbitals (LCAO-MO) approximation, χk is a molecular orbital (MO) given as a linear expansion of atomic orbitals (AOs). An AO depends visibly on the coordinates of an electron, but the nuclear coordinates are not explicit in the MO. However, upon change of geometry, i.e., change of R, the LCAO coefficients obtain different values and we see corresponding changes in the functional form of the MO χk. We will assume that the parametric dependence is continuous and differentiable, so that it is meaningful to consider

∇A χk(r; R)

which in general will not be zero. The total wave function Ψ is expanded in terms of the χk(r; R):

Ψ(r, R) = Σk χk(r; R) φk(R)

with

⟨χk′(r; R) | χk(r; R)⟩(r) = δk′k

where the subscript (r) indicates that the integration, implied by the bra–ket notation, is over electronic coordinates only. By definition, the matrix with general element

(He)k′k ≡ ⟨χk′ | He | χk⟩(r) = δk′k Ek(R)

is diagonal. After multiplication by the real function χk′(r; R) from the left and integration over the electronic coordinates r, the total Schrödinger equation HΨ = EΨ is turned into a set of K coupled eigenvalue equations depending on nuclear coordinates only:

[Hn(R) + E(R)] φ(R) = E φ(R)

The column vector φ(R) has elements φk(R). The matrix E(R) is diagonal, with the adiabatic energies Ek(R) on the diagonal, and the nuclear Hamilton matrix Hn(R) is non-diagonal, with the following off-diagonal (vibronic coupling) terms:

(Hn(R))k′k = −ΣA (1/(2MA)) [ 2⟨χk′|∇Aχk⟩(r)·∇A + ⟨χk′|∇A²χk⟩(r) ]   (k′ ≠ k)

The vibronic coupling in this approach is through nuclear kinetic energy terms. Solution of these coupled equations gives an approximation for energy and wavefunction that goes beyond the Born–Oppenheimer approximation. Unfortunately, the off-diagonal kinetic energy terms are usually difficult to handle. This is why often a diabatic transformation is applied, which retains part of the nuclear kinetic energy terms on the diagonal, removes the kinetic energy terms from the off-diagonal and creates coupling terms between the adiabatic PESs on the off-diagonal. If we can neglect the off-diagonal elements, the equations will uncouple and simplify drastically. In order to show when this neglect is justified, we suppress the coordinates in the notation and write, by applying the Leibniz rule for differentiation, the matrix elements of Tn as

Tn(k′k) ≡ (Hn)k′k = δk′k Tn − ΣA (1/(2MA)) [ 2⟨χk′|∇Aχk⟩·∇A + ⟨χk′|∇A²χk⟩ ]

The diagonal (k′ = k) matrix elements ⟨χk|∇Aχk⟩ of the operator ∇A vanish, because we assume the molecule to be time-reversal invariant, so χk can be chosen to be always real. The off-diagonal matrix elements satisfy

⟨χk′|∇Aχk⟩ = ⟨χk′| [∇A, He] |χk⟩ / (Ek(R) − Ek′(R))

The matrix element in the numerator is

⟨χk′| [∇A, He] |χk⟩ = ZA Σi ⟨χk′| (ri − RA)/riA³ |χk⟩

The matrix element of the one-electron operator appearing on the right-hand side is finite. When the two surfaces come close, Ek(R) ≈ Ek′(R), the nuclear momentum coupling term becomes large and is no longer negligible. This is the case where the BO approximation breaks down, and a coupled set of nuclear motion equations must be considered instead of the one equation appearing in the second step of the BO approximation. Conversely, if all surfaces are well separated, all off-diagonal terms can be neglected, and hence the whole matrix of ⟨χk′|∇Aχk⟩ is effectively zero. The third term on the right-hand side of the expression for the matrix element of Tn (the Born–Oppenheimer diagonal correction) can approximately be written as the matrix of ⟨χk′|∇Aχk⟩ squared and, accordingly, is then negligible also. Only the first (diagonal) kinetic energy term in this equation survives in the case of well-separated surfaces, and a diagonal, uncoupled set of nuclear motion equations results, which are the normal second-step BO equations discussed above. We reiterate that when two or more potential energy surfaces approach each other, or even cross, the Born–Oppenheimer approximation breaks down and one must fall back on the coupled equations.
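A two-state numerical toy model (an illustration under assumed parameters, not part of the article) shows this breakdown condition directly: diagonalizing a diabatic Hamiltonian with a linear crossing and constant coupling c at each nuclear geometry gives two adiabatic surfaces, and the first-derivative coupling ⟨χ1|∇χ2⟩, estimated by finite differences, peaks sharply where the gap is smallest (analytically, its maximum is 1/(2c) for this model).

```python
import numpy as np

# Diabatic two-state model: linear crossing with constant coupling c.
def H(R, c=0.05):
    return np.array([[R,  c],
                     [c, -R]])

Rs = np.linspace(-1.0, 1.0, 2001)
dR = Rs[1] - Rs[0]
E = np.empty((len(Rs), 2))
phi = np.empty((len(Rs), 2, 2))
for i, R in enumerate(Rs):
    w, v = np.linalg.eigh(H(R))
    # fix eigenvector signs so the parametric dependence on R is smooth
    if i and v[:, 0] @ phi[i-1][:, 0] < 0: v[:, 0] *= -1
    if i and v[:, 1] @ phi[i-1][:, 1] < 0: v[:, 1] *= -1
    E[i], phi[i] = w, v

# First-derivative coupling d12(R) = <phi1|d phi2/dR> by central differences
d12 = np.array([phi[i][:, 0] @ (phi[i+1][:, 1] - phi[i-1][:, 1]) / (2*dR)
                for i in range(1, len(Rs) - 1)])

gap = E[1:-1, 1] - E[1:-1, 0]
i = np.argmax(np.abs(d12))
print(f"min gap {gap.min():.3f} at R = {Rs[1 + np.argmin(gap)]:+.3f}; "
      f"|d12| peaks at R = {Rs[1 + i]:+.3f} with value {abs(d12[i]):.1f}")
```

The gap minimum (2c) and the coupling maximum coincide at the avoided crossing, which is exactly where the uncoupled (BO) nuclear equations cease to be trustworthy.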
Usually one then invokes the diabatic approximation.

The Born–Oppenheimer approximation with the correct symmetry

To include the correct symmetry within the Born–Oppenheimer (BO) approximation,[4][5] a molecular system presented in terms of (mass-dependent) nuclear coordinates q, and formed by the two lowest BO adiabatic potential energy surfaces (PES) u1(q) and u2(q), is considered. To ensure the validity of the BO approximation, the energy E of the system is assumed to be low enough so that u2(q) becomes a closed PES in the region of interest, with the exception of sporadic infinitesimal sites surrounding degeneracy points formed by u1(q) and u2(q) (designated as (1,2) degeneracy points). The starting point is the nuclear adiabatic BO (matrix) equation written in the form:[6]

−(ħ²/2m)(∇ + τ)²Ψ + (u − E)Ψ = 0

where Ψ is a column vector that contains the unknown nuclear wave functions ψk(q), u is a diagonal matrix that contains the corresponding adiabatic potential energy surfaces uk(q), m is the reduced mass of the nuclei, E is the total energy of the system, ∇ is the grad operator with respect to the nuclear coordinates q, and τ is a matrix that contains the vectorial non-adiabatic coupling terms (NACT):

τjk = ⟨ζj|∇ζk⟩

Here the |ζk⟩ are eigenfunctions of the electronic Hamiltonian, assumed to form a complete Hilbert space in the given region in configuration space. To study the scattering process taking place on the two lowest surfaces, one extracts from the above BO equation the two corresponding equations:

−(ħ²/2m)∇²ψ1 + (ũ1 − E)ψ1 − (ħ²/2m)[2τ·∇ + ∇·τ]ψ2 = 0
−(ħ²/2m)∇²ψ2 + (ũ2 − E)ψ2 + (ħ²/2m)[2τ·∇ + ∇·τ]ψ1 = 0

where ũk = uk + (ħ²/2m)τ² (k = 1, 2) and τ = τ12 is the (vectorial) NACT responsible for the coupling between u1(q) and u2(q). Next a new function is introduced:[7]

χ = ψ1 − iψ2

and the corresponding rearrangements are made: (i) multiplying the second equation by −i and combining it with the first equation yields the (complex) equation

−(ħ²/2m)∇²χ + (ũ1 − E)χ − i(ħ²/2m)[2τ·∇ + ∇·τ]χ + i(u1 − u2)ψ2 = 0

(ii) The last term in this equation can be deleted for the following reasons: at those points where u2(q) is classically closed, ψ2 ~ 0 by definition, and at those points where u2(q) becomes classically allowed (which happens in the vicinity of the (1,2) degeneracy points) this implies that u1(q) ~ u2(q), or u1(q) − u2(q) ~ 0. Consequently the last term is, indeed, negligibly small at every point in the region of interest, and the equation simplifies to become

−(ħ²/2m)∇²χ + (ũ1 − E)χ − i(ħ²/2m)[2τ·∇ + ∇·τ]χ = 0

In order for this equation to yield a solution with the correct symmetry, it is suggested to apply a perturbation approach based on an elastic potential u0(q), which coincides with u1(q) in the asymptotic region. The equation with the elastic potential can be solved, in a straightforward manner, by substitution. Thus, if χ0 is the solution of this equation, it is presented as

χ0(q|Γ) = ξ0(q) exp[ −i ∫Γ dq′·τ(q′) ]

where Γ is an arbitrary contour and the exponential function contains the relevant symmetry as created while moving along Γ. The function ξ0(q) can be shown to be a solution of the (unperturbed/elastic) equation

−(ħ²/2m)∇²ξ0 + (u0 − E)ξ0 = 0

Having χ0(q|Γ), the full solution of the above decoupled equation takes the form

χ(q|Γ) = χ0(q|Γ) + η(q|Γ)

where η(q|Γ) satisfies the resulting inhomogeneous equation

−(ħ²/2m)∇²η + (ũ1 − E)η − i(ħ²/2m)[2τ·∇ + ∇·τ]η = (u0 − u1)χ0

In this equation the inhomogeneity ensures the symmetry for the perturbed part of the solution along any contour, and therefore for the solution in the required region in configuration space. The relevance of the present approach was demonstrated while studying a two-arrangement-channel model (containing one inelastic channel and one reactive channel) for which the two adiabatic states were coupled by a Jahn–Teller conical intersection.[8][9] A nice fit between the symmetry-preserved, single-state treatment and the corresponding two-state treatment was obtained. This applies in particular to the reactive state-to-state probabilities (see Table III in Ref.
5b of the cited work and the corresponding table in Ref. 5a) for which the ordinary BO approximation led to erroneous results, whereas the symmetry-preserving BO approximation produced the accurate results, as they followed from solving the two coupled equations.

Notes and references

1. ^ This step is often justified by stating that "the heavy nuclei move more slowly than the light electrons." Classically this statement makes sense only if the momentum p of electrons and nuclei is of the same order of magnitude. In that case mnuc ≫ melec implies p²/(2mnuc) ≪ p²/(2melec). It is easy to show that for two bodies in circular orbits around their center of mass (regardless of individual masses), the momenta of the two bodies are equal and opposite, and that for any collection of particles in the center-of-mass frame, the net momentum is zero. Given that the center-of-mass frame is the lab frame (where the molecule is stationary), the momentum of the nuclei must be equal and opposite to that of the electrons. A hand-waving justification can be derived from quantum mechanics as well. Recall that the corresponding operators do not contain mass, and think of the molecule as a box containing the electrons and nuclei (see particle in a box). Since the kinetic energy is p²/(2m), it follows that, indeed, the kinetic energy of the nuclei in a molecule is usually much smaller than the kinetic energy of the electrons, the mass ratio being on the order of 10⁴.
2. ^ It is assumed, in accordance with the adiabatic theorem, that the same electronic state (for instance the electronic ground state) is obtained upon small changes of the nuclear geometry. The method would give a discontinuity (jump) in the PES if electronic state-switching were to occur.
3. ^ This equation is time-independent, and stationary wavefunctions for the nuclei are obtained; nevertheless, it is traditional to use the word "motion" in this context, although classically motion implies time dependence.
4. ^ Max Born and J. Robert Oppenheimer (1927). "Zur Quantentheorie der Molekeln" [On the Quantum Theory of Molecules]. Annalen der Physik (in German). 389 (20): 457–484. doi:10.1002/andp.19273892002.
5. ^ M. Born and K. Huang, Dynamical Theory of Crystal Lattices, 1954 (Oxford University Press, New York), Chapter IV.
6. ^ M. Baer, Beyond Born–Oppenheimer: Electronic Non-Adiabatic Coupling Terms and Conical Intersections, 2006 (Wiley and Sons, Inc., Hoboken, N.J.), Chapter 2.
7. ^ M. Baer and R. Englman, Chem. Phys. Lett. 265, 105 (1997).
8. ^ (a) R. Baer, D. M. Charutz, R. Kosloff and M. Baer, J. Chem. Phys. 111, 9141 (1996); (b) S. Adhikari and G. D. Billing, J. Chem. Phys. 111, 40 (1999).
9. ^ D. M. Charutz, R. Baer and M. Baer, Chem. Phys. Lett. 265, 629 (1996).
Stochastic differential equation

A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is itself a stochastic process. SDEs are used to model various phenomena such as unstable stock prices or physical systems subject to thermal fluctuations. Typically, SDEs contain a variable which represents random white noise calculated as the derivative of Brownian motion or the Wiener process. However, other types of random behaviour are possible, such as jump processes.

Early work on SDEs was done to describe Brownian motion in Einstein's famous paper, and at the same time by Smoluchowski. However, one of the earlier works related to Brownian motion is credited to Bachelier (1900) in his thesis 'Theory of Speculation'. This work was followed up on by Langevin. Later Itô and Stratonovich put SDEs on more solid mathematical footing.

In physical science, SDEs are usually written as Langevin equations. These are sometimes ambiguously called "the Langevin equation" even though there are many possible forms. They consist of an ordinary differential equation containing a deterministic function and an additional random white-noise term. A second form is the Smoluchowski equation or, more generally, the Fokker–Planck equation; these are partial differential equations that describe the time evolution of probability distribution functions. The third form is the Itô stochastic differential equation, which is most frequently used in mathematics and quantitative finance. This is similar to the Langevin form, but it is usually written in differential notation. SDEs come in two varieties, corresponding to two versions of stochastic calculus.

Stochastic calculus

Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is almost surely nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. Each of the two has advantages and disadvantages, and newcomers are often confused whether one is more appropriate than the other in a given situation. Guidelines exist (e.g. Øksendal, 2003), and conveniently one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down.

Numerical solutions

Numerical solution of stochastic differential equations, and especially stochastic partial differential equations, is a relatively young field. Almost all algorithms used for the solution of ordinary differential equations work very poorly for SDEs, exhibiting very poor numerical convergence. A textbook describing many different algorithms is Kloeden & Platen (1995). Methods include the Euler–Maruyama method, the Milstein method and the Runge–Kutta method (SDE).

Use in physics

In physics, SDEs are typically written in the Langevin form and referred to as "the Langevin equation." For example, a general coupled set of first-order SDEs is often written in the form

$$\frac{dx_i}{dt} = f_i(\mathbf{x}) + \sum_{m} g_i^m(\mathbf{x})\,\eta_m(t),$$

where $\mathbf{x} = \{x_i\}$ is the set of unknowns, the $f_i$ and $g_i^m$ are arbitrary functions, and the $\eta_m(t)$ are random functions of time, often referred to as "noise terms".
This form is usually usable because there are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. If the $g_i^m$ are constants, the system is said to be subject to additive noise; otherwise it is said to be subject to multiplicative noise. This term is somewhat misleading, as it has come to mean the general case even though it appears to imply the limited case in which $g(x) \propto x$.

Additive noise is the simpler of the two cases; in that situation the Langevin equation has only one natural notion of solution and, in the linear case, this solution has an explicit expression found using the ordinary rules of calculus, as if the $\eta_m(t)$ were ordinary functions. However, in the case of multiplicative noise, the Langevin equation is not a well-defined entity on its own, and it must be specified whether the Langevin equation should be interpreted as an Itô SDE or a Stratonovich SDE.

In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation. It tells how the probability distribution function evolves in time, similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of a chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include path integration, which draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker–Planck equation can be transformed into the Schrödinger equation by rescaling a few variables), or writing down ordinary differential equations for the statistical moments of the probability distribution function.[citation needed]

Use in probability and mathematical finance

The notation used in probability theory (and in many applications of probability theory, for instance mathematical finance) is slightly different. This notation makes the exotic nature of the random function of time $\eta_m$ in the physics formulation more explicit. It is also the notation used in publications on numerical methods for solving stochastic differential equations. In strict mathematical terms, $\eta_m$ cannot be chosen as an ordinary function, but only as a generalized function. The mathematical formulation treats this complication with less ambiguity than the physics formulation.

A typical equation is of the form

$$dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dB_t,$$

where $B$ denotes a Wiener process (standard Brownian motion). This equation should be interpreted as an informal way of expressing the corresponding integral equation

$$X_{t+s} - X_t = \int_t^{t+s} \mu(X_u, u)\,du + \int_t^{t+s} \sigma(X_u, u)\,dB_u.$$

The equation above characterizes the behavior of the continuous-time stochastic process $X_t$ as the sum of an ordinary Lebesgue integral and an Itô integral. A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length δ the stochastic process $X_t$ changes its value by an amount that is normally distributed with expectation $\mu(X_t, t)\,\delta$ and variance $\sigma(X_t, t)^2\,\delta$, and is independent of the past behavior of the process. This is so because the increments of a Wiener process are independent and normally distributed. The function μ is referred to as the drift coefficient, while σ is called the diffusion coefficient. The stochastic process $X_t$ is called a diffusion process, and is usually a Markov process.
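The heuristic interpretation above maps directly onto the Euler–Maruyama method mentioned earlier: each step adds a drift term μδ and a normal increment with variance σ²δ. A minimal sketch in Python (assuming NumPy; the choice of geometric Brownian motion as the test equation and all parameter values are illustrative assumptions of ours), comparing the scheme against the exact solution evaluated along the same Brownian path:

    import numpy as np

    rng = np.random.default_rng(0)

    # Geometric Brownian motion: dX_t = mu*X_t dt + sigma*X_t dB_t
    mu, sigma, X0 = 0.7, 0.3, 1.0
    T, N = 1.0, 1000
    dt = T / N

    dB = rng.normal(0.0, np.sqrt(dt), N)  # independent N(0, dt) Wiener increments
    B = np.cumsum(dB)                     # Brownian path sampled on the grid

    # Euler-Maruyama: X_{n+1} = X_n + mu(X_n) dt + sigma(X_n) dB_n
    X = np.empty(N + 1)
    X[0] = X0
    for n in range(N):
        X[n + 1] = X[n] + mu * X[n] * dt + sigma * X[n] * dB[n]

    # Exact Ito solution along the same path: X_t = X0 exp((mu - sigma^2/2) t + sigma B_t)
    t = np.linspace(dt, T, N)
    X_exact = X0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * B)

    print(np.max(np.abs(X[1:] - X_exact)))  # pathwise error; shrinks as dt -> 0

Refining the grid (larger N) shrinks the pathwise error, illustrating the strong convergence of the scheme.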
The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution. Both require the existence of a process $X_t$ that solves the integral equation version of the SDE. The difference between the two lies in the underlying probability space $(\Omega, \mathcal{F}, \Pr)$. A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space.

An important example is the equation for geometric Brownian motion,

$$dX_t = \mu X_t\,dt + \sigma X_t\,dB_t,$$

which is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model of financial mathematics.

There are also more general stochastic differential equations where the coefficients μ and σ depend not only on the present value of the process $X_t$, but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, X, is not a Markov process, and it is called an Itô process and not a diffusion process. When the coefficients depend only on present and past values of X, the defining equation is called a stochastic delay differential equation.

Existence and uniqueness of solutions

As with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. The following is a typical existence and uniqueness theorem for Itô SDEs taking values in n-dimensional Euclidean space $\mathbb{R}^n$ and driven by an m-dimensional Brownian motion B; the proof may be found in Øksendal (2003, §5.2).

Let T > 0, and let

$$\mu : \mathbb{R}^n \times [0, T] \to \mathbb{R}^n, \qquad \sigma : \mathbb{R}^n \times [0, T] \to \mathbb{R}^{n \times m}$$

be measurable functions for which there exist constants C and D such that

$$|\mu(x, t)| + |\sigma(x, t)| \le C\,(1 + |x|),$$
$$|\mu(x, t) - \mu(y, t)| + |\sigma(x, t) - \sigma(y, t)| \le D\,|x - y|,$$

for all t ∈ [0, T] and all x and y ∈ $\mathbb{R}^n$, where $|\sigma|^2 = \sum_{i,j} |\sigma_{ij}|^2$. Let Z be a random variable that is independent of the σ-algebra generated by $B_s$, s ≥ 0, and with finite second moment:

$$\mathbb{E}\big[|Z|^2\big] < +\infty.$$

Then the stochastic differential equation / initial value problem

$$dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dB_t, \qquad X_0 = Z,$$

has a Pr-almost surely unique t-continuous solution $(t, \omega) \mapsto X_t(\omega)$ such that X is adapted to the filtration $\mathcal{F}_t^Z$ generated by Z and $B_s$, s ≤ t, and

$$\mathbb{E}\left[\int_0^T |X_t|^2\,dt\right] < +\infty.$$

Some explicitly solvable SDEs[1]

Linear SDE: general case

$$dX_t = \big(a(t)\,X_t + c(t)\big)\,dt + \big(b(t)\,X_t + d(t)\big)\,dB_t$$

has the solution

$$X_t = \Phi_t\left(X_0 + \int_0^t \Phi_s^{-1}\,\big(c(s) - b(s)\,d(s)\big)\,ds + \int_0^t \Phi_s^{-1}\,d(s)\,dB_s\right),$$

where

$$\Phi_t = \exp\left(\int_0^t \Big(a(s) - \tfrac{1}{2}\,b(s)^2\Big)\,ds + \int_0^t b(s)\,dB_s\right).$$

Reducible SDEs: Case 1

$$dX_t = \tfrac{1}{2}\,f(X_t)\,f'(X_t)\,dt + f(X_t)\,dB_t$$

for a given differentiable function f is equivalent to the Stratonovich SDE

$$dX_t = f(X_t) \circ dB_t,$$

which has the general solution

$$X_t = h^{-1}\big(h(X_0) + B_t\big), \qquad h(x) = \int^x \frac{ds}{f(s)}.$$

Reducible SDEs: Case 2

$$dX_t = \Big(\alpha\,f(X_t) + \tfrac{1}{2}\,f(X_t)\,f'(X_t)\Big)\,dt + f(X_t)\,dB_t$$

for a given differentiable function f is equivalent to the Stratonovich SDE

$$dX_t = \alpha\,f(X_t)\,dt + f(X_t) \circ dB_t,$$

which is reducible to

$$dY_t = \alpha\,dt + dB_t,$$

where $Y_t = h(X_t)$ with h defined as before. Its general solution is

$$X_t = h^{-1}\big(h(X_0) + \alpha t + B_t\big).$$

References

1. ^ Kloeden & Platen 1995, p. 118.

Further reading

• Adomian, George (1983). Stochastic Systems. Mathematics in Science and Engineering (169). Orlando, FL: Academic Press.
• Adomian, George (1986). Nonlinear Stochastic Operator Equations. Orlando, FL: Academic Press.
• Adomian, George (1989). Nonlinear Stochastic Systems Theory and Applications to Physics. Mathematics and Its Applications (46). Dordrecht: Kluwer Academic Publishers.
• Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications. Berlin: Springer. ISBN 3-540-04758-1.
• Teugels, J. and Sundt, B. (eds.) (2004). Encyclopedia of Actuarial Science. Chichester: Wiley. pp. 523–527.
• Gardiner, C. W. (2004). Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences. Springer. p. 415.
• Mikosch, Thomas (1998). Elementary Stochastic Calculus: with Finance in View. Singapore: World Scientific Publishing. p. 212.
ISBN 981-02-3543-7.
• Kadry, Seifedine (2007). "A Solution of Linear Stochastic Differential Equation". WSEAS Transactions on Mathematics, April 2007. p. 618. ISSN 1109-2769.
• Bachelier, L. (1900). Théorie de la spéculation (in French), PhD thesis. NUMDAM. In English in the 1971 book The Random Character of the Stock Market, ed. P. H. Cootner.
• Kloeden, P. E. and Platen, E. (1995). Numerical Solution of Stochastic Differential Equations. Springer.
• Higham, Desmond J. (January 2001). "An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations". SIAM Review 43 (3): 525–546. Bibcode:2001SIAMR..43..525H. doi:10.1137/S0036144500378302.
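The reducible Case 1 above is easy to sanity-check numerically. In the following sketch (Python with NumPy; the choice f(x) = 1 + x², hence h(x) = arctan x, and all parameter values are illustrative assumptions, not from any of the references above), an Euler–Maruyama integration of the Itô form is compared with the closed-form solution $X_t = h^{-1}(h(X_0) + B_t)$ evaluated on the same Brownian path:

    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative choice: f(x) = 1 + x^2, so h(x) = arctan(x) and the
    # predicted general solution is X_t = tan(arctan(X_0) + B_t).
    f = lambda x: 1.0 + x * x
    fp = lambda x: 2.0 * x           # f'(x)

    X0, T, N = 0.0, 0.1, 100000      # short horizon keeps tan() away from its poles
    dt = T / N
    dB = rng.normal(0.0, np.sqrt(dt), N)

    X = X0
    for db in dB:                    # Euler-Maruyama on dX = (1/2) f f' dt + f dB
        X += 0.5 * f(X) * fp(X) * dt + f(X) * db

    B_T = dB.sum()
    print(X, np.tan(np.arctan(X0) + B_T))  # the two values should nearly agree

The agreement of the two printed values (up to discretization error) reflects the equivalence between the Itô drift ½ f f′ and the Stratonovich form f ∘ dB used in the reduction.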
How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival [Excerpt]

This book excerpt traces the history of quantum information theory and the colorful and famous physicists who tried to figure out "spooky action at a distance"

W. W. Norton & Company

Editor's Note: Reprinted from How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival by David Kaiser. Copyright (c) 2011 by David Kaiser. Used with permission of the publisher, W.W. Norton & Company, Inc.

[from Chapter 2, pp. 25-38:]

The iconoclastic Irish physicist John S. Bell had long nursed a private disquietude with quantum mechanics. His physics teachers—first at Queen's University in his native Belfast during the late 1940s, and later at Birmingham University, where he pursued doctoral work in the mid-1950s—had shunned matters of interpretation. The "ask no questions" attitude frustrated Bell, who remained unconvinced that Niels Bohr had really vanquished the last of Einstein's critiques long ago and that there was nothing left to worry about. At one point in his undergraduate studies, his red shock of hair blazing, he even engaged in a shouting match with a beleaguered professor, calling him "dishonest" for trying to paper over genuine mysteries in the foundations, such as how to interpret the uncertainty principle. Certainly, Bell would grant, quantum mechanics worked impeccably "for all practical purposes," a phrase he found himself using so often that he coined the acronym, "FAPP." But wasn't there more to physics than FAPP? At the end of the day, after all the wavefunctions had been calculated and probabilities plotted, shouldn't quantum mechanics have something coherent to say about nature?

In the years following his impetuous shouting matches, Bell tried to keep these doubts to himself. At the tender age of twenty-one he realized that if he continued to indulge these philosophical speculations, they might well scuttle his physics career before it could even begin. He dove into mainstream topics, working on nuclear and particle physics at Harwell, Britain's civilian atomic energy research center. Still, his mind continued to wander. He wondered whether there were some way to push beyond the probabilities offered by quantum theory, to account for motion in the atomic realm more like the way Newton's physics treated the motion of everyday objects. In Newton's physics, the behavior of an apple or a planet was completely determined by its initial state—variables like position (where it was) and momentum (where it was going)—and the forces acting upon it; no probabilities in sight. Bell wondered whether there might exist some set of variables that could be added to the quantum-mechanical description to make it more like Newton's system, even if some of those new variables remained hidden from view in any given experiment.

Bell avidly read a popular account of quantum theory by one of its chief architects, Max Born's Natural Philosophy of Cause and Chance (1949), in which he learned that some of Born's contemporaries had likewise tried to invent such "hidden variables" schemes back in the late 1920s. But Bell also read in Born's book that another great of the interwar generation, the Hungarian mathematician and physicist John von Neumann, had published a proof as early as 1932 demonstrating that hidden variables could not be made compatible with quantum mechanics.
Bell, who could not read German, did not dig up von Neumann's recondite proof. The say-so of a leader (and soon-to-be Nobel laureate) like Born seemed like reason enough to drop the idea. Imagine Bell's surprise, therefore, when a year or two later he read a pair of articles in the Physical Review by the American physicist David Bohm. Bohm had submitted the papers from his teaching post at Princeton University in July 1951; by the time they appeared in print six months later, he had landed in São Paulo, Brazil, following his hounding by the House Un-American Activities Committee.

Bohm had been a graduate student under J. Robert Oppenheimer at Berkeley in the late 1930s and early 1940s. Along with several like-minded friends, he had participated in free-wheeling discussion groups about politics, worldly affairs, and local issues like whether workers at the university's laboratory should be unionized. He even joined the local branch of the Communist Party out of curiosity, but he found the discussions so boring and ineffectual that he quit a short time later. Such discussions might have seemed innocuous during ordinary times, but investigators from the Military Intelligence Division thought otherwise once the United States entered World War II, and Bohm and his discussion buddies started working on the earliest phases of the Manhattan Project to build an atomic bomb. Military intelligence officers kept the discussion groups under top-secret surveillance, and in the investigators' eyes the line between curious discussion group and Communist cell tended to blur. When later called to testify before HUAC, Bohm pleaded the Fifth Amendment rather than name names. Over the physics department's objections, Princeton's administration let his tenure-track contract lapse rather than reappoint him. At the center of a whirling media spectacle, Bohm found all other domestic options closed off. Reluctantly, he decamped for Brazil.

In the midst of the Sturm und Drang, Bohm crafted his own hidden variables interpretation of quantum mechanics. As Bell later reminisced, he had "seen the impossible done" in these papers by Bohm. Starting from the usual Schrödinger equation, but rewriting it in a novel way, Bohm demonstrated that the formalism need not be interpreted only in terms of probabilities. An electron, for example, might behave much like a bullet or billiard ball, following a path through space and time with well-defined values of position and momentum every step of the way. Given the electron's initial position and momentum and the forces acting on it, its future behavior would be fully determined, just like the case of the trusty billiard ball—although Bohm did have to introduce a new "quantum potential" or force field that had no analogue in classical physics. In Bohm's model, the quantum weirdness that had so captivated Bohr, Heisenberg, and the rest—and that had so upset young Bell, when parroted by his teachers—arose because certain variables, such as the electron's initial position, could never be specified precisely: efforts to measure the initial position would inevitably disturb the system. Thus physicists could not glean sufficient knowledge of all the relevant variables required to calculate a quantum object's path. The troubling probabilities of quantum mechanics, Bohm posited, sprang from averaging over the real-but-hidden variables.
Where Bohr and his acolytes had claimed that electrons simply did not possess complete sets of definite properties, Bohm argued that they did—but, as a practical matter, some remained hidden from view.

Bohm's papers fired Bell's imagination. Soon after discovering them, Bell gave a talk on Bohm's papers to the Theory Division at Harwell. Most of his listeners sat in stunned (or perhaps just bored) silence: why was this young physicist wasting their time on such philosophical drivel? Didn't he have any real work to do? One member of the audience, however, grew animated: Austrian émigré Franz Mandl. Mandl, who knew both German and von Neumann's classic study, interrupted several times; the two continued their intense arguments well after the seminar had ended. Together they began to reexamine von Neumann's no-hidden-variables proof, on and off when time allowed, until they each went their separate ways. Mandl left Harwell in 1958; Bell, dissatisfied with the direction in which the laboratory seemed to be heading, left two years later.

Bell and his wife Mary, also a physicist, moved to CERN, Europe's multinational high-energy physics laboratory that had recently been established in Geneva. Once again he pursued cutting-edge research in particle physics. And once again, despite his best efforts, he found himself pulled to his hobby: thinking hard about the foundations of quantum mechanics. Once settled in Geneva, he acquired a new sparring partner in Josef Jauch. Like Mandl, Jauch had grown up in the Continental tradition and was well versed in the finer points of Einstein's, Bohr's, and von Neumann's work. In fact, when Bell arrived in town Jauch was busy trying to strengthen von Neumann's proof that hidden-variables theories were irreconcilable with the successful predictions of quantum mechanics. To Bell, Jauch's intervention was like waving a red flag in front of a bull: it only intensified his resolve to demonstrate that hidden variables had not yet been ruled out. Spurred by these discussions, Bell wrote a review article on the topic of hidden variables, in which he isolated a logical flaw in von Neumann's famous proof. At the close of the paper, he noted that "the first ideas of this paper were conceived in 1952"—fourteen years before the paper was published—and thanked Mandl and Jauch for all of the "intensive discussion" they had shared over that long period.

Still Bell kept pushing, wondering whether a certain type of hidden variables theory, distinct from Bohm's version, might be compatible with ordinary quantum mechanics. His thoughts returned to the famous thought experiment introduced by Einstein and his junior colleagues Boris Podolsky and Nathan Rosen in 1935, known from the start by the authors' initials, "EPR." Einstein and company had argued that quantum mechanics must be incomplete: at least in some situations, definite values for pairs of variables could be determined at the same time, even though quantum mechanics had no way to account for or represent such values. The EPR authors described a source, such as a radioactive nucleus, that shot out pairs of particles with the same speed but in opposite directions. Call the left-moving particle, "A," and the right-moving particle, "B." A physicist could measure A's position at a given moment, and thereby deduce the value of B's position. Meanwhile, the physicist could measure B's momentum at that same moment, thus capturing knowledge of B's momentum and simultaneous position to any desired accuracy.
Yet Heisenberg's uncertainty principle dictated that precise values for certain pairs of variables, such as position and momentum, could never be known simultaneously. Fundamental to Einstein and company's reasoning was that quantum objects carried with them—on their backs, as it were—complete sets of definite properties at all times. Think again of that trusty billiard ball: it has a definite value of position and a definite value of momentum at any given moment, even if we choose to measure only one of those properties at a time. Einstein assumed the same must be true of electrons, photons, and the rest of the furniture of the microworld.

Bohr, in a hurried response to the EPR paper, argued that it was wrong to assume that particle B had a real value for position all along, prior to any effort to measure it. Quantum objects, in his view, simply did not possess sharp values for all properties at all times. Such values emerged during the act of measurement, and even Einstein had agreed that no device could directly measure a particle's position and momentum at the same time. Most physicists seemed content with Bohr's riposte—or, more likely, they were simply relieved that someone else had responded to Einstein's deep challenge. Bohr's response never satisfied Einstein, however; nor did it satisfy John Bell.

Bell realized that the intuition behind Einstein's famous thought experiment—the reason Einstein considered it so damning for quantum mechanics—concerned "locality." To Einstein, it was axiomatic that something that happens in one region of space and time should not be able to affect something happening in a distant region—more distant, say, than light could have traveled in the intervening time. As the EPR authors put it, "since at the time of measurement the two systems [particles A and B] no longer interact, no real change can take place in the second system in consequence of anything that may be done to the first system." Yet Bohr's response suggested something else entirely: the decision to conduct a measurement on particle A (either position or momentum) would instantaneously change the properties ascribed to the far-away particle B. Measure particle A's position, for example, and—bam!—particle B would be in a state of well-defined position. Or measure particle A's momentum, and—zap!—particle B would be in a state of well-defined momentum. Late in life, Bohr's line still rankled Einstein. "My instinct for physics bristles at this," Einstein wrote to a friend in March 1948. "Spooky actions at a distance," he huffed.

Fresh from his wrangles with Jauch, Bell returned to EPR's thought experiment. He wondered whether such "spooky actions at a distance" were endemic to quantum mechanics, or just one possible interpretation among many. Might some kind of hidden variable approach reproduce all the quantitative predictions of quantum theory, while still satisfying Einstein's (and Bell's) intuition about locality? He focused on a variation of EPR's set-up, introduced by David Bohm in his 1951 textbook on quantum mechanics. Bohm had suggested swapping the values of the particles' spins along the x- and y-axes for position and momentum. "Spin" is a curious property that many quantum particles possess; its discovery in the mid-1920s added a cornerstone to the emerging edifice of quantum mechanics. Quantum spin is a discrete amount of angular momentum—that is, the tendency to rotate around a given direction in space.
Of course many large-scale objects possess angular momentum, too: think of the planet Earth spinning around its axis to change night into day. Spin in the microworld, however, has a few quirks. For one thing, whereas large objects like the Earth can spin, in principle, at any rate whatsoever, quantum particles possess fixed amounts of it: either no spin at all, or one-half unit, or one whole unit, or three-halves units, and so on. The units are determined by a universal constant of nature known as Planck's constant, ubiquitous throughout the quantum realm. The particles that make up ordinary matter, such as electrons, protons, and neutrons, each possess one-half unit of spin; photons, or quanta of light, possess one whole unit of spin.

In a further break from ordinary angular momentum, quantum spin can only be oriented in certain ways. A spin one-half particle, for example, can exist in only one of two states: either spin "up" or spin "down" with respect to a given direction in space. The two states become manifest when a stream of particles passes through a magnetic field: spin-up particles will be deflected upward, away from their previous direction of flight, while spin-down particles will be deflected downward. Choose some direction along which to align the magnets—say, the z-axis—and the spin of any electron will only ever be found to be up or down; no electron will ever be measured as three-quarters "up" along that direction. Now rotate the magnets, so that the magnetic field is pointing along some different direction. Send a new batch of electrons through; once again you will only find spin up or spin down along that new direction. For spin one-half particles like electrons, the spin along a given direction is always either +1 (up) or -1 (down), nothing in between. (Fig. 2.1.)

No matter which way the magnets are aligned, moreover, one-half of the incoming electrons will be deflected upward and one-half downward. In fact, you could replace the collecting screen (such as a photographic plate) downstream of the magnets with two Geiger counters, positioned where the spin-up and spin-down particles get deflected. Then tune down the intensity of the source so that only one particle gets shot out at a time. For any given run, only one Geiger counter will click: either the upper one (indicating passage of a spin-up particle) or the lower one (indicating spin-down). Each particle has a 50-50 chance of being measured as spin-up or spin-down; the sequence of clicks would be a random series of +1's (upper counter) and -1's (lower counter), averaging out over many runs to an equal number of clicks from each detector. Neither quantum theory nor any other scheme has yet produced a successful means of predicting in advance whether a given particle will be measured as spin-up or spin-down; only the probabilities for a large number of runs can be computed.

Bell realized that Bohm's variation of the EPR thought experiment, involving particles' spins, offered two main advantages over EPR's original version. First, the measurements always boiled down to either a +1 or a -1; no fuzzy continuum of values to worry about, as there would be when measuring position or momentum. Second, physicists had accumulated decades of experience building real machines that could manipulate and measure particles' spin; as far as thought experiments went, this one could be grounded on some well-earned confidence. And so Bell began to analyze the spin-based EPR arrangement.
Because the particles emerged in a special way—spat out from a source that had zero spin before and after they were disgorged—the total spin of the two particles together likewise had to be zero. When measured along the same direction, therefore, their spins should always show perfect correlation: if A's spin were up then B's must be down, and vice versa. Back in the early days of quantum mechanics, Erwin Schrödinger had termed such perfect correlations "entanglement." Bell demonstrated that a hidden-variables model that satisfied locality—in which the properties of A remained unaffected by what measurements were conducted on B—could easily reproduce the perfect correlation when A's and B's spins were measured along the same direction. At root, this meant imagining that each particle carried with it a definite value of spin along any given direction, even if most of those values remained hidden from view. The spin values were considered to be properties of the particles themselves; they existed independent of and prior to any effort to measure them, just as Einstein would have wished.

Next Bell considered other possible arrangements. One could choose to measure a particle's spin along any direction: the z-axis, the y-axis, or any angle in between. All one had to do was rotate the magnets between which the particle passed. What if one measured A's spin along the z-axis and B's spin along some other direction? (Fig. 2.2.) Bell homed in on the expected correlations of spin measurements when shooting pairs of particles through the device, while the detectors on either side were oriented at various angles. He considered detectors that had two settings, or directions along which spin could be measured. Using only a few lines of algebra, Bell proved that no local hidden variables theory could ever reproduce the same degree of correlations as one varied the angles between detectors. The result has come to be known as "Bell's theorem." Simply assuming that each particle carried a full set of definite values on its own, prior to measurement—even if most of those values remained hidden from view—necessarily clashed with quantum theory.

Nonlocality was indeed endemic to quantum mechanics, Bell had shown: somehow, the outcome of the measurement on particle B depended on the measured outcome on particle A, even if the two particles were separated by huge distances at the time those measurements were made. Any effort to treat the particles (or measurements made upon them) as independent, subject only to local influences, necessarily led to different predictions than those of quantum mechanics. Here was what Bell had been groping for, on and off since his student days: some quantitative means of distinguishing Bohr's interpretation of quantum mechanics from other coherent, self-consistent possibilities. The problem—entanglement versus locality—was amenable to experimental test. In his bones he hoped locality would win.

In the years since Bell formulated his theorem, many physicists (Bell included) have tried to articulate what the violation of his inequality would mean, at a deep level, about the structure of the microworld. Most prosaically, entanglement suggests that on the smallest scales of matter, the whole is more than the sum of its parts. Put another way: one could know everything there is to know about a quantum system (particles A + B), and yet know nothing definite about either piece separately.
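[Editor's aside: Bell's "few lines of algebra" are easy to replay numerically. A minimal sketch in Python (assuming NumPy; the detector angles and the toy local model below are illustrative assumptions, not the book's). For the singlet state, quantum mechanics predicts the correlation E(a, b) = −cos(a − b) between the ±1 outcomes at settings a and b; the CHSH combination of four such correlations reaches magnitude 2√2, while any local assignment of outcomes fixed at the source stays within the bound of 2.]

    import numpy as np

    # Quantum prediction for the singlet state: E(a, b) = -cos(a - b)
    def E_qm(a, b):
        return -np.cos(a - b)

    # CHSH combination for two settings per side (angles chosen to maximize it)
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    S_qm = E_qm(a, b) - E_qm(a, b2) + E_qm(a2, b) + E_qm(a2, b2)
    print(abs(S_qm))  # ~2.828 = 2*sqrt(2), beyond the local bound of 2

    # A toy local hidden-variable model: each pair carries a shared angle lam
    # fixed at the source; each +/-1 outcome depends only on the local setting
    # and lam, exactly the kind of assignment Bell's theorem targets.
    rng = np.random.default_rng(7)
    lam = rng.uniform(0.0, 2.0 * np.pi, 500000)

    def E_lhv(x, y):
        A = np.sign(np.cos(x - lam))   # outcome at detector A
        B = -np.sign(np.cos(y - lam))  # outcome at detector B
        return float(np.mean(A * B))

    S_lhv = E_lhv(a, b) - E_lhv(a, b2) + E_lhv(a2, b) + E_lhv(a2, b2)
    print(abs(S_lhv))  # <= 2: the local model cannot reach 2*sqrt(2)

[The toy model reproduces the perfect anti-correlation at equal settings, yet falls short of the quantum correlations at intermediate angles—precisely the gap Bell's inequality quantifies.]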
As one expert in the field has written, entangled quantum systems are not even "divisible by thought": our natural inclination to analyze systems into subsystems, and to build up knowledge of the whole from careful study of its parts, grinds to a halt in the quantum domain. Physicists have gone to heroic lengths to translate quantum nonlocality into everyday terms. The literature is now full of stories about boxes that flash with red and green lights; disheveled physicists who stroll down the street with mismatched socks; clever Sherlock Holmes-inspired scenarios involving quantum robbers; even an elaborate tale of a baker, two long conveyor belts, and pairs of soufflés that may or may not rise.

My favorite comes from a "quantum-mechanical engineer" at MIT, Seth Lloyd. Imagine twins, Lloyd instructs us, separated a great distance apart. One steps into a bar in Cambridge, Massachusetts, just as her brother steps into a bar in Cambridge, England. Imagine further (and this may be the most difficult part) that neither twin has a cell phone or any other device with which to communicate back and forth. No matter what each bartender asks them, they will give opposite answers. "Beer or whiskey?" The Massachusetts twin might respond either way, with equal likelihood; but no matter which choice she makes, her twin brother an ocean away will respond with the opposite choice. (It's not that either twin has a decided preference; after many trips to their respective bars, they each wind up ordering beer and whiskey equally often.) The bartenders could equally well have asked, "Bottled beer or draft?" or "Red wine or white?" Ask any question—even a question that no one had decided to ask until long after the twins had traveled far, far away from each other—and you will always receive polar opposite responses. Somehow one twin always "knows" how to answer, even though no information could have traveled between them, in just such a way as to ensure the long-distance correlation.

[from Chapter 3, pp. 43-48:]

John Clauser sat through his courses on quantum mechanics as a graduate student at Columbia University in the mid-1960s, wondering when they would tackle the big questions. Like John Bell, Clauser quickly learned to keep his mouth shut and pursue his interests on the side. He buried himself in the library, poring over the EPR paper and Bohm's articles on hidden variables. Then in 1967 he stumbled upon Bell's paper in Physics Physique Fizika. The journal's strange title had caught his eye, and while lazily leafing through the first bound volume he happened to notice Bell's article. Clauser, a budding experimentalist, realized that Bell's theorem could be amenable to real-world tests in a laboratory. Excited, he told his thesis advisor about his find, only to be rebuffed for wasting their time on such philosophical questions. Soon Clauser would be kicked out of some of the finest offices in physics, from Robert Serber's at Columbia to Richard Feynman's at Caltech. Bowing to these pressures, Clauser pursued a dissertation on a more acceptable topic—radio astronomy and astrophysics—but in the back of his mind he continued to puzzle through how Bell's inequality might be put to the test. Before launching into an experiment himself, Clauser wrote to John Bell and David Bohm to double-check that he had not overlooked any prior experiments on Bell's theorem and quantum nonlocality.
Both respondents wrote back immediately, thrilled at the notion that an honest-to-goodness experimentalist harbored any interest in the topic at all. As Bell later recalled, Clauser's letter from February 1969 was the first direct response Bell had received from any physicist regarding Bell's theorem—more than four years after Bell's article had been published. Bell encouraged the young experimenter: if by chance Clauser did manage to measure a deviation from the predictions of quantum theory, that would "shake the world!"

Encouraged by Bell's and Bohm's responses, Clauser realized that the first step would be to translate Bell's pristine algebra into expressions that might make contact with a real experiment. Bell had assumed for simplicity that detectors would have infinitesimally narrow windows or apertures through which particles could pass. But as Clauser knew well from his radio-astronomy work, apertures in the real world are always wider than a mathematical pinprick. Particles from a range of directions would be able to enter the detectors at either of their settings, a or a'. Same for detector efficiencies. Bell had assumed that the spins of every pair of particles would be measured, every time a new pair was shot out from the source. But no laboratory detectors were ever 100% efficient; sometimes one or both particles of a pair would simply escape detection altogether. All these complications and more had to be tackled on paper, long before one bothered building a machine to test Bell's work. Clauser dug in and submitted a brief abstract on this work to the Bulletin of the American Physical Society, in anticipation of the Society's upcoming conference. The abstract appeared in print right before the spring 1969 meeting. And then his telephone rang.

Two hundred miles away, Abner Shimony had been chasing down the same series of thoughts. Shimony's unusual training—he held Ph.D.s in both philosophy and in physics, and taught in both departments at Boston University—primed him for a subject like Bell's theorem in a way that almost none of his American physics colleagues shared. He had already published several articles on other philosophical aspects of quantum theory, beginning in the early 1960s. Shimony had been tipped off about Bell's theorem back in 1964, when a colleague at nearby Brandeis University, where Bell had written up his paper, sent Shimony a preprint of Bell's work. Shimony was hardly won over right away. His first reaction: "Here's another kooky paper that's come out of the blue," as he put it recently. "I'd never heard of Bell. And it was badly typed, and it was on the old multigraph paper, with the blue ink that smeared. There were some arithmetical errors. I said, 'What's going on here?'" Alternately bemused, puzzled, and intrigued, he read it over again and again. "The more I read it, the more brilliant it seemed. And I realized, 'This is no kooky paper. This is something very great.'"

He began scouring the literature to see if some previous experiments, conducted for different purposes, might already have inadvertently put Bell's theorem to the test. After intensive digging—he came to call this work "quantum archaeology"—he realized that, despite a few near misses, no existing data would do the trick. No experimentalist himself, he "put the whole thing on ice" until he could find a suitable partner. A few years went by before a graduate student came knocking on Shimony's door.
The student had just completed his qualifying exams and was scouting for a dissertation topic. Together they decided to mount a brand-new experiment to test Bell's theorem. Several months into their preparations, still far from a working experiment, Shimony spied Clauser's abstract in the Bulletin, and reached for the phone. They decided to meet at the upcoming American Physical Society meeting in Washington, D.C., where Clauser was scheduled to talk about his proposed experiment. There they hashed out a plan to join forces. A joint paper, Shimony felt, would no doubt be stronger than either of their separate efforts alone would be—the whole would be greater than the sum of its parts—and, on top of that, "it was the civilized way to handle the priority question." And so began a fruitful collaboration and a set of enduring friendships.

Clauser completed his dissertation not long after their meeting. He had some down time between handing in his thesis and the formal thesis defense, so he went up to Boston to work with Shimony and the (now two) graduate students whom Shimony had corralled onto the project. Together they derived a variation on Bell's theme: a new expression, more amenable to direct comparisons with laboratory data than Bell's had been. (Their equations concerned S, the particular combination of spin measurements examined in the previous chapter.)

Even as his research began to hum, Clauser's employment prospects grew dim. He graduated just as the chasm between demand and supply for American physicists opened wide. He further hindered his chances by giving a few job talks on the subject of Bell's theorem. Clauser would later write with great passion that in those years, physicists who showed any interest in the foundations of quantum mechanics labored under a "stigma," as powerful and keenly felt as any wars of religion or McCarthy-like political purges. Finally Berkeley's Charles Townes offered Clauser a postdoctoral position in astrophysics at the Lawrence Berkeley Laboratory, on the strength of Clauser's dissertation on radio astronomy.

Clauser, an avid sailor, planned to sail his boat from New York around the tip of Florida and into Galveston, Texas; then he would load the boat onto a truck and drive it to Los Angeles, before setting sail up the California coast to the San Francisco Bay Area. (A hurricane scuttled his plans; he and his boat got held up in Florida, and he wound up having to drive it clear across the country instead.) All the while, Clauser and Shimony hammered out their first joint article on Bell's theorem: each time Clauser sailed into a port along the East Coast, he would find a telephone and check in with Shimony, who had been working on a draft of their paper. Then Shimony would mail copies of the edited draft to every marina in the next city on Clauser's itinerary, "some of which I picked up," Clauser explained recently, "and some of which are probably still waiting there for all I know." Back and forth their edits flew, and by the time Clauser arrived in Berkeley in early August 1969, they had a draft ready to submit to the journal.

Things were slow at the Lawrence Berkeley Laboratory compared to the boom years, and budgets had already begun to shrink. Clauser managed to convince his faculty sponsor, Townes, that Bell's theorem might merit serious experimental study.
Perhaps Townes, an inventor of the laser, was more receptive to Clauser's pitch than the others because Townes, too, had been told by the heavyweights of his era that his own novel idea flew in the face of quantum mechanics. Townes allowed Clauser to devote half his time to his pet project, not least because, as Clauser made clear, the experiments he envisioned would cost next to nothing. With the green light from Townes, Clauser began to scavenge spare parts from storage closets around the Berkeley lab—"I've gotten pretty good at dumpster diving," as he put it recently—and soon he had duct-taped together a contraption capable of measuring the correlated polarizations of pairs of photons. (Photons, like electrons, can exist in only one of two states; polarization, in this case, functions just like spin as far as Bell-type correlations are concerned.)

In 1972, with the help of a graduate student loaned to him at Townes's urging, Clauser published the first experimental results on Bell's theorem. (Fig. 3.1.) Despite Clauser's private hope that quantum mechanics would be toppled, he and his student found the quantum-mechanical predictions to be spot on. In the laboratory, much as on theorists' scratch pads, the microworld really did seem to be an entangled nest of nonlocality. He and his student had managed to conduct the world's first experimental test of Bell's theorem—today such a mainstay of frontier physics—and they demonstrated, with cold, hard data, that measurements of particle A really were more strongly correlated with measurements of particle B than any local mechanisms could accommodate. They had produced exactly the "spooky action at a distance" that Einstein had found so upsetting.

Still, Clauser could find few physicists who seemed to care. He and his student published their results in the prestigious Physical Review Letters, and yet the year following their paper, global citations to Bell's theorem—still just a trickle—dropped by more than half. The world-class work did little to improve Clauser's job prospects, either. One department chair to whom Clauser had applied for a job doubted that Clauser's work on Bell's theorem counted as "real physics."
PowerPedia: Quantum Ring Theory

Quantum Ring Theory (QRT) is a theory developed by Wladimir Guglinski between 1993 and 2004, published in book form by the Bäuu Institute Press in August 2006, two years after Dr. Eugene Mallove had encouraged Guglinski to collect his several papers into a book. The book presents 24 scientific papers, in which the author argues that some principles and models of Modern Physics must be replaced.

At the atomic level, QRT follows the interpretation advocated by Schrödinger regarding the successes of Bohr's hydrogen atom. Schrödinger stated that "It is difficult to believe that this result is merely an accidental mathematical consequence of the quantum conditions, and has no deeper physical meaning".[1] He believed that Bohr's successes were the consequence of unknown mechanisms, and he tried to find them. That is why Schrödinger discovered the zitterbewegung in Dirac's equation and interpreted it as a helical trajectory of the electron. A rival interpretation was supported by Heisenberg, who believed that Theoretical Physics cannot be developed in dependence on the discovery of "metaphysical" mechanisms not suitable to be measured (observed) in experiments. He stated that only "observable" variables are of interest to science. The interpretation of the zitterbewegung from the Heisenbergian viewpoint was proposed in 2004 by Krekora.[2] So we realize that in the 20th century there was a divergence between Schrödinger and Heisenberg on the question of what the aim of the scientific method is. It seems that the confrontation between the Schrödingerian and Heisenbergian viewpoints will be decided by the cold fusion experiments. See: Cold fusion theories.

Some principles and models of Modern Physics seemed very strange to the author. That is why in 1991 he wrote a book in which he proposed a new theory according to which the neutron must be composed of proton + electron (n = p + e), and space must be filled with the aether, which would be responsible for the equilibrium of the electrons within the electrospheres of the atoms. He submitted the book to several publishing houses in Brazil, but no editor had interest in it. Nine years later, in 2001, Guglinski discovered that Don Borghi et al. had published a paper[3] in 1993 describing an experiment that confirms the model n = p + e. In 1992 he had registered the manuscript of his book in the Brazilian National Library, where the typewritten manuscript remains in the archives to this day.

Because no publisher accepted his book for publication, he decided to prove that his new theory was correct. That is why in 1993, self-taught, he started to study in depth the foundations of Quantum Mechanics, in order to prove that some principles of QM must be replaced (see Cold fusion theories). For the proposal of a new theory, he took as a point of departure the seven fundamental points that follow:

1- The accepted model of the neutron seemed impossible to work, since it violates two fundamental laws of Physics: Newton's action-reaction law and the energy-matter conservation law.
So Guglinski felt that it would be necessary to prove that from the model n = p + e one could explain all the properties of the neutron inferred from experiments.

2- Another question that worried him was the absence of the aether in the current theories of Modern Physics. Something was wrong with Einstein's interpretation. A new theory replacing empty space with a space filled with the aether would be required.

3- The quantum-mechanical model of the atom could not be entirely correct, for several reasons:

3.1- Bohr's model is not correct, since it is unable to explain the fine structure. However, his model has many spectacular successes: for instance, quantities calculated from Bohr's model agree with experiment to an accuracy that cannot be accidental, because it is impossible to consider it accidental from the laws of probability. Nevertheless, nowadays the quantum theorists claim that Bohr's successes are accidental. Such a hypothesis is unacceptable, and one has to consider that there is something true (at least partially) in his model, while from the concepts of Quantum Mechanics there is a need to consider Bohr's model as totally wrong. But as, from mathematical probability, it is necessary to consider that Bohr's model is at least partially correct, this implies that the QM model cannot be entirely correct.

3.2- It is hard to believe that there is no trajectory of elementary particles, as proposed in QM, since everybody sees that the electron's trajectory exists within the fog of a cloud chamber.

3.3- The hydrogen atom of QM is undulatory. But there are phenomena that require a corpuscular model to be explained. And it is hard to believe in the absurd principle of complementarity proposed by Bohr, according to which incompatible models must be used for explaining the phenomena. Indeed, it is hard to believe that Nature sometimes uses a corpuscular model and sometimes an undulatory model.

4- There is not a unique nuclear model in current Nuclear Physics. There are several models, and they are incompatible. Besides, the current nuclear theory is unable to explain some nuclear properties and much of the behavior of the nuclei. It is hard to believe that Nature works by using several incompatible models for the production of the nuclear phenomena. Thereby it was indispensable to look for a unique nuclear model capable of explaining all the nuclear phenomena.

5- Nowadays the theorists consider that light is a wave-particle duality, and there is no model of the photon for explaining the behavior of light. Light in Modern Physics is described by pure abstract mathematical equations, and it is hard to believe that mathematical equations can produce physical phenomena such as those produced by light. So Guglinski felt the need of looking for a physical model of the photon, capable of generating the Maxwell equations and of reproducing theoretically all the phenomena of light, such as its duality, its polarization, etc.

6- It is hard to believe that wave-particle duality is a property of matter. There is an alternative solution for explaining the wave-particle duality: by considering the helical trajectory of the elementary particles.
The zitterbewegung appears in the Dirac equation of the electron, and therefore the duality can be considered as a property of the helical trajectory of the elementary particles. Such a new interpretation of the duality is used in the new hydrogen atom proposed in Quantum Ring Theory.

7- But Quantum Mechanics and the Theory of Relativity are two successful theories. And it is hard to believe that they are completely wrong. So Guglinski felt the need of discovering why they are so successful, in spite of the fact that they cannot be entirely correct. In other words: where does the cause of the success of these two theories lie?

The answers to these questions are proposed in Quantum Ring Theory:

A new model of the neutron, n = p + e, is proposed. A journal reviewer wrote about the paper The Stern-Gerlach Experiment and the Helical Trajectory:

"The basic question here is: can a classical model (which postulates a trajectory for the electron) cast any light on the inner workings of the nucleus? Most physicists would respond with a resounding NO. However, it generally happens that classical models have quantum analogs and thus can prove suggestive in at least a qualitative way. For instance, without the classical Hamiltonian energy expression there would be no clue to how to write the Schrödinger equation. And the classical energy expression would not exist without trajectory pictorization. Therefore one cannot reject Guglinski's 'helical trajectory' model (or similar models due to Bergman and others) out of hand as useless to physics. We don't know what the final physics will be, if any. Moreover, Guglinski's model may solve the problem of the spin of the electron in the nucleus."

A new hydrogen atom is proposed, one that reconciles Bohr's model with the Schrödinger equation, and from it Guglinski discovered why QM is successful in explaining the phenomena. The new hydrogen atom of QRT has a property unknown to the quantum theorists: there is a dilation of the aether within the electrospheres of the proton and the electron.

A model of the photon is proposed that generates the Maxwell equations and explains the behavior of light, such as its duality, its polarization, etc.

Concerning the nucleus, Guglinski proposed a nuclear model that he claims explains all the nuclear phenomena, and he also claims that the nucleus has some behavior unknown to the nuclear theorists, as for instance the Accordion-Effect.

In 1995 he tried to publish his paper New Model of Neutron in the journal Speculations in Science and Technology. The reviewer rejected it. At the end of 1998 he submitted his paper A Model of Photon to the journal Frontier Perspectives. The paper was rejected for publication, because the reviewers of that journal are sure that the duality is a property of matter (as proposed originally by de Broglie), and they did not accept replacing the original de Broglie interpretation with a new one that considers the duality as a property of the zitterbewegung. But the editor Nancy Kolenda sent Guglinski a copy of the journal in which Mike Carrell talks about cold fusion, and then for the first time in his life Guglinski learned of the occurrence of that phenomenon.
He sent a letter to Mike, who sent Guglinski a copy of the Infinite Energy magazine in which Elio Conte published an article[4] describing his experiment. So Guglinski realized that Conte's experiment had confirmed his new model of neutron n = p + e. As he had earlier discovered a new nuclear model that explains all the ordinary nuclear phenomena, obviously a question arose in his mind: would his new nuclear model be able to explain the occurrence of cold fusion?

And he had another strong reason to believe that the explanation of cold fusion would require a new theory with new fundamental principles missing in Quantum Mechanics. Indeed, he knew that current Nuclear Physics is unable to explain many properties of the nuclei. Therefore, as the theory is unable to explain ordinary phenomena, it would be hard to believe that Nuclear Physics could explain the occurrence of cold fusion, since it defies the foundations of QM. So he started to read some papers on cold fusion experiments, in order to try to understand the occurrence of cold fusion from the viewpoint of the nuclear properties of his new nuclear model.

In 2000 his paper New Model of Neutron was published by the Journal of New Energy. At the beginning of 2001 Guglinski discovered the existence of Borghi's experiment. In the same year he sued two universities in Brazil, trying to oblige them to repeat Don Borghi's experiment in their laboratories. The Brazilian Constitution prescribes that the universities must support any experimental research that is in the interest of the development of science, and so he used this argument to support his request. Unfortunately, the judge decided that there is no judicial support that obliges a university to perform any experiment. That was not true, because the support was given by the Brazilian Constitution. But it is known that there is a conspiracy against the prevalence of the scientific method when it defies the current theories.

In 2002 the Infinite Energy magazine published his paper What is Missing in Les Case's Catalytic Fusion,[5] where he suggested some improvements to be adopted in Case's experiment, and proposed a hypothesis for why it is often hard to get replicability in cold fusion experiments. At the end of 2002 he submitted seven more papers to Infinite Energy magazine. In 2003 Dennis Letts and Dennis Cravens exhibited their experiment, in which the suggestions proposed in Guglinski's paper published by IE in 2002 had been adopted. In the same year he wrote the paper Letts-Cravens Experiment and the Accordion-Effect, in which he proposes that cold fusion can occur under special conditions when there is resonance between the oscillation of a nucleus due to its Accordion-Effect and the oscillation of a deuteron due to the zero-point energy. The alignment of the deuterons with the nucleus by applying an external magnetic field (Letts and Cravens used a magnet) helps the resonance, which is also reinforced by a suitable frequency of an oscillatory electromagnetic field (the laser used in their experiment).

In January 2004 Dr. Eugene Mallove said that Guglinski's ideas "are intriguing and interesting", and encouraged him to put all of his more than 20 papers into book form. Infinite Energy would advertise and sell the book. As Dr. Mallove died in May 2004, he had to look for another publisher.
In August 2006 his theory was published in book form as Quantum Ring Theory: Foundations for Cold Fusion, by Bäuu Press. Reviews of QRT posted at Barnes & Noble and on the Bäuu Press website:

Claudio Nassif, PhD, theoretical physicist: "I am the author of Symmetrical Special Relativity, whose first paper was published by the journal Pramana in July 2008 under the title 'Deformed special relativity with an invariant minimum speed and its cosmological implications'. We theoretical physicists develop theories by using mathematics, some theorems, many axioms, and supporting fundamental principles, but there is no physical reality underlying our theories. Actually, one of the achievements of the 20th century is the view that a physical reality is unattainable in modern physics. But Guglinski's theory supplies physical models to theoretical physics. His theory proposes physical models for the photon, the fermions, the neutron, the hydrogen atom, the nucleus, and the aether, and his QRT proposes the fundamental principles from which those physical models work. My SSR and Guglinski's QRT are complementary. A future consistent agglutination of SSR and QRT would form a New Grand Unified Theory which, if confirmed by experiments, would constitute the New Physics of the 21st century."

Nancy Kolenda, editor, Frontier Perspectives (Temple University): "In Quantum Ring Theory Guglinski presents a new theory concerning the fundamental nature of physics. Here the author argues that the current understanding of physics does not showcase an accurate model of the world. Instead, he argues that we must consider the 'aether', a notion originally developed by Greek philosophers, and by considering the nature of the aether and its role in physical processes, Guglinski is able to create a theory that reconciles quantum physics with the Theory of Relativity. As part of his new theory, Guglinski showcases a new model of the neutron, and this model has been confirmed by contemporary physical experiments."

Paul W. Schoening, mechanical engineer: "A new interpretation of elementary particles and the atom's nuclear structure. Guglinski provides an entirely new understanding of the structure and the mechanics of the atom and the atomic nucleus, which in the presented way requires the interaction with the ether to explain, for instance, the quantum weirdness of the behavior of single, isolated photons and electrons. I highly recommend it."

Naveen, a reviewer: "WHOA!!... we have a breakthrough here!!! Hi, I just came across this book Quantum Ring Theory by Wladimir Guglinski and found it quite exhilarating and thrilling. The thrill is in the way quantum theory is treated in this book, which is a totally new approach to physics. The proposed structure of the neutron in terms of n = p + e, the ZOOM Effect, the helical trajectory, and a completely new interpretation of DUALITY are some of the most original works of the author. I don't think I have seen any of the modern physicists as original as Wladimir. I must say that any serious physicist must go through this book, and I would be glad if some of the universities came out with funds to perform certain experiments to establish Guglinski's Quantum Ring Theory. WLADIMIR....... HATS OFF MAN!!!!!!"

References:
1. E. Schrödinger, On a Remarkable Property of the Quantum-Orbits of a Single Electron, 1922.
2. Krekora et al., Phys. Rev. Lett. 93, 043004-1 (2004).
3. C. Borghi, C. Giori, A. A. Dall'Ollio, Experimental Evidence of Emission of Neutrons from Cold Hydrogen Plasma, American Institute of Physics (Phys. At. Nucl.), vol. 56, no. 7, 1993.
4. E. Conte, M. Pieralice, An Experiment Indicates the Nuclear Fusion of the Proton and Electron into a Neutron, Infinite Energy, vol. 4, no. 23, 1999, p. 67.
5. W. Guglinski, What is Missing in Les Case's Catalytic Fusion, Infinite Energy, vol. 8, no. 46, 2002.

See also:
- Aether Structure for unification between gravity and electromagnetism (2015)
- Cold fusion mystery finally deciphered
- Physical mechanism behind the quantum entanglement
- Why a new physics theory could rewrite the textbooks
- Law suit against European Physical Journal
- Article:Magnetic monopole - new experiment corroborates Quantum Ring Theory
- Article:The Successor of Quantum Mechanics
- PowerPedia:Foundations for Cold Fusion
- Article:Cold Fusion and Gamow's Paradox
- Similarity between Wave Structure of Matter and Quantum Ring Theory
- PowerPedia:Successes of the Bohr atom
- PowerPedia:Quantum Ring Theory at Temple University
- PowerPedia:Quantum Ring Theory burnt in a Brazillian university
- Heisenberg's Paradox
- PowerPedia:on the indistinguishibility of Quantum Mechanics
- PowerPedia:quantum computer will never be constructed
- PowerPedia:... and Schrödinger wins the duel with Heisenberg
- PowerPedia:the mistery on the Andrea Rossi's catalyzer
- PowerPedia:z-axis of atomic nuclei predicted in Quantum Ring Theory
- PowerPedia:Mechanism for the entanglement in Gabriela's experiment
- PowerPedia:Collapse of Heisenberg's Uncertainty and Bohr's Complementarity
- Can Quantum Mechanics be saved by Queen Elizabeth II?
- Article:The Impossible Beryllium
- PowerPedia:Don Borghi's experiment
- PowerPedia:Cold Fusion Theories
- PowerPedia:Cold fusion, Don Borghi's Experiment, and hydrogen atom
- PowerPedia:Einstein and entanglement: Guglinski interviews Dr. John Stachel
- PowerPedia:Are there five fundamental forces in Nature?
- Repulsive gravity within the hydrogen atom
- Script on the film Quantum Ring Theory: PowerPedia:Guglinski's Model of the Photon, PowerPedia:Guglinski on the De Broglie Paradox, PowerPedia:Demystifying the EPR Paradox, PowerPedia:Zitterbewegung Hydrogen Atom of Quantum Ring Theory, PowerPedia:New model of neutron: explanation for cold fusion
- Article:How magnet motors work
- Article:New nuclear model of Quantum Ring Theory corroborated by John Arrington's experiment
- Article:Stability of light nuclei isotopes according to Quantum Ring Theory
- Article:Quantum Field Theory is being developed in the wrong way
Chapter 9: Scattering in One Dimension

We now consider another one-dimensional problem: the scattering problem. In doing so we need to consider scattering-type solutions and what they mean. For standard scattering situations, the wave functions we use are usually those valid for regions of constant potential energy: complex exponentials (plane waves) when E > V0, and real exponentials when E < V0.[1]

[1] There is one other possibility that is not often considered. If E = V0, the Schrödinger equation yields a solution that is linear in x.
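The two regimes can be illustrated numerically. The following is a minimal sketch (an editorial addition, not part of the original chapter) for a single potential step of height V0, using the standard matching of the wave function and its derivative at the step; the units (hbar = 1, m = 1/2, so k = sqrt(E)), the function name, and the parameter values are illustrative choices only.

```python
import numpy as np

# Reflection/transmission for a potential step V(x) = 0 (x < 0), V0 (x >= 0).
# Units chosen so that hbar = 1 and m = 1/2, giving k = sqrt(E).
def step_scattering(E, V0):
    k1 = np.sqrt(E)                        # wave number in the incident region
    if E > V0:
        # Propagating waves on both sides: psi = e^{i k1 x} + r e^{-i k1 x} for x < 0,
        # psi = t e^{i k2 x} for x > 0. Matching psi and psi' at x = 0 gives r and t.
        k2 = np.sqrt(E - V0)
        r = (k1 - k2) / (k1 + k2)
        R = r**2                           # reflection probability
        T = 4 * k1 * k2 / (k1 + k2)**2     # transmission probability (flux ratio)
    else:
        # E < V0: real decaying exponential on the right carries no flux.
        R, T = 1.0, 0.0
    return R, T

for E in (0.5, 1.5, 4.0):
    R, T = step_scattering(E, V0=1.0)
    print(f"E = {E}: R = {R:.3f}, T = {T:.3f}, R + T = {R+T:.3f}")
```

For E < V0 the transmitted wave is the real exponential mentioned above and carries no probability flux, so R = 1; for E > V0 the printout confirms R + T = 1.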
May 15 2012
Another Blogger Jumps Into the Dualism Fray

There is the crux of the straw man – I never claimed that correlation is sufficient to establish causation. The entire premise of Kastrup's piece is therefore false, creating a straw man logical fallacy. He goes on at length explaining that correlation does not equal causation. Regular readers of this blog are likely chuckling at this point, knowing that I have written often about this fallacy myself. If you read Kastrup's piece you will notice that at no point does he provide a quote from me claiming that correlation is sufficient to establish causation. He seems to understand also that I was responding directly to Egnor, who was claiming that brain states do not correlate with mind states, so of course I was making the point that they do. But I went much further (perhaps Kastrup did not read my entire post). I wrote:

"In fact I would add another prediction to the list, one that I have discussed but have not previously added explicitly to the list – if brain causes mind then brain activity and changes will precede the corresponding mental activity and changes. Causes come before their effects. This too has been validated."

The list I am referring to is the set of predictions generated by the hypothesis that the brain causes the mind. I contend that all of these predictions have been validated by science. This does not mean the hypothesis has been definitively proven, a claim I never make, just that the best evidence we have so far confirms the predictions of brain causing mind, and there is no evidence that falsifies this hypothesis. Because mere correlation does not prove causation (although it can be compelling if the correlation is tight and multifaceted), I felt compelled to add additional points, like the one above. Brain states do not just correlate with mental states, they precede them. Causes precede effects, so again, if the brain causes the mind then we would expect changes in brain states to precede the corresponding mental states, and in every case of which we are currently aware, they do. We would not expect this temporal relationship if the mind caused the brain, and it would not be necessary if some third thing caused both or if, as Kastrup claims, the correlation is a pattern without causation. Further, in a section of my post titled "Correlation and Causation" I pointed out that it is highly reproducible that changes in brain states precede their corresponding changes in mental states. For example, we can stimulate or inhibit parts of the brain and thereby reliably increase or decrease corresponding mental activity. The temporal arrow of correlation extends to things that change brain states. You get drunk after you drink alcohol, not before. When researchers use transcranial magnetic stimulation to inhibit the functioning of the temporal parietal junction, subjects then have an out-of-body experience. To further demonstrate that I was not relying upon mere correlation to make the case for causation, I wrote:

"Egnor would have you believe that this growing body of scientific evidence only shows that brain states correlate with the behavior of subjects reporting their experience, and not with the experiences themselves.
He would have you believe that even if turning on and off a light switch reliably precedes and correlates with a light turning on and off, the switch does not actually control the light – not even that, he would have you believe that the scientific inference that the switch controls the light (absent any other plausible hypothesis) is materialist pseudoscience."

Perhaps Kastrup does not understand the meaning of the word "inference." That the brain causes the mind is not a philosophical proof (something I never claimed), but a scientific inference. Correlation is one pillar of that inference, but so is the fact that brain states precede mental states. Further, I am clearly invoking Occam's razor in the example above with the fairies and the light switch. The same correlation exists in that example – flipping a light switch precedes and correlates with the lights turning on and off. The simplest explanation is that the light switch controls the light – it is causing the lights to go on or off. But let's say you didn't know that light switches work by opening and closing a circuit, and you could not break open the wall to investigate the mechanism. You could still come to the confident scientific inference that the light switch was doing something to directly turn the light on or off. You would not need to hypothesize that there were light-switch fairies doing it. I also felt compelled to add, for completeness, "absent any other plausible hypothesis." Why would I specifically add this caveat if I thought correlation proved causation? Of course, in this one blog post I could not go into a thorough exploration of every supernatural claim made for anomalous cognition. I maintain that there is no compelling evidence of mental states separate from brain states, and I refer you to my many other blog posts to support this position. Here we see that Kastrup's clumsy and, dare I say, trite, superficial, and fallacious arguments about correlation not equaling causation are really cover for his true position and agenda – he believes that there is evidence for mental activity separate from brain activity. He writes:

"There is an increasing amount of evidence that there are non-ordinary states of consciousness where the usual correlations between brain states and mind states break (see details here). If only one of these cases proves to be true (and I think at least one of them, the psilocybin study at Imperial College, has been proven true beyond reasonable doubt; see my debate on this with Christof Koch here), then the hypothesis that the brain causes the mind is falsified. Novella ignores all this evidence in this opinion piece, and writes as if it didn't exist."

You can also watch the video embedded in his post for an explanation of his position. I will address his two main points, both of which are erroneous. He seems highly impressed by the fact that neuroscientific studies have shown that psilocybin decreases brain activity while causing a "mystical" experience, as if this contradicts the prediction that the brain correlates with the mind (so in reality he does not accept the correlation, and that is the real reason for his rejection of the brain-mind hypothesis, not his obvious straw man about correlation and causation). Kastrup's conclusion, however, is hopelessly naive. There are many examples where inhibiting the activity in one part of the brain enhances the activity in another part of the brain through disinhibition.
In fact the very study he cites for support concludes that the drug produces "unconstrained cognition" – and unconstrained cognition is another way of saying disinhibition. The concept is simple – there are many brain areas all interacting and processing information. This allows for complex information processing but also slows down the whole process – slows down cognition. That is the price we pay for complexity. If, however, we inhibit one part of the brain, we lose some functionality, but the other parts of the brain are unconstrained and free to process information and function more quickly. The psilocybin study is a perfect example of this. The drug is inhibiting the reality-testing parts of the brain, causing a psychedelic experience that is disinhibited and intense. This is similar to really intense dreams. You may have noticed that sometimes in dreams emotions and experiences can be more intense than anything experienced while awake. This is due to a decrease in brain activity in certain parts of the brain compared to the full waking state. Kastrup seems to be completely unaware of the critical concept of disinhibition and therefore completely misinterprets the significance of the neuroscience research. His next point is equally naive. He claims that near death experiences, in which people supposedly have intense experiences without brain activity, are further evidence of a lack of correlation between brain states and mental states. I have already dealt with this claim here. Briefly, there is no evidence that people are having experiences while their brain is not functioning. What we do have are reports of memories that could have formed days or even weeks later, during the recovery period following a near death experience. At the very least one has to admit that NDE claims are controversial. They are certainly not established scientific facts that can be used as a premise to counter the materialist hypothesis of brain and mind. Once again we see a hopelessly naive and confused defense of the mystical position that the mind is something more than the brain. To explicitly detail my position, so that it cannot easily be misrepresented again – if we look at the claim that the brain causes the mind as a scientific hypothesis, based upon the current findings of neuroscience we can make a few conclusions:

– There is a tight correlation between brain states and mental states that holds up to the limits of resolution of our ability to measure both.
– There are no proven examples of mental states absent brain function.
– Brain states precede their corresponding mental states, and changes to the brain precede the corresponding changes to the mind.
– At present the best scientific inference we can make from all available evidence is that the brain causes the mind. This inference is strong enough to treat it as an established scientific fact (as much as evolution, for example), but that, of course, is not the same thing as absolute proof.
– There are other hypotheses that can also explain the correlation, but they all add unnecessary elements and are therefore eliminated by the application of Occam's razor. They are the equivalent of light-switch fairies.

I have made all these points before, but given the fact that Kastrup completely misinterpreted my previous writings, it cannot hurt to summarize them so explicitly. Kastrup himself adds nothing of interest to the discussion. He flogs the "correlation is not causation" logical fallacy as if that's a deep insight, and is unaware of the fact that his application of it is just a straw man.
He pays lip service to the notion that brain function correlates with mental states, getting up on his logical-fallacy high horse, but this all appears to be misdirection, because his real point is that brain function does not correlate with mental states. He then trots out the long-debunked notion of near death experiences as his big evidence for this conclusion, without addressing the common criticisms of this position (even those made by the person he is currently criticizing). His only other evidence is a complete misunderstanding of pharmacological neuroscience research. I can see no better way to end this piece than with a quote from Kastrup himself, which applies in a way I believe he did not intend: "In my personal view, this superficial and intellectually light-weight opinion piece adds nothing of value to the debate about the mind-body problem."

44 Responses to "Another Blogger Jumps Into the Dualism Fray"

1. daedalus2u on 15 May 2012 at 9:42 am
It is worth pointing out that the technique used to infer brain activity (fMRI) doesn't really measure brain activity; what it measures is differential changes in relative quantities of oxy- and deoxyhemoglobin. It is blood flow that is being measured. That blood flow change is caused by nitric oxide. A change in nitric oxide levels will change brain activity and brain behavior because that is how the brain controls itself. It is not just that the idea of an immaterial mind would be greatly complicated by something like "mind fairies" (I personally don't like arguments from Occam's razor when evaluating what reality actually is; using Occam's razor to evaluate a model is OK, but we know that "all models are wrong, some are useful"). There is the problem of conservation of mass/energy, momentum, spin, charge, etc. The brain certainly is made out of materials with all of those conserved things. That matter cannot be influenced except via processes that also conserve those things (so far as we know, as in the Standard Model, General Relativity and so on). Those conservation laws have been tested to energies ~12 orders of magnitude greater than the energies relevant to brain activity. There has been a complete absence of deviations from those conservation laws. Our default hypothesis should be that the brain is matter just like everything else we are able to interact with. There is no datum that is inconsistent with that default hypothesis. If the brain can only be affected by processes which conserve various quantities, then the idea that there is an immaterial mind is not correct.

2. SARA on 15 May 2012 at 9:47 am
I can understand the desire to make the mind be the cause. If you think about brain causing mind, all of life becomes a sort of non-choice. All of my life becomes a slavery to brain chemicals and reactions, over which I have little or no control. Because the "I" of my brain is really just a reaction of brain function. But I will never understand why people will take slivers and the barest threads of evidence and build a case against a mountain. Suppose their few examples are true. They are all contrived or extreme situations. So are they really making a case for our every day lives to be caused by this outside "mind" rather than brain?
It feels like they are arguing that this outside soul (because let's face it – their mind/consciousness argument is a thinly veiled attempt to infuse us with a soul) is just a voyeuristic rider who only makes an appearance when you sleep, take drugs, strangle yourself, etc., and then makes a spirited escape when you die? That doesn't seem to be an argument for that being the actual "I" of the mind, does it? Frankly, I'm less disturbed by the idea of being a slave to the chemicals and electrical reactions in my brain.

3. Bernardo on 15 May 2012 at 10:49 am

4. Gallenod on 15 May 2012 at 11:01 am
SARA: While we all may be "moist robots" (per Scott Adams), you're not so much a slave to brain chemicals and electrical reactions as you are constrained by them. You're still free to make choices within the limits of human perception, comprehension and thought. Dualism supports the human desire for immortality beyond the limits of physical bodies. It's a popular prop to the idea that some part of us will survive death. Therefore many people will want to believe in it despite any and all evidence to the contrary; even the most hardened skeptics likely want to exist forever. The tragedy is that the current evidence says we won't, and if you accept that, you need to find another reason for existence than living a life that gets you into the afterlife of your choice. And that generally involves the realm of philosophy, not hard science.

5. RickK on 15 May 2012 at 11:23 am
Steve – it's "Fray", no?

6. tyler the new ager on 15 May 2012 at 11:41 am
Hi Dr. Novella, I would like to see this debate between you and Bernardo Kastrup continue in a pleasant manner; I find your personal attacks on him a little off-putting. You are also ignoring a large body of evidence we have for the survival of consciousness after death and for consciousness being something more than brain activity: proxy sittings (Mrs. Piper in particular), drop-in communicators, cross correspondences, shared near death experiences and veridical NDEs, shared death bed visions, multiple witness apparitions, children with past life memories, hauntings not associated with one person and the ending of such hauntings by spirit rescue mediumship. I can go on and on. We should not be bigoted against the evidence, Dr. Novella. Finally I would like to share something from the late great researcher Montague Keen:

The challenge to Mr. Randi and friends (written by the late Montague Keen)
I present Mr. Randi, and any of his fellow-skeptics, with a list of some of the classical cases of paranormality with most or all of which Mr. Randi will be familiar. I know he will be because he has been studying the subject for half a century, he tells us. ….. I would not imply that Mr. Randi is ignorant of these cases, many of which have long awaited the advent of a critic who could discover flaws in the paranormality claims. For me to suggest this would imply the grossest hypocrisy on Mr. Randi's part. But to refresh his memory, and help him along, and despite the refusal of some of his colleagues like Professor Kurtz, Professor Hyman and Dr. Susan Blackmore to meet the challenge, I list the requisite references. They are based on (although not identical to) a list of twenty cases suggestive of survival prepared by Professor Archie Roy and published some years ago in the SPR's magazine, The Paranormal Review, as an invitation or challenge to skeptics to demonstrate how any of these cases could be explained by "normal", i.e. non-paranormal, means. Thus far there have been no takers.
It is now Mr. Randi's chance to vindicate his claims.
1. The Watseka Wonder, 1887. Stevens, E.W. 1887. The Watseka Wonder. Chicago: Religio-Philosophical Publishing House; and Hodgson, R., Religio-Philosophical Journal, Dec. 20th, 1890, investigated by Dr. Hodgson.
2. Uttara Huddar and Sharada. Stevenson, I. and Pasricha, S. 1980. A preliminary report on an unusual case of the reincarnation type with xenoglossy. Journal of the American Society for Psychical Research 74, 331-348; and Akolkar, V.V. Search for Sharada: Report of a case and its investigation. Journal of the American SPR 86, 209-247.
3. Sumitra and Shiva-Tripathy. Stevenson, I., Pasricha, S. and McLean-Rice, N. 1989. A Case of the Possession Type in India with Evidence of Paranormal Knowledge. Journal of the Society for Scientific Exploration 3, 81-101.
4. Jasbir Lal Jat. Stevenson, I. 1974. Twenty Cases Suggestive of Reincarnation (2nd edition). Charlottesville: University Press of Virginia.
5. The Thompson/Gifford case. Hyslop, J.H. 1909. A Case of Veridical Hallucinations. Proceedings, American SPR 3, 1-469.
6. Past-life regression. Tarazi, L. 1990. An Unusual Case of Hypnotic Regression with some Unexplained Contents. Journal of the American SPR 84, 309-344.
7. Cross-correspondence communications. Balfour, J. (Countess of) 1958-60. The Palm Sunday Case: New Light On an Old Love Story. Proceedings of the Society for Psychical Research 52, 79-267.
8. Book and Newspaper Tests. Thomas, C.D. 1935. A Proxy Case extending over Eleven Sittings with Mrs Osborne Leonard. Proceedings SPR 43, 439-519.
9. "Bim's" book-test. Lady Glenconnor. 1921. The Earthen Vessel. London: John Lane.
10. The Harry Stockbridge communicator. Gauld, A. 1966-72. A Series of Drop-in Communicators. PSPR 55, 273-340.
11. The Bobby Newlove case. Thomas, C.D. 1935. A Proxy Case extending over Eleven Sittings with Mrs. Osborne Leonard. PSPR 43, 439-519.
12. The Runki missing leg case. Haraldsson, E. and Stevenson, I. 1975. A Communicator of the Drop-in Type in Iceland: the case of Runolfur Runolfsson. JASPR 69, 33-59.
13. The Beidermann drop-in case. Gauld, A. 1966-72. A Series of Drop-in Communicators. PSPR 55, 273-340.
14. The death of Gudmundur Magnusson. Haraldsson, E. and Stevenson, I. 1975. A Communicator of the Drop-in Type in Iceland: the case of Gudni Magnusson. JASPR 69, 245-261.
15. Identification of deceased officer. Lodge, O. 1916. Raymond, or Life and Death. London: Methuen & Co. Ltd.
16. Mediumistic evidence of the Vandy death. Gay, K. 1957. The Case of Edgar Vandy. JSPR 39, 1-64; Mackenzie, A. 1971. An Edgar Vandy Proxy Sitting. JSPR 46, 166-173; Keen, M. 2002. The case of Edgar Vandy: Defending the Evidence. JSPR 64.3, 247-259; Letters, 2003, JSPR 67.3, 221-224.
17. Mrs Leonore Piper and the George "Pelham" communicator. Hodgson, R. 1897-8. A Further Record of Observations of Certain Phenomena of Trance. PSPR 13, 284-582.
18. Messages from "Mrs. Willett" to her sons. Cummins, G. 1965. Swan on a Black Sea. London: Routledge and Kegan Paul.
19. Ghostly aeroplane phenomena. Fuller, J.G. 1981. The Airmen Who Would Not Die. London: Souvenir Press.
20. Intelligent responses via two mediums: the Lethe case. Piddington, J.G. 1910. Three incidents from the Sittings. Proc. SPR 24, 86-143; Lodge, O. 1911. Evidence of Classical Scholarship and of Cross-Correspondence in some New Automatic Writing. Proc. SPR 25, 129-142.

7. SARA on 15 May 2012 at 11:50 am
# Gallenod
I have a hard time fully wrapping my head around this thought, so tell me where I'm wrong. I want to be wrong.
But since our perception, comprehension and thought are only defined by the chemicals and electrical reactions in our brain, and since every neural reaction is merely caused by the previous ones, how can we be anything but puppets to those reactions? Since there is no first-cause mind to change the course of the brain, there is only a cascade of mindless reactions being perceived as mindful ones. Isn't anything else merely an illusion created by our brain?

8. tyler the new ager on 15 May 2012 at 12:00 pm
Many of the phenomena that point to an afterlife are anecdotal, but not all anecdotal evidence must be rejected outright, because anecdotal evidence can be valid if the witnesses are competent and in good standing. Then there is evidence of the afterlife that is not anecdotal, and not experimental, but comes from field research showing a systematic pattern that repeats, such as cross-correspondences and children who seem to remember past lives. Here are a few questions for you, Dr. Novella. Have you actually followed the NDE literature? Also, what do you have to say about the Ring study, which demonstrated that NDErs who were born blind or became blind at a young age had powerful visual components in their NDEs? Many NDErs have correctly identified conversations and visual aspects of their environment. In any case I will save my time, because all my arguments for the evidence for survival of consciousness after death will be strongly rejected on this blog on the basis of:
Wishful thinking
Holding on to cherished beliefs
Laws of physics being broken
Experimenter error
File drawer effect
Will to believe
I sincerely hope this debate with Bernardo Kastrup continues.

9. bgoudie on 15 May 2012 at 12:33 pm
I'd like to propose the law of conservation of piss-poor thinkers. At any given time at least one must appear on any skeptical blog, making arguments long since discredited, yet insisting that they are the one drawing the correct conclusion by looking at the "real" evidence. Should such a poster go away, they will be replaced within hours. It's as if ignorance has existence beyond the physical brain. Astounding to consider the implications.

10. SARA on 15 May 2012 at 12:42 pm
I think you could just call it The Law of Trolls. It's not actually limited to skeptics vs nonskeptics. It's anywhere that a controversy creates a gateway for attention mongering.

11. Steven Novella on 15 May 2012 at 1:05 pm
Tyler – I am not ignoring NDEs. I linked to a prior post in which I assessed the evidence. The bottom line is that the evidence for anything paranormal, including NDEs, is all weak and anecdotal. None of these phenomena are well established and generally accepted by scientists. There is a good reason for that.

12. locutusbrg on 15 May 2012 at 1:11 pm
What I am enjoying are trolls who prove themselves wrong in their own statements by including superficial criticism with other nonsense. Glad I did not have to point out how obviously you are trying to disarm the argument utilizing straw-man and no-true-Scotsman logical fallacies. Just keep rambling on and on. It always impresses me that volumes of arguments are posted to refute one point. An attempt to confuse and distract, like a good magician should.
13. ccbowers on 15 May 2012 at 1:20 pm
"I personally don't like arguments from Occam's razor when evaluating what reality actually is…" The way I think about it in instances like these is that Occam's razor is not used as an argument for the "way things are," but it is useful in pointing out where the burden of proof lies.

14. daedalus2u on 15 May 2012 at 1:43 pm
I saw a good blog post at another site (which I have now lost track of) by a physicist who was trying to respond to those who posit an immaterial mind or some sort of spiritual energy. He wrote the Schrödinger equation of the electron in terms of energy, and all of the terms were accounted for – total energy equals kinetic energy plus potential energy – and asked the question: where is the "spiritual term"? If an electron is going to be influenced by something, its Schrödinger equation has to have a term for that effect.

15. Shelley on 15 May 2012 at 1:45 pm
". . . anecdotal evidence can be valid if the witnesses are competent and are in good standing." Not really. Anecdotal 'evidence' is based on one's experiences and perceptions. It is, in effect, single-witness testimony. Please do a literature review on the many factors that impair, affect, and weaken the accuracy of eyewitness testimony (even the testimony of those of good character) before you decide that anecdotal evidence should be taken as valid. Anecdotal evidence is extremely weak, and is useful only to the extent that it can sometimes lead to testable scientific hypotheses. Whenever anecdotal evidence has led to testable hypotheses in NDEs etc., it has not held up. Really, most of us would love to be proven wrong on this topic. (How cool would that be?) Unfortunately, there is simply no compelling evidence that the mind is anything more than what the brain does.

16. daedalus2u on 15 May 2012 at 2:39 pm
CC, the nature of reality is what it is. There isn't a "burden of proof" to establish what the nature of reality is. I find "argument from burden of proof" to be unsatisfactory. This is really the crux of the difficult problem that Kuhn noticed. The usual default is the current scientific understanding, even when that understanding is known to be wrong. This is not what the usual default should be. The usual default should be whatever model is most consistent with the most data that is reliable. The reason this is so problematic is that humans adopt the "conventional wisdom" not because it corresponds with the most data, but because of normal human feelings: my friend said so, it feels right, my intuition says so, the experts all agree. These are all arguments from authority – someone else believes it, so it must be right. That is not an argument. Human hyperactive agency detection pulls for this type of belief adoption mechanism. This was the problem Einstein had getting Relativity accepted. Einstein didn't get the Nobel Prize for Relativity because those on the Nobel Committee didn't think it was correct. The conventional wisdom of the time was that there had to be, and was, absolute time and space. It turns out there isn't. There wasn't any data that required absolute time and space. It was human conceptual limitations that required absolute time and space. That is one of the consequences of our evolved human brains. Some things are easy to do and understand because they are hard-wired. Some things are not, and our neuroanatomy induces errors. These are like the optical illusions introduced by the neuroanatomy of our visual processing systems.
We can recognize that they are optical illusions because we have a model of reality that is independent of our visual system, and we can use that model of reality to recognize and override optical illusions. You should also never confuse reality with the model of reality that you are using. The same thing happens with cognition. Humans have cognitive illusions, which are analogous to optical illusions and are brought on by our cognitive neuroanatomy. It is only by recognizing, acknowledging and compensating for our cognitive illusions that we can get beyond them. SARA: yes, our brains are made of meat, and meat as a computational device has its limitations. Either you understand those limitations and compensate by working around them, or you don't and are stuck believing things that are demonstrably wrong, just like optical illusions. Rejecting things which feel right but which are demonstrably wrong is difficult to do, but it gets easier over time. It is easier to do if you argue from data. That is why I like to always emphasize that there is no data in support of a non-material mind by using the singular form "datum". Anecdotes are perfectly acceptable as data, once they are recognized for their limitations – the extreme lack of statistical power. Some arguments don't need statistical power. If you have the hypothesis that all swans are white, a single anecdote of a black swan refutes that argument with virtually 100% certainty (the swan could have been painted black, you could be having a delusion, it is opposites day, it is a clever mechanical flying device that looks like a black swan).

17. SARA on 15 May 2012 at 3:03 pm
I accept that the brain is directing our mind. What I question is whether we only perceive that we are making adjustments for our cognitive deficiencies, directing our brain in a direction of thought, or whether that directional choice is only the illusion of the "meat". I'm honestly having a hard time putting into words this concept that is bouncing around in my head – that we really have no choice. I choose to go on a diet. Or do I? Since I have no "first cause" mind directing my brain, the choice is an outcome of undirected neural reaction. My brain is just a program following pathways and rules and firing off commands, but all of it is really just a predetermined outcome of the previous conditions. My idea that it is a choice is an artifact of all of those cognitive deficiencies. Please show me I'm wrong. I am really wishing I hadn't followed this line of thinking.

18. Stefan on 15 May 2012 at 3:03 pm
Not surprisingly, Bernardo does not accept any comments that might hurt his book sales… I posted twice this morning and neither post ever appeared, while several others, including his own in support of his fans, have.

19. DOYLE on 15 May 2012 at 3:13 pm
The brain-mind idea of cause and effect seems analogous to the mechanical properties of an engine or car. All the qualities that are essential to a vehicle (lights, signals, climate control, audio, reclining movement, window movement, navigation) are dependent upon the function of power, the function of a catalyst. You need combustion and current to enable a car to portray a car.

20. daedalus2u on 15 May 2012 at 4:24 pm
Doyle, I think what you are looking for is the analogy of agency – top-down control of something by an agent. In a car, you have the designer who designed the hardware, the fabricator who puts the pieces together, and the driver who controls the mechanism once assembled.
The problem with the brain-mind analogy is that it presupposes that there is top-down control of the brain by the mind. There is no "top" in the brain; there is no "ghost" in the machine that is controlling things. The default assumption that humans have that there is top-down control has to do with human hyperactive agency detection. Human hyperactive agency detection applies to self-detection too. The idea that there is an "I" that is a unique entity continuous over the lifespan is an illusion (albeit a persistent one). The reason there is such an illusion is that the resolution of the self-identity detection is so poor. Organisms don't need to identify self to a high degree of resolution because there is only a single "self" inside our brain. All you need is an "I am me" module that identifies the self as me whenever it is interrogated. That is why, when people experience brain damage, they still self-identify as themselves (except in rare instances where that particular part of the brain is damaged). There is no great utility for higher-fidelity resolution beyond that which prioritizes self-preservation, so evolution didn't provide it. Positing top-down control doesn't provide any answers, because it simply moves the control problem to a different level. If the mind controls the brain, then what controls the mind? If the soul controls the mind, then what controls the soul? There is no "top" from which top-down control can be exerted. The brain forms from a single cell. How can that single cell exert top-down control of neuroanatomy? Very clearly it can't, and it doesn't. We know that we don't understand how the neuroanatomy of the brain is created and how it does all of the things that it does, but we do know that "it" can't exert top-down control before "it" exists. This is also an example of why defaulting to "I don't know" is a lot better than guessing or taking somebody's superficial idea.

21. ccbowers on 15 May 2012 at 4:51 pm
If we are looking at this from a scientific perspective, that is the best we can do: a theory that best fits reality. What I mean by burden of proof (I assumed that I was understood) is that if there are two alternate explanations for the same phenomenon, then an explanation that adds another layer of complexity should be tentatively rejected in favor of a simpler explanation, assuming that it does not add any explanatory power. It appears that you don't find that sufficient, because there is no reason to believe that the simpler explanation is more likely the "Truth," but that is not why it is useful. It is the 'best bet,' and the main utility is to eliminate the infinite number of alternative theories that add nothing but further complexity. In order to get closer to the Truth one must distinguish between theories in ways that make one appear to fit reality better. I'm not sure there is a way of accessing 'reality' in the way you imply. Perhaps we are talking past each other here; I'm not sure.

22. gr8googlymoogly on 15 May 2012 at 8:23 pm
Funny how none of Tyler the New Ager's references are newer than 1990 (and a full 9 of them are pre-1940). Have we not learned anything new in the past 22 years about this supernatural pseudoscientific 'theory' of duality? Apparently not. Is it really lost on the Tylers and Kastrups of the world that the more you have to apologize for your pet magic theory, the less likely it is to be real?
NewRonon 15 May 2012 at 11:23 pm I may be missing something but I do not know of any scientific answers to the following: How do thoughts arise from physical processes? How do physical processes give rise to a conscious state? How does preconscious physical activity produce a conscious experience of an event? What produces a sense of self from physical brain activity? If I have not missed something and there are no scientific answers to the above, then am I remiss in tentatively holding that the mind is separate from the brain? Or perhaps I should just trust that scientific answers will be forthcoming. 24. BillyJoe7on 16 May 2012 at 12:26 am You don’t believe in freewill?…welcome to the club. 🙂 The whole concept of freewill is irrational. But you know this. I’m just offering support. The illusion of freewill is pretty convincing, however, which is why even those of us who recognise that that is all it is, still act as if we have freewill. 25. Bernardoon 16 May 2012 at 1:48 am “# Stefanon 15 May 2012 at 3:03 pm I moderate all comments to avoid the uncontrolled level of spam I once had, because I do not require registration (as here). When you posted your comments I was asleep and could not release them immediately. Sorry it took a couple of hours. I approved (as usual) all comments posted yesterday and replied to a couple today. Gr, B. 26. Bernardoon 16 May 2012 at 1:59 am Hhmm… I released several comments now but none from a “Stefanon.” Did you post your comment under another name? Can you have a look to make sure it has been published? If not, please let me know. Gr, B. 27. Mantikion 16 May 2012 at 7:27 am I am surprised that the interpretation of Libet’s work has gained such traction amongst materialists. Although I am a theist (via a range of ESP and spiritual experiences) I grappled with the issue of sub/pre conscious decision making following experiences where I made one-handed catches of balls or apples without being consciously aware they were passing within reach. In one case, someone threw an apple at the side of my head and the first thing I knew I was staring at it in my hand (despite being a poor catch). After consideration, it occurred to me that this action is explicable in the same way that we perform innumerable daily actions “unconsciously” such as navigation, dressing and walking etc. In these circumstances, our automatic actions are the result of programmed responses and reflexes laid down from early learning experiences. How does this relate to Libet etc? Well I note that the researchers’ observed responses in advance of the subjects’ conscious awareness of decision making is not 100 percent but more of a ten percent shift in terms of the number of subject. My thinking therefore is that what is being detected neurologically is some form of pre-decision-making algorithmic “tipping point”. In other words, a proportion of subjects are using a number of pre-determined factors as inputs into an algorithm which they use to make the decision and when the individual results from those inputs reach a “critical point”, a decision is made. The EEGs therefore are measuring the critical point at which the decision is triggered. Thus for that significant group of subjects, the EEG displays the tipping-point of decision (not the decision itself) before the subject is aware of it. My conclusion therefore is that (unless I have misunderstood the experimental method) that such experiments in no way demolish the concept of conscious free-will. 
And they in turn have nothing to say about the materialist concept of mind being an epiphenomenon of electro-chemical brain activity.

28. daedalus2u on 16 May 2012 at 7:39 am
NewRon, to answer your rhetorical questions: I don't know. I don't know. I don't know. I don't know. Not knowing doesn't give one license to make stuff up just because you want it to be that way. I don't know what is at the bottom of the Atlantic Ocean. That doesn't mean I can assume that there is a vast undersea civilization called Atlantis ruled by someone named Aquaman who can breathe water and control aquatic animals telepathically. I do know there is conservation of mass/energy. Any answers to your questions that violate conservation of mass/energy are virtually certain to be wrong. They are so likely to be wrong that they are not worth serious consideration. All of your questions could be reposed in the form "how does a non-material mind do …". Assuming a non-material mind doesn't answer any of your questions; it is the equivalent of saying "it is magic". Maybe it is magic, but assuming something is magic because it is not yet understood is not how science works.

29. RickK on 16 May 2012 at 7:41 am
NewRon – can I paraphrase your question a different way? "In these cases (consciousness, etc.) is it safe to assume that a natural phenomenon does not have a natural cause?" There are a lot of things science hasn't answered – that's always been true. And literally millions of times we've seen supernatural explanations definitively replaced by explainable natural causes. In all that time, we've never once seen a natural explanation definitively replaced by a supernatural explanation. I'd be quite happy to learn that my "mind" is really an ethereal thing that lives on after my brain is wormfood. But I don't like the idea of betting on a team that has all failures and no successes. My money is on "mind" and consciousness being no more than a product of natural mechanisms found within our brains and bodies. Oh, and since consciousness, sense of self, etc. can be affected/altered/influenced by physical mechanisms (drugs, stroke, mental training and meditation, etc.), the data points heavily to "mind" being a manifestation of a physical brain. You concluded with: "perhaps I should just trust that scientific answers will be forthcoming." It's a pretty safe bet to assume that natural phenomena have natural causes. But if you're not sure, feel free to review history for definitive exceptions to that assumption.

30. SteveA on 16 May 2012 at 7:51 am
I'm not sure what you're saying here. Is there some text missing? Someone saying they saw a black swan proves nothing; a person holding a feather from a black swan (or the whole darned thing) has to be taken seriously. In a real-life situation I would give the black swan spotter the benefit of the doubt (because there is abundant evidence they exist); but someone who comes to me with stories of purple swans really does need to bring a feather with them. tyler the new ager: "anecdotal evidence can be valid if the witnesses are competent and are in good standing". Really? Did you ever see the 'Surgeon's Photograph' of the Loch Ness Monster? Even as a child I thought it looked fake. The only credibility it had was that it was taken by a surgeon, a professional gentleman who 'had no reason to lie'. That's why they gave it the name they did – Look! A surgeon took this! A real surgeon. But it was a fake, and he eventually confessed to it.
PS: your list of papers has probably given Brian Dunning his next 20 episodes of Skeptoid.

31. Ufo on 16 May 2012 at 8:09 am
Some of you might be interested in this interview with Kastrup for background: The show is run by a "true believer", so don't get fooled by the name of the podcast.

32. Steven Novella on 16 May 2012 at 8:21 am
NewRon – don't confuse different levels of questions. We can know scientifically that the brain causes consciousness without knowing exactly how it does. These are separate questions. Nor is it true that we know nothing about how consciousness emerges from brain function. Knowledge is not all or nothing. We know quite a bit, but there is also a lot we don't yet understand. More telling is the fact that the materialist paradigm of neuroscience is working fine and progressing rapidly. There is no need to hypothesize a magical non-corporeal source of mental function, and such a notion is of no value in our ongoing attempts to understand the mind.

33. Eric Thomson on 16 May 2012 at 10:35 am
I am a bit surprised by all this talk of the brain 'causing' the mind. This way of characterizing it invokes the old 19th-century view in which brains and minds are obviously different sorts of things, and minds are caused by brains the way bile is secreted by the pancreas, or steam blown off by a train. After all, if X causes Y, X is usually different from Y, like the rock thrown that caused the glass to break. Shouldn't we just say that conscious experiences simply are complex brain states? By analogy, the exchange of oxygen/carbon dioxide from our blood does not cause respiration: it is respiration. The voltage spikes in a neuron don't cause the action potential: they are the action potential. While some might say such loose language is innocuous, it actually leads you to make some claims that strike a strange chord. Aside from abetting dualism (see below), it seems likely false that changes in brain activity precede mental state changes. What we typically observe, when it comes to experiences (as in binocular rivalry and such), is that both change at exactly the same time. And this is what we expect from a conscious brain (versus a brain that produces mind). I don't expect brain state changes to precede changes in conscious states: I expect them to perfectly mirror one another, the way that action potentials are perfectly mirrored by voltage spikes. [see note below] Also, saying the brain 'causes' the mind abets dualism. I could be a substance dualist and agree that the brain causes the mind: "They are different, but I'm fine with a bidirectional coupling between brain and mind. For me, the mind is a nonphysical substance that interacts with the brain." We materialists should be more clear. The brain does not generate consciousness as some separate thing going on in parallel. Putting it in such terms generates really weird questions, like those from NewRon, about how the "two things" are connected. That is, dualism. Rather, conscious experiences are complex states of certain brains, period. Until I read this post, I thought such talk of brain states 'causing' mental states was largely innocuous.

34. Eric Thomson on 16 May 2012 at 12:12 pm
Ugh, I said the pancreas produces bile. Umm… that should be 'liver.'

35. Kawarthajon on 16 May 2012 at 2:33 pm
Light switch fairies – I had no idea that they were involved in turning lights on. Those light switch fairies were wreaking havoc with my light in my 120-year-old house with 1930s wiring, making it fizzle and spark when I tried to turn the light on.
Changing the switch and rewiring the light seems to have scared them away for now.

36. NewRon on 16 May 2012 at 7:24 pm
I am bemused at how a simple set of questions and an admission that I hold a tentative belief – that is, one that is uncertain and open to change – can evoke such derisive terms from Steven and daedalus2u as "magical" and "magic". At least RickK and Eric Thomson were able to respond (although for me not convincingly) without resorting to such language.

37. Alastair F. Paisley on 16 May 2012 at 9:23 pm
@ Steven Novella: The bottom line is that you really don't have any objective scientific evidence that subjective awareness is physical. If you did, then you could furnish us with the physical properties of consciousness.

38. Alastair F. Paisley on 16 May 2012 at 9:27 pm
@ Eric Thomson: > I am a bit surprised by all this talk of the brain 'causing' the mind. < I'm not. Most materialists presuppose some form of dualism.

39. BillyJoe7 on 17 May 2012 at 12:13 am
"For me, the mind is a nonphysical substance that interacts with the brain." Nonphysical substance? Actually, the mind IS physical. It is, however, non-material.

40. gervasium on 17 May 2012 at 9:01 am
Eric Thomson, the brain simply is not the mind, even from a materialist perspective, in the same way that the lungs are not respiration, but lungs cause respiration. A dead brain is still a brain, and it produces no consciousness.

41. eiskrystal on 17 May 2012 at 10:23 am
Yes, Dr. Novella, that bit about it being a "superficial and intellectually light-weight opinion piece" was particularly cutting. How could you!!1!

42. Mantiki on 17 May 2012 at 5:19 pm
Further to my earlier post, the recent successful biological implant which enabled a paralysed woman to control a mechanical arm using electrical impulses from her brain can only work via a conscious decision on her part. Any attempt to rationalise a source for those signals beyond her free will leads nowhere. Turtles all the way down, so to speak! Given time and practice, familiarity would allow her to develop algorithms to look for "primers" that would allow her to control the robotic arm more smoothly without conscious thought, but this would in no way prove that some sum of parts of her brain was combining to "produce" consciousness. Materialist explanations for consciousness are speculation unsupported by evidence.

43. etatro on 17 May 2012 at 6:16 pm
@ NewRon – Have you read a neuroscience textbook? Scientific answers to those questions are in a textbook. Search the literature for those key words and you'll find the "physical processes" in the form of biological activity leading to those mental states. The "how" question is often complicated and nuanced. Just because you can't explain how the brain causes the mind doesn't mean that the brain doesn't cause the mind. For centuries we didn't know "how" gravity worked, but we didn't assume that it didn't exist.

44. Mantiki on 20 May 2012 at 10:48 pm
Hi etatro. Just because you can't explain how the mind can be independent of the brain doesn't mean that the brain causes the mind. The explanation that consciousness is something fundamental rather than emergent can sit neatly within all scientific paradigms (including neuroscience).
For a report I'm writing on Quantum Computing, I'm interested in understanding a little about this famous equation. I'm an undergraduate student of math, so I can bear some formalism in the explanation. However, I'm not so stupid as to think I can understand this landmark without some years of physics. I'll just be happy to be able to read the equation and recognize it in its various forms. To be more precise, here are my questions. Hyperphysics tells me that Schrödinger's equation "is a wave equation in terms of the wavefunction". 1. Where is the wave equation in the most general form of the equation? $$\mathrm{i}\hbar\frac{\partial}{\partial t}\Psi=H\Psi$$ I thought a wave equation should be of the type $$\frac{\partial^2 \Psi}{\partial t^2} = c^2\nabla^2\Psi$$ It's the difference in order of derivation that is bugging me. From Wikipedia: "The equation is derived by partially differentiating the standard wave equation and substituting the relation between the momentum of the particle and the wavelength of the wave associated with the particle in De Broglie's hypothesis." 2. Can somebody show me the passages in a simple (or better, general) case? 3. I think this question is the most difficult to answer for a newbie. What is the Hamiltonian of a state? How much, generally speaking, does the Hamiltonian have to do with the energy of a state? 4. What assumptions did Schrödinger make about the wave function of a state, to be able to write the equation? Or what are the important things I should note in a wave function that are fundamental to prove the equation? With both questions I mean: what are the passages between de Broglie (yes, there are these waves) and Schrödinger (the wave function is characterized by…)? 5. It's said "The equation helps find the form of the wave function" as often as "The equation helps us predict the evolution of a wave function". Which of the two? When one, when the other? Philosophically I always find requests to explain an equation for the layman to be a little strange. The point of writing it in math is to have a precise and complete representation of the theory... – dmckee Dec 15 '12 at 16:13 You're right. That's why I tried to make it clear I'm not asking an explanation of the "equation" as you mean it, rather the meaning of the "symbols in it". In particular question number 1 is the most important for me now. – Temitope.A Dec 15 '12 at 17:04 For a connection between the Schrödinger equation and the Klein–Gordon equation, see e.g. A. Zee, QFT in a Nutshell, Chap. III.5, and this Phys.SE post plus links therein. – Qmechanic Dec 15 '12 at 18:21 You should not think of the Schrödinger equation as a true wave equation. In electricity and magnetism, the wave equation is typically written as $$\frac{\partial^2 \phi}{\partial t^2} = c^2 \nabla^2 \phi$$ with two temporal and two spatial derivatives. In particular, it puts time and space on 'equal footing': in other words, the equation is invariant under the Lorentz transformations of special relativity. The one-dimensional time-dependent Schrödinger equation for a free particle is $$\mathrm{i}\hbar\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2},$$ which has one temporal derivative but two spatial derivatives, and so it is not Lorentz invariant (but it is Galilean invariant). For a conservative potential, we usually add $V(x) \psi$ to the right hand side. Now, you can solve the Schrödinger equation in various situations, with potentials and boundary conditions, just like any other differential equation.
You in general will solve for a complex (analytic) solution $\psi(\vec r)$: quantum mechanics demands complex functions, whereas in the (classical, E&M) wave equation complex solutions are simply shorthand for real ones. Moreover, due to the probabilistic interpretation of $\psi(\vec r)$, we make the demand that all solutions must be normalized such that $\int |\psi(\vec r)|^2 d\vec r = 1$. We're allowed to do that because it's linear (think 'linear' as in linear algebra); it just restricts the number of solutions you can have. This requirement, plus linearity, gives you the following properties: 1. You can put any $\psi(\vec r)$ into Schrödinger's equation (as long as it is normalized and 'nice'), and the time-dependence in the equation will predict how that state evolves. 2. If $\psi$ is a solution to a linear equation, $a \psi$ is also a solution for any (complex) $a$. However, we say all such states are 'the same', and anyway we only accept normalized solutions ($\int |a\psi(\vec r)|^2 d\vec r = 1$). We say that solutions like $-\psi$, and more generally $e^{i\theta}\psi$, represent the same physical state. 3. Some special solutions $\psi_E$ are eigenstates of the right-hand side of the time-dependent Schrödinger equation, and therefore they can be written as $$-\frac{\hbar^2}{2m} \frac{\partial^2 \psi_E}{\partial x^2} = E \psi_E$$ and it can be shown that these solutions have the particular time dependence $\psi_E(\vec r, t) = \psi_E(\vec r) e^{-i E t/\hbar}$. As you may know from linear algebra, the eigenstate decomposition is very useful. Physically, these solutions are 'energy eigenstates' and represent states of constant energy. 4. If $\psi$ and $\phi$ are solutions, so is $a \psi + b \phi$, as long as $|a|^2 + |b|^2 = 1$ to keep the solution normalized. This is what we call a 'superposition'. A very important component here is that there are many ways to 'add' two solutions with equal weights: $\frac{1}{\sqrt 2}(\psi + e^{i \theta} \phi)$ is a solution for every angle $\theta$, hence we can combine states with plus or minus signs. This turns out to be critical in many quantum phenomena, especially interference phenomena such as Rabi and Ramsey oscillations that you'll surely learn about in a quantum computing class. Now, the connection to physics. 1. If $\psi(\vec r, t)$ is a solution to the Schrödinger equation at position $\vec r$ and time $t$, then the probability of finding the particle in a specific region can be found by integrating $|\psi|^2$ over that region. For that reason, we identify $|\psi|^2$ as the probability distribution for the particle. • We expect the total probability of finding the particle somewhere to be 1 at any particular time $t$. The Schrödinger equation has the (essential) property that if $\int |\psi(\vec r, t)|^2 d\vec r = 1$ at a given time, then the property holds at all times. In other words, the Schrödinger equation conserves probability. This implies that there exists a continuity equation. 2. If you want to know the mean value of an observable $A$ at a given time, just integrate $$ \langle A \rangle = \int \psi(\vec r, t)^* \hat A \psi(\vec r, t) \, d\vec r$$ where $\hat A$ is the linear operator associated to the observable. In the position representation, the position operator is $\hat A = x$, and the momentum operator is $\hat p = - i\hbar \partial / \partial x$, which is a differential operator. The connection to de Broglie is best thought of as historical. It's related to how Schrödinger figured out the equation, but don't look for a rigorous connection.
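To make these properties concrete, here is a minimal numerical sketch (my own illustration, not from the thread), assuming units with $\hbar = m = 1$ and a harmonic potential chosen purely for convenience: build the 1D Hamiltonian as a finite-difference matrix, diagonalize it to get the energy eigenstates $\psi_E$, and check the normalization condition above.

```python
import numpy as np

# Minimal sketch: 1D time-independent Schrodinger equation on a grid,
# -(1/2) psi'' + V(x) psi = E psi, with hbar = m = 1 (assumed units).
N = 500
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]
V = 0.5 * x**2          # harmonic well, an illustrative potential

# Second-derivative operator via central finite differences.
lap = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
       + np.diag(np.ones(N - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(V)

E, psi = np.linalg.eigh(H)          # eigenstates, sorted by energy
psi /= np.sqrt(dx)                  # normalize so sum |psi|^2 dx = 1

print(E[:4])                        # ~ [0.5, 1.5, 2.5, 3.5] for this well
print(np.sum(np.abs(psi[:, 0])**2) * dx)   # ~ 1.0: normalized
```

The lowest eigenvalues come out near $n + 1/2$, as expected for this particular well, and each column of `psi` behaves like one of the stationary states $\psi_E$ described in point 3 above.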
As for the Hamiltonian, that's a very useful concept from classical mechanics. In this case, the Hamiltonian is a measure of the total energy of the system and is defined classically as $H = \frac{p^2}{2m} + V(\vec r)$. In many classical systems it's a conserved quantity. $H$ also lets you calculate classical equations of motion in terms of position and momentum. One big jump to quantum mechanics is that position and momentum are linked, so knowing 'everything' about the position (the wavefunction $\psi(\vec r)$) at one point in time tells you 'everything' about momentum and evolution. In classical mechanics, that's not enough information; you must know both a particle's position and momentum to predict its future motion. Thank you! One last question. How does somebody relate the measurement principle to the equation, that an act of measurement will cause the state to collapse to an eigenstate? Or is time a concept independent of the equation? – Temitope.A Dec 16 '12 at 11:37 Can states of entanglement be seen in the equation too? – Temitope.A Dec 16 '12 at 11:47 Note that user10347 talks of a potential added to the differential equation. To get real-world solutions that predict the result of a measurement one has to apply the boundary conditions of the problem. The "collapse" vocabulary is misleading. A measurement has a specific probability of existing in the space coordinates or with the four-vectors measured. The measurement itself disturbs the potential and the boundary conditions change, so that after the measurement different solutions/psi functions will apply. – anna v Dec 16 '12 at 13:23 One type of measurement is strong measurement, where we, the experimentalists, measure some operator $A$ and find some particular (real) number $a_i$, which is one of the eigenvalues of $A$. (Important detail: for $A$ to be measurable, it must have all real eigenvalues.) Then, we know the wavefunction "suddenly" turns into $\psi_i$, which is the eigenfunction of $A$ whose eigenvalue was that number $a_i$ we measured. The system has lost all knowledge of the original wavefunction $\psi$. The probability of measuring $a_i$ is $|\langle\psi_i | \psi\rangle|^2$. – emarti Dec 18 '12 at 7:12 @Temitope.A: Entanglement isn't obvious in anything here because I've only written single-particle wavefunctions. A two-particle wavefunction $\Psi(\vec r_1, \vec r_2)$ gives a probability $\int_{V_1}\int_{V_2}|\Psi|^2 d \vec r_1 d \vec r_2$ of detecting one particle in a region $V_1$ and a second particle in a region $V_2$. A simple solution for distinguishable particles is $\Psi(\vec r_1, \vec r_2) = \psi_1(\vec r_1) \psi_2(\vec r_2)$, and it can be shown that this satisfies all our conditions. An entangled state cannot be written so simply. (Indistinguishable particles take more care.) – emarti Dec 18 '12 at 9:32 What you write is the time-dependent Schrödinger equation. This is not the equation of a true wave. Schrödinger postulated the equation using a heuristic approach and some ideas/analogies from optics, and he believed in the existence of a true wave. However, the correct interpretation of $\Psi$ was given by Born: $\Psi$ is an unobservable function whose complex square $|\Psi|^2$ gives probabilities. In older literature $\Psi$ is still named the wavefunction; in modern literature the term state function is preferred. The terms "wave equation" and "wave formulation" are legacy terms.
In fact, part of the confusion Schrödinger had, when he believed that his equation described a physical wave, is due to the fact that he worked with single particles. In that case $\Psi$ is defined in an abstract space which is isomorphic to three-dimensional space. However, when you consider a second particle and write $\Psi$ for a two-body system, the isomorphism is broken and the superficial analogy with a physical wave is completely lost. A good discussion of this is given in Ballentine's textbook on quantum mechanics (section 4.2). The Schrödinger equation cannot be derived from wave theory. This is why the equation is postulated in quantum mechanics. There is no Hamiltonian for one state; the Hamiltonian is characteristic of a given system, independently of its state. Energy is a possible physical property of a system, one of the possible observables of a system; it is more correct to say that the Hamiltonian gives the energy of a system when the system is in certain states. A quantum system always has a Hamiltonian, but it does not always have a defined energy. Only certain states $\Psi_E$ that satisfy the time-independent Schrödinger equation $H\Psi_E = E \Psi_E$ are associated with a value $E$ of energy. The quantum system can be in a superposition of the $\Psi_E$ states or can be in more general states for which energy is not defined. Wavefunctions $\Psi$ have to satisfy a number of basic requirements such as continuity, differentiability, finiteness, normalization... Some texts emphasize that the wavefunctions must be single-valued, but I already take this in the definition of a function. The Schrödinger equation gives both "the form of the wave function" and "the evolution of a wave function". If you know $\Psi$ at some initial time and integrate the time-dependent Schrödinger equation, you obtain the form of the wavefunction at some other instant: the integration is direct and gives $\Psi(t) = \mathrm{Texp}(-\mathrm{i}/\hbar \int_0^t H(t') dt') \Psi(0)$, where $\mathrm{Texp}$ denotes a time-ordered exponential. This equation also gives the evolution of the initial wavefunction $\Psi(0)$. When the Hamiltonian is time-independent, the solution simplifies to $\Psi(t) = \exp(-\mathrm{i}Ht/\hbar) \Psi(0)$. For stationary states, the time-dependent Schrödinger equation that you write reduces to the time-independent Schrödinger equation $H\Psi_E = E \Psi_E$; the demonstration is given in any textbook. For stationary states there is no evolution of the wavefunction: $\Psi_E$ does not depend on time, and solving the equation only gives the form of the wavefunction. Good answer. I would only add that regarding the last point, I think the confusion comes from references to the "time-independent" Schrödinger eigenvalue equation $H\psi_E = E\psi_E$ being conflated with the "time-dependent" evolution equation $\mathrm{i}\hbar \dot{\psi} = H\psi$, when of course the two are entirely different beasts. – Chris White Dec 15 '12 at 21:07 @ChrisWhite Good point. Made. – juanrga Dec 16 '12 at 2:33 Paragraph 6: maybe you should add that the equation only holds if H is time-independent. – ungerade Dec 16 '12 at 12:19 @ungerade Another good point! Added the evolution for when H is time-dependent.
– juanrga Dec 16 '12 at 12:49 If you take the wave equation $$\nabla^2\phi = \frac{1}{u^2}\frac{d^2\phi}{dt^2}\text{,}$$ and consider a single frequency component of a wave while taking out its time dependence, $\phi = \psi e^{-i\omega t}$, then: $$\nabla^2 \phi = -\frac{4\pi^2}{\lambda^2}\phi\text{,}$$ but that means the wave amplitude should satisfy an equation of the same form: $$\nabla^2 \psi = -\frac{4\pi^2}{\lambda^2}\psi\text{,}$$ and if you know the de Broglie relation $\lambda = h/p$, where a particle of energy $E$ in a potential $V$ has momentum $p = \sqrt{2m(E-V)}$, then: $$\underbrace{-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi}_{\hat{H}\psi} = E\psi\text{.}$$ Therefore, the time-independent Schrödinger equation has a connection to the wave equation. The full Schrödinger equation can be recovered by putting time-dependence back in, $\Psi = \psi e^{-i\omega t}$, while respecting the de Broglie relation $E = \hbar\omega$: $$\hat{H}\Psi = (\hat{H}\psi)e^{-i\omega t} = \hbar\omega \psi e^{-i\omega t} = i\hbar\frac{\partial\Psi}{\partial t}\text{,}$$ and then applying the principle of superposition for the general case. However, in this process the repeated application of the de Broglie relations takes us away from either classical waves or classical particles; to what extent the resulting "wave function" should be considered a wave is mostly a semantic issue, but it's definitely not at all a classical wave. As other answers have delved into, the proper interpretation for this new "wave function" $\Psi$ is inherently probabilistic, with its modulus-squared representing a probability density and the gradient of the complex phase being the probability current (scaled by some constants and the probability density). As for the de Broglie relations themselves, it's possible to "guess" them by making an analogy from waves to particles. Writing $u = c/n$ and looking for solutions close to plane wave in form, $\phi = e^{A+ik_0(S-ct)}$, the wave equation gives: $$\begin{eqnarray*} \nabla^2A + (\nabla A)^2 &=& k_0^2[(\nabla S)^2 - n^2]\text{,}\\ \nabla^2 S +2\nabla A\cdot\nabla S &=& 0\text{.} \end{eqnarray*}$$ Under the assumption that the index of refraction $n$ changes slowly over distances on the order of the wavelength, $A$ does not vary extremely, the wavelength is small, and so $k_0^2 \propto \lambda^{-2}$ is large. Therefore the term in the square brackets should be small, and we can make the approximation: $$(\nabla S)^2 = n^2\text{,}$$ which is the eikonal equation that links the wave equation with geometrical optics, in which the motion of light of small wavelengths in a medium of well-behaved refractive index can be treated as rays, i.e., as if described by paths of particles/corpuscles. For the particle analogy to work, the eikonal function $S$ must take the role of Hamilton's characteristic function $W$ formed by separation of variables from the classical Hamilton–Jacobi equation into $W - Et$, which forces the latter to be proportional to the total phase of the wave, giving $E = h\nu$ for some unknown constant of proportionality $h$ (physically, Planck's constant). The index of refraction $n$ corresponds to $\sqrt{2m(E-V)}$. This is discussed in, e.g., Goldstein's Classical Mechanics, if you're interested in details. Your first equation is a wave equation only if you substitute the total time derivatives with partial ones.
Moreover, you introduce a $\Psi = \psi e^{-i\omega t} = \phi$, but the wavefunction $\Psi$ does not satisfy the first equation for a wave. – juanrga Dec 18 '12 at 11:21 Your Answer
Is time simply the rate of change? If this is the case, and time was created during the Big Bang, would it be the case that the closer you get to the start of the Big Bang the "slower" things change, until you essentially approach a static, unchanging entity at the beginning of creation? Also, to put this definition in relation to Einstein's conclusion that "observers in motion relative to one another will measure different elapsed times for the same event": wouldn't saying there is a difference in elapsed time be the same as saying there is a difference in the rate of change? With this definition there is no point in describing the "flow" of time or the "direction" of time, because time doesn't move forward; rather, things simply change according to the laws of physics. Edit: Adding clarification based on @neil's comments: The beginning of the Big Bang would be very busy, but if time was created then, if you go back to the very beginning it seems there is no time and there is only a static environment. So it seems to me that saying time has a direction makes no sense. There is no direction in which time flows. There is no time; unless time is defined as change. So we have our three-dimensional objects, and then we have those objects interact. The interaction is what we experience as time. Is this correct, or is time more complicated than this? If you're concerned with the rate at which things change, shouldn't things go faster as you approach the Big Bang? The first hour of the universe was an extremely busy time. – Niel de Beaudrap Oct 4 '11 at 15:36 More generally and to the point: how do you determine "the rate of change" without a fixed standard for time, anyhow? Fast processes still happen now; just perhaps less frequently than before. That, and we're often more interested in glacially slow processes, such as human behaviour, and well, the movements of glaciers. It makes the most sense to establish a collection of commensurable standards of time reaching back to the Big Bang; but commensurability pretty much prevents any process of "time inflation" --- at least in how we measure time. – Niel de Beaudrap Oct 4 '11 at 15:38 Things may happen "faster" compared to things happening on Earth now, but wouldn't you eventually reach the beginning, where nothing is happening and you reach a static/stable environment? – coder Oct 4 '11 at 16:02 It depends on how you're trying to define a changing scale of time! If the "activity" (very vague) of the universe is getting slower with time in an exponential decay, then going backwards in time would look like watching a computer which performs one instruction in 1 Gyr, a second instruction in .5 Gyr, a third in .25 Gyr, getting faster with time. If you "rescale time" so that each instruction takes one "operational time unit", what you find is not that things come to a rest but that you can squeeze in an infinite regress of activity immediately after the Big Bang. Very speculative of course! – Niel de Beaudrap Oct 4 '11 at 16:10 I admit how time apparently "flows" is a difficult problem and one of the most mysterious in physics. But reading one comment above I remember one of the famous quotes: "Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage to move in the opposite direction.
" – user1355 Oct 7 '11 at 17:05 11 Answers 11 up vote 1 down vote accepted Since for some reason this question has resurfaced, I would like to point to a similar one posed later than this. Observation of change is important to defining a concept of time. If there are no changes, no time can be defined. But it is also true that if space were not changing, no contours, we would not have a concept of space either. A total three dimensional uniformity would not register. Our scientific time definition uses the concept of entropy to codify change in space, and entropy tells us that there exists an arrow of time. In special relativity and general relativity time is defined as a fourth coordinate on par with the three space directions, with an extension to imaginary numbers for the mathematical transformations involved. The successful description of nature, particularly by special relativity, confirms the use of time as a coordinate on par with the space coordinates. It is the arrow of time that distinguishes it in behavior from the other coordinates as far as the theoretical description of nature goes. share|cite|improve this answer it could be also added that entropy as an indicator of time is a relatively modern concept and not the only one. Surely the earliest thinkers compared cyclic processes with non-cyclic ones and found that some other non entropy-rising systems would behave as only one directional processes. I think of aging for example, compared to the tides or moon cycle. – rmhleo Aug 20 '15 at 9:42 This question ("Is time simply the rate of change?") is too ambiguous to have any meaningful answer. I can think of interpretations in which the question is vacuous (begging the question: "what is meant by 'rate of change'?"), tautological ("rate of change" == d/dt), or in which the answer is 'no' (GR). You might find the answer you seek in this book: share|cite|improve this answer to rephrase: is time a thing in itself or is time simply things changing? This is probably a hard question to articulate. Add that to my lack of understanding of physics :-) – coder Oct 10 '11 at 14:25 @Jeremy: most questions that are hard to articulate in this way are not meaningful, they are only philosophical words that make the brain go in circles. The questions about time which are meaningful are those that can be answered by observations. – Ron Maimon Dec 8 '11 at 5:47 Time is what is measured by clocks. But how is time modelled in physical theories ? In the Schrödinger equation time enters as an external parameter. How does this parameter correspond to the time measured by clocks ? The following reference might be a good introduction to this and related questions concerning time and quantum mechanics : share|cite|improve this answer There's is no such notion as "time" in isolation from space. Since time is a measure of entropy of space, then time wouldn't exist if the space is absolutely static. Imagine that one will somehow manage to 'rollback' the matter & energy to a state in which it was yesterday. Would this be a time travel? I don't see reasons why it wouldn't. There are things not affected by time - say, physical laws and regularities. Since we assume that they are the innate property of the universe, we also assume that they exist out of the scope of time and space. That is, time didn't exist before the BigBang, but the laws did. Edit: it's rather difficult for me, though, to imagine a physical law existing in isolation from things that it governs. 
A physical law is the description of the thing that it governs. – Prathyush Oct 14 '12 at 14:45 Certainly time is intriguing, but there are two different things going on here: (1) there is (classically) the manifold, and (2) the zeroth component of the momentum 4-vector. To start, the temporal part of the gravitational potential does have some weird geometry that we aren't used to in everyday life, and this certainly plays a role in some of the strangeness surrounding "time", but a decomposition of the EFE demonstrates that actually $g_{00}$ and $g_{0i}$ don't have time derivatives. The temporal parts of the space-time manifold are static; only the spatial parts, $g_{ij}$, are dynamic. So where is this notion of "flow" coming from? Instead, think of the manifold as a landscape, with something like a "temporal" direction. Our movement through that direction is determined by the zeroth component of the momentum 4-vector: energy, temporal momentum. Why are almost all things in everyday life moving in the same "direction" of time? It's not because we are all in the same river, it's because we are all made of the same stuff. If you want to relate "time" with a rate of change, a place to start looking is at the momentum 4-vector, not the spacetime manifold. A clear understanding of time, in my opinion, still eludes us. Within the scope of classical concepts there is a perfectly valid practical definition of time, which essentially is the correlation between the periodic behaviours of systems. For instance, the behaviour of a pendulum is correlated with the motion of the earth around the sun, as N periods of the pendulum correspond to 1 period of the earth's orbit around the sun. The property of periodicity in classical systems is essential in the definition of a clock. The question about the arrow of time, in my opinion, boils down to our inability to prepare systems in precise initial conditions, which only allows the possibility of predicting their behaviour in a statistical sense. We are also limited to measuring only certain properties of a system, and we cannot acquire complete information. This is a limitation we must accept on our ability to perform experiments. In this sense, if we use the clock we defined only using classical concepts, then this implies that time flows in a preferred direction, i.e., the direction of increasing entropy. The question of time, in my opinion, will completely resolve itself if one understands what a memory impression is. Memory, being a permanent impression, contains a record of the passing of time. I think it is very closely connected to the foundational issues that plague quantum measurement. Having said this, coming to your question on time running slower closer to the Big Bang: in one sense one can say that there is no structure with which to measure the movement of time. But really, to answer this question we have to wait for the discovery of quantum gravity. The 7 fundamental quantities are hard to define.
• Time
• Displacement/position
• Mass
• Temperature
• Current
• Amount of substance (i.e., the mole)
• Luminous intensity
More here: Time is just one of them. If someone asked me, I would say something like "Time is how long something lasts" or "Time is the duration of something". But that's circular; it's the same as saying "Time is how much time has passed". So it really doesn't say anything. Any other physical quantity can be explained in terms of these 7 fundamental ones.
E.g., velocity is how far something moves per unit of time. But how can you explain/define the fundamental ones themselves, then? My best answer is: you can't! You can only define them empirically or with examples. Time is what a clock shows, I heard someone say once. You do, though, say that we can define displacement and current. But how? How would you do that without ending up in a similar circular explanation? I go with "Time is the separation between distinct events that happen in the same place," which is very general and not quantitative at all, but covers the basics. Given three distinct events that happen at the same place, we can determine which happened between the other two from just the values of the three separations. And it agrees with the notion that "time is what a clock measures". From the perspective of relativity this definition is the proper time. The concept of time is intimately related to the concept of causality. If we don't have the notion that something can cause some other thing, then there is no objective meaning to the word "time". It's causality which enables us to decide and describe which event is past and which is present. In relativity, as we know, space and time are intimately related to each other. What is just space to some observer may be a combination of space and time for another. It is therefore helpful to think of a $4$-dimensional space called spacetime whose points represent events. An event therefore needs $4$ independent numbers to be uniquely specified. Out of these $4$ numbers, one is a little special. If you draw a light cone at any point in this space then all except one of the axes will be outside the light cone. This special axis is the direction of time, and the number it represents is "time". Thanks for not commenting on the reason for the down vote. That's a huge relief ;) – user1355 Oct 7 '11 at 17:08 +1 because this definitely doesn't deserve a -1 :-). You essentially rephrase my question in the first part. I could have rephrased it as asking, "is it the case that time is simply causality?" If this is the case then it seems the notion of a "flow" of time only exists because we have a memory of past events; when in fact there is no past, there is no future, there is only stuff which interacts. So the words "time", "causality" and "interaction" are interchangeable, leaving us with only stuff that changes. – coder Oct 7 '11 at 17:44 Not my downvote, but the concept of time does not require a concept of causality, which is notoriously hard to pin down in the microrealm and probably doesn't make any sense. – Ron Maimon Oct 10 '11 at 5:30 I disagree with you, Ron. First, it is dangerous to say that causality doesn't make any sense in the micro realm. If it were so, then how could you ever trust QFT, which is fully consistent with S.R. and which requires causality to hold strictly? Secondly, if two events are spacelike separated in the micro realm, how do you decide which has taken place earlier? You can't, unless you seriously modify the existing theories or unless you are talking about some as yet unknown QG theory. – user1355 Oct 11 '11 at 16:02 @RonMaimon: However, it is true that in the microscopic world there may be processes which may not have any intrinsic "arrow of time". But that's an altogether different issue, right?
– user1355 Oct 11 '11 at 16:09 Well, the way time should be conceived is the same way you should look at motion or any type of energy, kinetic or potential, so it should be treated as such. For example, when an object falls from a table, the time it takes to travel through the air coincides with the space around it ("space-time", to be precise, which the fine gent below me is proclaiming). So your question comes up, to which the answer would most likely be yes. But not to forget: time is also a unit of measurement, like length, width, and depth, and we use it as such. That it is simply the rate of change is plausible under certain theoretical works from the past, which many have been trying to establish as fact in the present. You seem to think time started with the Big Bang; how long was the matter there before the Big Bang? We are still only talking about half an equation. Two points are still needed to justify either of our perspectives.
Regge theory From Wikipedia, the free encyclopedia In quantum physics, Regge theory is the study of the analytic properties of scattering as a function of angular momentum, where the angular momentum is not restricted to be an integer but is allowed to take any complex value. The nonrelativistic theory was developed by Tullio Regge in 1959. History and implications The main result of the theory is that the scattering amplitude for potential scattering grows as a function of the cosine $z$ of the scattering angle as a power that changes as the scattering energy changes: $$A(z) \propto z^{l(E^2)}$$ where $l(E^2)$ is the noninteger value of the angular momentum of a would-be bound state with energy $E$. It is determined by solving the radial Schrödinger equation, and it smoothly interpolates the energy of wavefunctions with different angular momentum but with the same radial excitation number. In the relativistic generalization, the trajectory is taken to be a function of $s = E^2$. The expression $l(s)$ is known as the Regge trajectory function, and when it is an integer, the particles form an actual bound state with this angular momentum. The asymptotic form applies when $z$ is much greater than one, which is not a physical limit in nonrelativistic scattering. Shortly afterwards, Stanley Mandelstam noted that in relativity the purely formal limit of large $z$ is near to a physical limit — the limit of large $t$. Large $t$ means large energy in the crossed channel, where one of the incoming particles has an energy-momentum that makes it an energetic outgoing antiparticle. This observation turned Regge theory from a mathematical curiosity into a physical theory: it demands that the function that determines the falloff rate of the scattering amplitude for particle–particle scattering at large energies is the same as the function that determines the bound state energies for a particle–antiparticle system as a function of angular momentum.[1] The switch required swapping the Mandelstam variable $s$, which is the square of the energy, for $t$, which is the squared momentum transfer, which for elastic soft collisions of identical particles is $s$ times one minus the cosine of the scattering angle. The relation in the crossed channel becomes $$A(z) \propto s^{l(t)}$$ which says that the amplitude has a different power-law falloff as a function of energy at different corresponding angles, where corresponding angles are those with the same value of $t$. It predicts that the function that determines the power law is the same function that interpolates the energies where the resonances appear. The range of angles where scattering can be productively described by Regge theory shrinks into a narrow cone around the beam-line at large energies. In 1960 Geoffrey Chew and Steven Frautschi conjectured from limited data that the strongly interacting particles had a very simple dependence of the squared mass on the angular momentum: the particles fall into families where the Regge trajectory functions are straight lines: $$l(s) = ks$$ with the same constant $k$ for all the trajectories. The straight-line Regge trajectories were later understood as arising from massless endpoints on rotating relativistic strings. Since a Regge description implied that the particles were bound states, Chew and Frautschi concluded that none of the strongly interacting particles were elementary.
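A straight trajectory $l(s) = ks$ can be turned into concrete numbers: for a given slope $k$, integer spins should appear at equally spaced squared masses. A minimal sketch (the value of $k$ below is an assumed, order-of-magnitude figure of the kind seen on meson Chew–Frautschi plots, not a fitted constant):

```python
# Sketch: masses predicted by a straight Regge trajectory l(s) = k * s,
# with s = M^2 and no intercept, as in the text above.
k = 0.9   # GeV^-2, an illustrative slope (assumed, not fitted)

for J in range(1, 6):
    s = J / k                 # solve J = k * s for the squared mass
    print(f"J = {J}:  M^2 = {s:.2f} GeV^2,  M = {s**0.5:.2f} GeV")
```

The equal spacing in $M^2$ as the spin steps up is exactly the family structure Chew and Frautschi read off their plots.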
Experimentally, the near-beam behavior of scattering did fall off with angle as explained by Regge theory, leading many to accept that the particles in the strong interactions were composite. Much of the scattering was diffractive, meaning that the particles hardly scatter at all — staying close to the beam line after the collision. Vladimir Gribov noted that the Froissart bound combined with the assumption of maximum possible scattering implied there was a Regge trajectory that would lead to logarithmically rising cross sections, a trajectory nowadays known as the Pomeron. He went on to formulate a quantitative perturbation theory for near-beam-line scattering dominated by multi-Pomeron exchange. From the fundamental observation that hadrons are composite, there grew two points of view. Some correctly advocated that there were elementary particles, nowadays called quarks and gluons, which made a quantum field theory in which the hadrons were bound states. Others also correctly believed that it was possible to formulate a theory without elementary particles — where all the particles were bound states lying on Regge trajectories and scattering self-consistently. This was called S-matrix theory. The most successful S-matrix approach centered on the narrow-resonance approximation, the idea that there is a consistent expansion starting from stable particles on straight-line Regge trajectories. After many false starts, Dolen, Horn, and Schmid understood a crucial property that led Gabriele Veneziano to formulate a self-consistent scattering amplitude, the first string theory. Mandelstam noted that the limit where the Regge trajectories are straight is also the limit where the lifetime of the states is long. As a fundamental theory of strong interactions at high energies, Regge theory enjoyed a period of interest in the 1960s, but it was largely succeeded by quantum chromodynamics. As a phenomenological theory, it is still an indispensable tool for understanding near-beam-line scattering and scattering at very large energies. Modern research focuses both on the connection to perturbation theory and to string theory. An unsolved problem in physics: how does Regge theory emerge from quantum chromodynamics at long distances? References 1. Gribov, V. (2003). The Theory of Complex Angular Momentum. Cambridge University Press. ISBN 0-521-81834-6.
Symmetry, Integrability and Geometry: Methods and Applications (SIGMA) SIGMA 4 (2008), 014, 7 pages      arXiv:0802.0482 Contribution to the Proceedings of the Seventh International Conference Symmetry in Nonlinear Mathematical Physics Symmetry Transformation in Extended Phase Space: the Harmonic Oscillator in the Husimi Representation Samira Bahrami a and Sadolah Nasiri b a) Department of Physics, Zanjan University, Zanjan, Iran b) Institute for Advanced Studies in Basic Sciences, Iran Received October 08, 2007, in final form January 23, 2008; Published online February 04, 2008 In a previous work the concept of quantum potential was generalized into extended phase space (EPS) for a particle in linear and harmonic potentials. It was shown there that, in contrast to the Schrödinger quantum mechanics, by an appropriate extended canonical transformation one can obtain the Wigner representation of phase space quantum mechanics, in which the quantum potential is removed from the dynamical equation. In other words, one still has the form invariance of the ordinary Hamilton–Jacobi equation in this representation. The situation, mathematically, is similar to the disappearance of the centrifugal potential in going from spherical to Cartesian coordinates. Here we show that the Husimi representation is another possible representation where the quantum potential for the harmonic potential disappears and the modified Hamilton–Jacobi equation reduces to the familiar classical form. This happens when the parameter in the Husimi transformation assumes a specific value corresponding to the Q-function. Key words: Hamilton–Jacobi equation; quantum potential; Husimi function; extended phase space.
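For readers unfamiliar with the Husimi (Q) representation the abstract refers to: the Q-function is the overlap of a state with a family of coherent states, one per phase-space point. A minimal sketch (my own generic illustration under assumed units $\hbar = 1$, not code from the paper) evaluates it for the harmonic-oscillator ground state and compares with the known Gaussian closed form:

```python
import numpy as np

# Husimi Q-function of the harmonic-oscillator ground state, hbar = 1.
# Q(q, p) = |<alpha|psi>|^2 / pi with alpha = (q + i p)/sqrt(2).
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)          # ground-state wavefunction

def husimi(q, p):
    # Coherent-state wavefunction centered at (q, p), up to a global phase.
    phi = np.pi**-0.25 * np.exp(-(x - q)**2 / 2 + 1j * p * x)
    overlap = np.sum(np.conj(phi) * psi0) * dx
    return np.abs(overlap)**2 / np.pi

print(husimi(1.0, 0.5))                          # numerical value
print(np.exp(-(1.0**2 + 0.5**2) / 2) / np.pi)    # closed form: exp(-(q^2+p^2)/2)/pi
```

The two numbers agree, illustrating that for this state the Q-function is a smooth, everywhere-nonnegative Gaussian on phase space.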
The Sonic Screwdriver working prototype During summers, while working at the Institute for Medical Science & Technology, which is located at the largest research hospital in the United Kingdom, we made a sonic screwdriver. ...It beams out enough ultrasound to pick up a 4-inch diameter, half-inch thick rubber disk and spin it around (and we can switch the direction of rotation and the rate of spinning). Other members of the team have begun work to raise the operating frequency by two orders of magnitude, which shrinks everything in size, miniaturizing the device. -- And by using resonators the power requirements also shrink.      Why? Well, on the one hand, Magnetic Resonance Imaging + focused ultrasound has some real potential for medical treatment, but needs further development. On a more academic level, our results validate, for the first time directly, something quite general: the theoretically predicted ratio of the orbital angular momentum to linear momentum in a propagating beam.      There are some interesting predictions about these sorts of beams (e.g., negative radiation pressure in higher-order Bessel beams) that we'd like to test, and they also serve as rapid, sensitive tests of system aberrations (wibbly wobbly beamy-weamy stuff that we like to have control over). One of my IWU students and I will go back to Scotland this summer to work on those aspects.      In addition, this work provides a model system that fits nicely into teaching, where I have to work hard to convince students that there really isn't anything "orbiting" in the stationary states that we find in solving the Schrödinger equation for the hydrogen atom, even though those states can have orbital angular momentum. What does orbital angular momentum mean, when nothing is orbiting?       At a soccer stadium, a beach ball might be sent around the stadium by fans "doing the wave", but if you're a fan, you generally would bat the ball laterally in order to transfer momentum in that direction: that's cheating! If, instead, you start with a fan with a stationary ball overhead and the fan simply moves up and down, then you have to ask where the transverse momentum comes from.       By making analogous states with acoustic waves, these ideas become clearer. For example, beneath the blue hockey puck that we acoustically levitate in the video is an array of one thousand piezoelectric actuators, each simply moving up and down. Surprisingly, this transfers angular momentum if we tailor the phase shift between adjacent actuators.       In the left-hand figure above, color represents the relative phase lags we impose on our array of actuators, as we move around a circle. Note that since phase is a periodic variable (ranging from zero to 360°), the horizontal line is not a discontinuity. It's the helicity of the resulting wavefront (shown at right) that allows these waves to transfer angular momentum. Our work looks specifically at the ratio of the orbital angular momentum to the linear momentum carried by a propagating beam, as we increase the "pitch" of these helical waves (which, again, is a result of the phase profile we impose upon our array of actuators). Importantly, the same "phase factor" describes the stationary states of hydrogen that carry "orbital" angular momentum. It also describes laser modes that carry "orbital" angular momentum.
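The drive pattern described above is simple to generate in software. A minimal sketch (the array size and topological charge below are illustrative stand-ins, not the actual device parameters): assign each actuator a phase lag equal to ℓ times its azimuthal angle about the array center, which is exactly the kind of winding phase map described for the left-hand figure.

```python
import numpy as np

# Phase lags for a square array of up-and-down actuators so that the
# emitted wavefront is helical with topological charge ell.
ell = 2                              # illustrative "pitch"; 0 gives a flat wave
side = 32                            # 32 x 32 ~ the thousand-element scale
j, i = np.meshgrid(np.arange(side), np.arange(side))
xc, yc = (side - 1) / 2, (side - 1) / 2

theta = np.arctan2(i - yc, j - xc)        # azimuthal angle of each actuator
phase = np.mod(ell * theta, 2 * np.pi)    # drive phase, wrapped to [0, 2*pi)

# Each element is driven as cos(omega * t - phase[m, n]); the phase winds
# ell times around the center, even though every element only moves up and down.
print(phase[0, :5])
```

Note the wrap from 2π back to 0 along one radial line is not a physical discontinuity, for the same periodicity reason given in the text.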
By the way, one of the (several) advantages of doing this with acoustic waves rather than light waves is that acoustic waves aren't polarized, and this eliminates one source of confusion that may arise in the optical case. -- In thinking about optics, most people associate the angular momentum of light with circular polarization, which is really something else entirely: that sort of angular momentum comes from a rotating polarization (and is called "spin" angular momentum or "intrinsic" angular momentum). What we are interested in, instead, is orbital angular momentum, which is (quite generally) associated with the helicity of the wavefront. "The sonic screwdriver," G. C. Spalding, C. Démoré, A. Volovick, Z. Yang, Y. Hertzberg, M. MacDonald, A. Cochran, Proceedings SPIE 8097, 8097-58 (2011). "Mechanical evidence of the orbital angular momentum to energy ratio of vortex beams," C. Démoré, Z. Yang, A. Volovick, S. Cochran, M. MacDonald, G. C. Spalding, Physical Review Letters 108, 194301 (2012). Gabe Spalding & Kishan Dholakia teach a related short course for professionals, to enable you to: 1. assess a variety of approaches to beam shaping and wavefront correction 2. explain simple protocols for optimizing some beam types of broad interest 3. describe various aspects of data analysis for some wavefront correction algorithms 4. identify key options for enhanced degrees of beam control, resolution, and sensitivity
Instanton From Wikipedia, the free encyclopedia An instanton[1] (or pseudoparticle[2][3]) is a notion appearing in theoretical and mathematical physics. An instanton is a classical solution to equations of motion[note 1] with a finite, non-zero action, either in quantum mechanics or in quantum field theory. More precisely, it is a solution to the equations of motion of the classical field theory on a Euclidean spacetime. Quantum theory In such quantum theories, solutions to the equations of motion may be thought of as critical points of the action. The critical points of the action may be local maxima of the action, local minima, or saddle points. For example, the classical path (or classical equation of motion) is the path that minimizes the action and is therefore a global minimum. Instantons are important in quantum field theory because: • they appear in the path integral as the leading quantum corrections to the classical behavior of a system, and • they can be used to study the tunneling behavior in various systems such as a Yang–Mills theory. Mathematically, a Yang–Mills instanton is a self-dual or anti-self-dual connection in a principal bundle over a four-dimensional Riemannian manifold that plays the role of physical space-time in non-abelian gauge theory. Instantons are topologically nontrivial solutions of the Yang–Mills equations that absolutely minimize the energy functional within their topological type. The first such solutions were discovered in the case of four-dimensional Euclidean space compactified to the four-dimensional sphere, and turned out to be localized in space-time, prompting the names pseudoparticle and instanton. Yang–Mills instantons have been explicitly constructed in many cases by means of twistor theory, which relates them to algebraic vector bundles on algebraic surfaces, and via the ADHM construction, or hyperkähler reduction (see hyperkähler manifold), a sophisticated linear algebra procedure. The groundbreaking work of Simon Donaldson, for which he was later awarded the Fields Medal, used the moduli space of instantons over a given four-dimensional differentiable manifold as a new invariant of the manifold that depends on its differentiable structure, and applied it to the construction of homeomorphic but not diffeomorphic four-manifolds. Many methods developed in studying instantons have also been applied to monopoles. This is because magnetic monopoles arise as solutions of a dimensional reduction of the Yang–Mills equations; see, for instance, the introduction to Hitchin's paper Self-Duality Equations on a Riemann Surface. Quantum mechanics An instanton can be used to calculate the transition probability for a quantum mechanical particle tunneling through a potential barrier. One of the simplest examples of a system with an instanton effect is a particle in a double-well potential. In contrast to a classical particle, there is a non-vanishing probability that it crosses a region of potential energy higher than its own energy. One way to calculate this probability is by means of the semi-classical WKB approximation, which requires the value of $\hbar$ to be small. The time-independent Schrödinger equation for the particle reads $$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\psi = E\psi.$$ If the potential were constant, the solution would (up to proportionality) be a plane wave, $$\psi = \exp(-\mathrm{i}kx)$$ with $k = \sqrt{2m(E-V)}/\hbar$. This means that if the energy of the particle is smaller than the potential energy, $k$ becomes imaginary and one obtains an exponentially decreasing function.
The associated tunneling amplitude is proportional to $$e^{-\frac{1}{\hbar}\int_a^b\sqrt{2m(V(x)-E)} \, dx},$$ where $a$ and $b$ are the beginning and endpoint of the tunneling trajectory. Alternatively, the use of path integrals allows an instanton interpretation, and the same result can be obtained with this approach. In the path integral formulation, the transition amplitude can be expressed as $$K(a,b;t)=\langle x=a|e^{-\frac{i\mathbb{H}t}{\hbar}}|x=b\rangle =\int d[x(t)]e^{\frac{iS[x(t)]}{\hbar}}.$$ Following the process of Wick rotation (analytic continuation) to Euclidean spacetime ($it\rightarrow \tau$), one gets $$K_E(a,b;\tau)=\langle x=a|e^{-\frac{\mathbb{H}\tau}{\hbar}}|x=b\rangle =\int d[x(\tau)]e^{-\frac{S_E[x(\tau)]}{\hbar}},$$ with the Euclidean action $$S_E=\int_{\tau_a}^{\tau_b}\left(\frac{1}{2}m\left(\frac{dx}{d\tau}\right)^2+V(x)\right) d\tau.$$ The potential energy changes sign, $V(x) \rightarrow - V(x)$, under the Wick rotation and the minima transform into maxima: the inverted potential exhibits two "hills" of maximal energy. Another way to understand the concept of instantons is to consider the action in the path integral. We generally want to look for solutions to a Hamiltonian that minimize the action. We know that the classical solution is the minimum of this. However, if we choose a path that deviates slightly from the classical path, it is possible that its action is infinite, and so that solution is not viable. It is possible in some cases to find a solution that deviates from the classical path but whose action differs finitely from the classical action. These are the instanton solutions. Results obtained from the mathematically well-defined Euclidean path integral may be Wick-rotated back and give the same physical results as would be obtained by appropriate treatment of the (potentially divergent) Minkowskian path integral. As can be seen from this example, calculating the transition probability for the particle to tunnel through a classically forbidden region (of $V(x)$) with the Minkowskian path integral corresponds to calculating the transition probability to tunnel through a classically allowed region (with potential $-V(x)$) in the Euclidean path integral (pictorially speaking — in the Euclidean picture — this transition corresponds to a particle rolling from one hill of a double-well potential standing on its head to the other hill). This classical solution of the Euclidean equations of motion is often named the "kink solution" and is an example of an instanton. In this example, the two "vacua" of the double-well potential turn into hills in the Euclideanized version of the problem. Thus, the instanton field solution of the (Euclidean, i.e., with imaginary time) (1+1)-dimensional field theory (the first-quantized quantum mechanical description) can be interpreted as a tunneling effect between the two vacua of the physical (1-dimensional space + real time) Minkowskian system. Note that a naive perturbation theory around one of those two vacua (of the Minkowskian description) would never show this non-perturbative tunneling effect, dramatically changing the picture of the vacuum structure of this quantum mechanical system. Therefore, the perturbative approach may not completely describe the vacuum structure of a physical system. This may have important consequences, for example, in the theory of "axions" where the non-trivial QCD vacuum effects (like the instantons) spoil the Peccei–Quinn symmetry explicitly and transform massless Nambu–Goldstone bosons into massive pseudo-Nambu–Goldstone ones.
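The WKB amplitude above is straightforward to evaluate for the double-well example. A sketch (with $\hbar = m = 1$ and an illustrative quartic double well; the parameters are assumed, not canonical): find the turning points where $V(x) = E$ and integrate $\sqrt{2m(V(x)-E)}$ between them.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# WKB tunneling exponent exp(-(1/hbar) * integral_a^b sqrt(2m(V-E)) dx)
# for a double well V(x) = lam * (x^2 - v^2)^2, with hbar = m = 1 (assumed).
lam, v, E = 1.0, 1.0, 0.2

V = lambda x: lam * (x**2 - v**2)**2

# Turning points a < b around the central barrier, where V(x) = E.
a = brentq(lambda x: V(x) - E, -v, 0.0)
b = brentq(lambda x: V(x) - E, 0.0, v)

integrand = lambda x: np.sqrt(2.0 * max(V(x) - E, 0.0))
S, _ = quad(integrand, a, b)
print("barrier action:", S, " tunneling factor ~", np.exp(-S))
```

The same exponent is what the Euclidean "kink" trajectory reproduces: the instanton's Euclidean action plays the role of the WKB barrier integral.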
Quantum field theory [Figure: stereographic projection of the hypersphere $S^3$, showing parallels (red), meridians (blue) and hypermeridians (green).[note 2]] In studying quantum field theory (QFT), the vacuum structure of a theory may draw attention to instantons. Just as a double-well quantum mechanical system illustrates, a naive vacuum may not be the true vacuum of a field theory. Moreover, the true vacuum of a field theory may be an "overlap" of several topologically inequivalent sectors, so-called "topological vacua". A well understood and illustrative example of an instanton and its interpretation can be found in the context of a QFT with a non-abelian gauge group,[note 3] a Yang–Mills theory. For a Yang–Mills theory these inequivalent sectors can be (in an appropriate gauge) classified by the third homotopy group of SU(2) (whose group manifold is the 3-sphere $S^3$). A certain topological vacuum (a "sector" of the true vacuum) is labelled by its Pontryagin index. As the third homotopy group of $S^3$ has been found to be the set of integers, there are infinitely many topologically inequivalent vacua, denoted by $|N\rangle$, where $N$ is their corresponding Pontryagin index. An instanton is a field configuration fulfilling the classical equations of motion in Euclidean spacetime, which is interpreted as a tunneling effect between these different topological vacua. It is again labelled by an integer number, its Pontryagin index, $Q$. One can imagine an instanton with index $Q$ to quantify tunneling between topological vacua $|N\rangle$ and $|N+Q\rangle$. If $Q = 1$, the configuration is named a BPST instanton after its discoverers Alexander Belavin, Alexander Polyakov, Albert S. Schwartz and Yu. S. Tyupkin. The true vacuum of the theory is labelled by an "angle" theta and is an overlap of the topological sectors: $$|\theta\rangle =\sum_{N=-\infty}^{N=+\infty}e^{i \theta N}|N\rangle.$$ Gerard 't Hooft first performed the field-theoretic computation of the effects of the BPST instanton in a theory coupled to fermions in [1]. He showed that zero modes of the Dirac equation in the instanton background lead to a non-perturbative multi-fermion interaction in the low energy effective action. Yang–Mills theory The classical Yang–Mills action on a principal bundle with structure group $G$, base $M$, connection $A$, and curvature (Yang–Mills field tensor) $F$ is $$S_{YM} = \int_M \left|F\right|^2 d\mathrm{vol}_M,$$ where $d\mathrm{vol}_M$ is the volume form on $M$. If the inner product on $\mathfrak{g}$, the Lie algebra of $G$ in which $F$ takes values, is given by the Killing form on $\mathfrak{g}$, then this may be denoted as $\int_M \mathrm{Tr}(F \wedge *F)$, since $F \wedge *F = \langle F, F \rangle d\mathrm{vol}_M$. For example, in the case of the gauge group U(1), $F$ will be the electromagnetic field tensor. From the principle of stationary action, the Yang–Mills equations follow. They are $$\mathrm{d}F = 0, \quad \mathrm{d}{*F} = 0.$$ The first of these is an identity, because $\mathrm{d}F = \mathrm{d}^2 A = 0$, but the second is a second-order partial differential equation for the connection $A$, and if the Minkowski current vector does not vanish, the zero on the r.h.s. of the second equation is replaced by $\mathbf J$. But notice how similar these equations are; they differ by a Hodge star. Thus a solution to the simpler first-order (non-linear) equation $$ {*F} = \pm F $$ is automatically also a solution of the Yang–Mills equation.
Such solutions usually exist, although their precise character depends on the dimension and topology of the base space M, the principal bundle P, and the gauge group G. In nonabelian Yang–Mills theories, DF=0 and D*F=0, where D is the exterior covariant derivative. Furthermore, the Bianchi identity DF=dF+A\wedge F-F\wedge A=d(dA+A\wedge A)+A\wedge (dA+A\wedge A)-(dA + A\wedge A)\wedge A=0 is satisfied. In quantum field theory, an instanton is a topologically nontrivial field configuration in four-dimensional Euclidean space (considered as the Wick rotation of Minkowski spacetime). Specifically, it refers to a Yang–Mills gauge field A which approaches pure gauge at spatial infinity. This means the field strength vanishes at infinity. The name instanton derives from the fact that these fields are localized in space and (Euclidean) time – in other words, at a specific instant. The case of instantons in two-dimensional space may be easier to visualise because it admits the simplest case of the gauge group, namely U(1), an abelian group. In this case the field A can be visualised as simply a vector field. An instanton is a configuration where, for example, the arrows point away from a central point (i.e., a "hedgehog" state). In Euclidean four dimensions, \mathbb{R}^4, abelian instantons are impossible. The field configuration of an instanton is very different from that of the vacuum. Because of this, instantons cannot be studied by using Feynman diagrams, which only include perturbative effects. Instantons are fundamentally non-perturbative. The Yang–Mills energy is given by \frac{1}{2}\int_{\mathbb{R}^4} \operatorname{Tr}[*\mathbf{F}\wedge \mathbf{F}], where * is the Hodge dual. If we insist that the solutions to the Yang–Mills equations have finite energy, then the curvature of the solution at infinity (taken as a limit) has to be zero. This means that the Chern–Simons invariant can be defined at the 3-space boundary. This is equivalent, via Stokes' theorem, to taking the integral Q=\frac{1}{8\pi^2}\int_{\mathbb{R}^4}\operatorname{Tr}[\mathbf{F}\wedge\mathbf{F}]. This is a homotopy invariant and it tells us which homotopy class the instanton belongs to. Since the integral of a nonnegative integrand is always nonnegative, 0\le\frac{1}{2}\int_{\mathbb{R}^4}\operatorname{Tr}\left[(*\mathbf{F}+e^{i\theta}\mathbf{F})\wedge(\mathbf{F}+e^{-i\theta}*\mathbf{F})\right]=\int_{\mathbb{R}^4}\operatorname{Tr}[*\mathbf{F}\wedge\mathbf{F}+\cos\theta\,\mathbf{F}\wedge\mathbf{F}] for all real θ. So, this means \frac{1}{2}\int_{\mathbb{R}^4}\operatorname{Tr}[*\mathbf{F}\wedge\mathbf{F}]\ \ge\ \frac{1}{2}\left|\int_{\mathbb{R}^4}\operatorname{Tr}[\mathbf{F}\wedge\mathbf{F}]\right| = 4\pi^2 |Q|. If this bound is saturated, then the solution is a BPS state. For such states, either *F = F or *F = −F, depending on the sign of the homotopy invariant. Instanton effects are important in understanding the formation of condensates in the vacuum of quantum chromodynamics (QCD) and in explaining the mass of the so-called 'eta-prime particle', a Goldstone boson[note 4] which has acquired mass through the axial current anomaly of QCD. Note that there is sometimes also a corresponding soliton in a theory with one additional space dimension. Recent research on instantons links them to topics such as D-branes and black holes and, of course, the vacuum structure of QCD. For example, in oriented string theories, a Dp-brane is a gauge theory instanton in the world-volume (p + 5)-dimensional U(N) gauge theory on a stack of N D(p + 4)-branes.

Various numbers of dimensions

Instantons play a central role in the nonperturbative dynamics of gauge theories. The kind of physical excitation that yields an instanton depends on the number of dimensions of the spacetime, but, surprisingly, the formalism for dealing with these instantons is relatively dimension-independent.
In 4-dimensional gauge theories, as described in the previous section, instantons are gauge bundles with a nontrivial four-form characteristic class. If the gauge symmetry is a unitary group or special unitary group then this characteristic class is the second Chern class, which vanishes in the case of the gauge group U(1). If the gauge symmetry is an orthogonal group then this class is the first Pontryagin class. In 3-dimensional gauge theories with Higgs fields, 't Hooft–Polyakov monopoles play the role of instantons. In his 1977 paper Quark Confinement and Topology of Gauge Groups, Alexander Polyakov demonstrated that instanton effects in 3-dimensional QED coupled to a scalar field lead to a mass for the photon. In 2-dimensional abelian gauge theories worldsheet instantons are magnetic vortices. They are responsible for many nonperturbative effects in string theory, playing a central role in mirror symmetry. In 1-dimensional quantum mechanics, instantons describe tunneling, which is invisible in perturbation theory.

4d supersymmetric gauge theories

Supersymmetric gauge theories often obey nonrenormalization theorems, which restrict the kinds of quantum corrections which are allowed. Many of these theorems only apply to corrections calculable in perturbation theory and so instantons, which are not seen in perturbation theory, provide the only corrections to these quantities. Field theoretic techniques for instanton calculations in supersymmetric theories were extensively studied in the 1980s by multiple authors. Because supersymmetry guarantees the cancellation of fermionic vs. bosonic non-zero modes in the instanton background, the involved 't Hooft computation of the instanton saddle point reduces to an integration over zero modes. In N = 1 supersymmetric gauge theories instantons can modify the superpotential, sometimes lifting all of the vacua. In 1984 Ian Affleck, Michael Dine and Nathan Seiberg calculated the instanton corrections to the superpotential in their paper Dynamical Supersymmetry Breaking in Supersymmetric QCD. More precisely, they were only able to perform the calculation when the theory contains one flavor of chiral matter fewer than the number of colors in the special unitary gauge group, because in the presence of fewer flavors an unbroken nonabelian gauge symmetry leads to an infrared divergence and in the case of more flavors the contribution is equal to zero. For this special choice of chiral matter, the vacuum expectation values of the matter scalar fields can be chosen to completely break the gauge symmetry at weak coupling, allowing a reliable semi-classical saddle point calculation to proceed. By then considering perturbations by various mass terms they were able to calculate the superpotential in the presence of arbitrary numbers of colors and flavors, valid even when the theory is no longer weakly coupled. In N = 2 supersymmetric gauge theories the superpotential receives no quantum corrections. However the correction to the metric of the moduli space of vacua from instantons was calculated in a series of papers. First, the one-instanton correction was calculated by Nathan Seiberg in Supersymmetry and Nonperturbative beta Functions. The full set of corrections for SU(2) Yang–Mills theory was calculated by Nathan Seiberg and Edward Witten in Electric–magnetic duality, monopole condensation, and confinement in N=2 supersymmetric Yang–Mills theory, in the process creating a subject that is today known as Seiberg–Witten theory.
They extended their calculation to SU(2) gauge theories with fundamental matter in Monopoles, duality and chiral symmetry breaking in N=2 supersymmetric QCD. These results were later extended for various gauge groups and matter contents, and the direct gauge theory derivation was also obtained in most cases. For gauge theories with gauge group U(N) the Seiberg–Witten geometry has been derived from gauge theory using Nekrasov partition functions in 2003 by Nikita Nekrasov and Andrei Okounkov and independently by Hiraku Nakajima and Kota Yoshioka. In N = 4 supersymmetric gauge theories the instantons do not lead to quantum corrections for the metric on the moduli space of vacua.

References and notes

1. ^ Equations of motion are grouped under three main types of motion: translations, rotations, oscillations (or any combinations of these).
3. ^ See also: Non-abelian gauge theory.
4. ^ See also: Pseudo-Goldstone boson.

1. ^ Instantons in Gauge Theories, edited by Mikhail A. Shifman. World Scientific, 1994.
2. ^ Hrachya Nersisyan, Christian Toepffer, and Günter Zwicknagel, Interactions Between Charged Particles in a Magnetic Field. Springer, 2007, p. 23.
3. ^ J.C. Le Guillou and J. Zinn-Justin (eds.), Large-Order Behaviour of Perturbation Theory. Elsevier, 2012, p. 170.

Further reading
• Instantons in Gauge Theories, a compilation of articles on instantons, edited by Mikhail A. Shifman.
• R. Rajaraman, Solitons and Instantons. North Holland, Amsterdam, 1987, ISBN 0-444-87047-4.
• Sidney Coleman, "The Uses of Instantons", in Proc. Int. School of Subnuclear Physics (Erice, 1977); also in Aspects of Symmetry, p. 265, Cambridge University Press, 1985, ISBN 0-521-31827-0; and in Instantons in Gauge Theories.
• M. Dunajski, Solitons, Instantons and Twistors. Oxford University Press, ISBN 978-0-19-857063-9.
• S.K. Donaldson and P.B. Kronheimer, The Geometry of Four-Manifolds. Oxford University Press, 1990, ISBN 0-19-853553-8.
Semiclassical theory of helium atom

From Scholarpedia: Gregor Tanner and Klaus Richter (2013), Scholarpedia, 8(4):9818. doi:10.4249/scholarpedia.9818

In memory of our teacher and friend, Dieter Wintgen, who died on the 16th August 1994 at the age of 37 years on the descent from the Weisshorn (4505 m).

Semiclassical theory of helium atom refers to a description of the quantum spectrum of helium in terms of the underlying classical dynamics of the strongly chaotic three-body Coulomb system formed by the nucleus and the two electrons.

Helium and its role for the development of quantum mechanics

Helium: an atomic three-body problem

The semiclassical theory of the helium atom (or other two-electron atoms) follows the idea of computing and understanding the quantum energy levels starting from trajectories of the underlying classical system. In helium, the classical dynamics is given by the pair of interacting electrons moving in the field of the (heavy) nucleus. Two-electron atoms represent a paradigmatic system for the successful application of concepts of quantum chaos theory and in particular the Gutzwiller trace formula.

Figure 1: The helium atom composed of two electrons and a nucleus of charge Z=2 (from Tanner et al. 2000)

Helium, as the prototype of a two-electron atom, is composed of the nucleus with charge Z=2 and two electrons, see Figure 1. The interplay between the attractive Coulomb interaction between the nucleus and the electrons and the Coulomb repulsion between the electrons gives rise to exceedingly complicated spectral features, despite the seemingly simple form of the underlying quantum Hamiltonian. Correspondingly, orbits of the two interacting electrons, when considered as classical particles, are predominantly characterized by chaotic dynamics and cannot be calculated analytically. Hence, helium as a microscopic three-body Coulomb system has much in common with its celestial analogue, the gravitational three-body problem.

The failure of the "old quantum theory"

Modern semiclassical theory of the helium atom has its roots in the early days of quantum theory: The observation that atomic spectra consist of discrete lines called for a then novel theoretical approach, a quantum theory for atoms. Bohr's early attempts were formulated in terms of quantum postulates and successfully reproduced the energy levels of hydrogen by requiring periodic (elliptic) Kepler electron motion with quantized radii and momenta p, \[\tag{1} \oint p \, dq = n h \] (where n is an integer and h Planck's constant).

Figure 2: Periodic orbit configurations of the helium electron pair that served as quasi-classical models for the ground state (from Tanner et al. 2000)

It was natural to try this approach also for helium, the simplest atom with more than one electron. By applying Bohr's ad hoc quantization rule (1) to various periodic orbit configurations of the electron pair motion in helium (see Figure 2), a number of leading physicists of that time, including Bohr, Born, Kramers, Landé, Sommerfeld and van Vleck, tried to compute the ground state energy of helium. However, without success: all models gave unsatisfactory results.

Figure 3: Heisenberg's proposal for Kepler-type electron pair motion in helium (from Tanner et al. 2000)
Heisenberg, then a student of Sommerfeld, devised a different trajectory configuration with the electrons moving on perturbed Kepler ellipses on different sides of the helium nucleus; Figure 3 shows Heisenberg's sketch of this configuration, sent in a letter to Sommerfeld in 1922. Assuming half-integer quantum numbers in his letter, Heisenberg arrived at a helium ionization potential of 24.6 V, very close to the observed value of 24.5 V. However, discouraged by Bohr, who did not accept such half-integer orbital quantum numbers, Heisenberg never published his results. Though the good agreement must be considered accidental, the Heisenberg model came closest to an adequate semiclassical description of the helium ground state. Modern semiclassical theory reveals that the association of energy levels with individual periodic orbits in the old quantum theory was too simple-minded. Indeed, for chaotic systems such as the three-body problem of helium, it is the entirety of all periodic orbits which conspire to form the energy levels, as beautifully expressed in Gutzwiller's trace formula. For a comprehensive account of the developments of the semiclassical theory for helium up to the year 2000, see Tanner et al. 2000. The problems and failure of (most of) the attempts to quantize the electron pair motion in helium marked the end of the "old quantum theory", which was subsequently replaced by the "new quantum theory": quantum (wave) mechanics, which has proven very successful to this day.

Spectral properties and quantum-mechanical concepts

By now considerable parts of the rich energy spectrum of the helium atom have been computed quantum mechanically by numerically solving the Schrödinger equation for the two-electron Hamiltonian of helium. To that end, besides the orbital dynamics, the spin degree of freedom of the two electrons has to be considered. The electron spins can be paired antiparallel or parallel, leading to the distinction of singlet states (total spin $S = 0$) and triplet states ($S=1$), often referred to as parahelium and orthohelium, respectively. Figure 4 depicts, as a representative case, the level diagram of parahelium. The helium states and energy levels can be classified as follows: (i) the ground state and bound singly excited states, (ii) doubly excited resonant states, and (iii) unbound continuum states at energies above the two-particle fragmentation threshold that are not considered here. States of category (i) are composed of one electron in a hydrogen-type ground state with quantum number $N=1$ and the second electron being excited, with energy levels (labeled by $n=1,2,3...$) forming a Rydberg series (see Figure 4) converging to the first ionization threshold at an energy of $-Z^2/2$ (in atomic units). In energy region (ii) the doubly excited states have a finite lifetime; they can decay, owing to the mutual repulsive interaction between the electrons, by autoionization, where one electron leaves the system while the second one remains bound to the nucleus. These doubly excited states are organized in doubly infinite level sequences with quantum numbers N and n. As visible in Figure 4, they apparently form individual Rydberg series labeled by the index $N$, the hydrogen-like principal quantum number of the energetically lower electron. However, closer inspection of the energy region approaching complete fragmentation (i.e. the border to regime (iii)) shows that neighboring Rydberg series perturb each other more and more.
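As a rough orientation for the level structure just described, a standard independent-particle estimate (an approximation added here for concreteness, not taken from the article) treats the inner electron as hydrogen-like with charge Z and the outer electron as moving in the screened charge Z - 1, giving singly excited levels

\[
E_{N=1,n} \approx -\frac{Z^2}{2} - \frac{(Z-1)^2}{2n^2} \quad \text{(a.u.)},
\]

which for helium (Z = 2) yields a Rydberg series converging to the first ionization threshold \(-Z^2/2 = -2\) a.u., as stated above.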
Figure 4: Helium energy level diagram (from Tanner et al. 2000)

With further increasing energy, these states eventually form a rather dense set of energy levels with seemingly irregular spacings, and the specification of the two-electron states in terms of the quantum numbers (N,n) loses its meaning at such high excitations: At these energies electron-electron interaction gets increasingly important, and hence the concept of quantum numbers (N,n) labeling independent electron states breaks down. The labels (N,n) can be partly replaced by new, though approximate, quantum numbers representing the collective dynamics of the electron pair. However, due to the non-integrability of the three-body Coulomb problem, a clear-cut classification is no longer possible (see Tanner et al. 2000). The increasing complexity of the energy spectrum close to the helium double ionisation threshold can be experimentally revealed in photo-ionisation measurements. The single photo-ionisation cross section is proportional to the probability of ionising a helium atom by a photon at a given frequency \(\omega\ .\) It can be compared directly to experimental data measuring the electron flux obtained from shining a laser (at sufficiently weak intensity to avoid effects due to multi-photon ionisation) onto a helium target; a typical photoionisation signal for highly doubly excited helium states is shown in Figure 5 (Jiang et al. 2008), exhibiting irregular sequences of peaks from overlapping resonances.

Figure 5: Total photoionisation cross section of helium; Ix refers to the ionisation threshold for the xth Rydberg series (from Jiang et al. 2008).

The helium atom - a semiclassical approach

The three-body Coulomb system helium is one of the most complex systems which has been treated fully semiclassically using Gutzwiller's trace formula (Wintgen et al. 1992). The challenge is to describe quantum spectra or photoionisation cross sections of this few-particle system in terms of classical trajectories of the nucleus and the two electrons alone. It turns out that the structure of the spectrum is closely linked to features of the underlying classical few-body dynamics such as invariant subspaces in phase space, chaotic or nearly integrable behaviour and the influence of collision events. The bound and resonance spectrum as depicted in Figure 4 is linked via Gutzwiller's trace formula to the set of all periodic orbits of the system. Furthermore, it can be shown that photoionisation or absorption spectra in atoms are related to a set of returning trajectories, that is, trajectories which start and end at the origin (Du et al. 1988). Note that these orbits are in general only closed in position space and thus not periodic. Interestingly, in helium these are triple-collision orbits, that is, orbits for which both electrons hit the nucleus simultaneously. A good knowledge of the phase space dynamics is necessary to classify and determine these sets of trajectories.

Classical dynamics

The classical three-body system can be reduced to four degrees of freedom (dof) after eliminating the centre of mass motion and incorporating the conservation of the total angular momentum. As the nucleus is about 1800 times heavier than an electron, one can work in the infinite nucleus mass approximation without losing any essential features.
After rescaling and making all quantities dimensionless, one can write the classical Hamiltonian in the form \[\tag{2} H = \frac{{\mathbf p}_1^2}{2} + \frac{{\mathbf p}_2^2}{2} - \frac{Z}{r_{1}} - \frac{Z}{r_{2}} + \frac{1}{r_{12}} = \left\{\begin{array}{rcl} +1 & : & E > 0 \\ 0 & : & E = 0 \\ -1 & : & E < 0 \end{array} \right. \] with nucleus charge \(Z = 2\) for helium (Richter et al. 1993). The configuration space of Eq. (2) has 6 dof; the dynamics for fixed angular momentum takes place in 4 dof. The \(H = +1\) regime corresponds to the region of positive energy where double ionisation is possible. There exist no periodic orbits of the electron pair, and one does not find quantum resonance states in this energy regime, see Figure 4. It is the classical dynamics for negative energies, that is, H = -1, which shows complex behaviour, chaos and unstable periodic orbits, and which is linked to the bound and resonance spectrum of helium in Figure 4. Only one electron can escape classically in this energy regime, and it will do so for most initial conditions.

Symmetries and invariant subspaces

Figure 6: The collinear eZe configuration

The equations of motion derived from the Hamiltonian (2) are invariant under the transformation \(({\mathbf r}_1, {\mathbf r}_2) \rightarrow (-{\mathbf r}_1, -{\mathbf r}_2)\), as well as \(({\mathbf r}_1, {\mathbf r}_2) \rightarrow ({\mathbf r}_2, {\mathbf r}_1)\ .\) The symmetries give rise to invariant subspaces in the full phase space. Trajectories which start in such a subspace will remain there for all times, thus reducing the relevant degrees of freedom of the dynamics. Invariant subspaces are thus an extremely useful tool to study classical dynamics in a high dimensional phase space. The subspace most important for a semiclassical treatment is the collinear eZe space where the electrons move along a common axis at different sides of the nucleus, see Figure 6. The dynamics in this space describes the spectrum near the ground state as well as some of the Rydberg series in the energy spectrum, Figure 4. Furthermore, the photoionisation spectrum is dominated by the collinear dynamics. Heisenberg's early success is indeed related to the similarity of his 'periodic orbit' in Figure 3 with the shortest periodic orbit in the eZe space. We will discuss the most important properties of the dynamics in this subspace in more detail below. Other subspaces are, for example, the collinear dynamics of both electrons on the same side of the nucleus, giving rise to 'frozen planet states' (Richter et al. 1992), and the so-called Wannier ridge space with \(r_1 = r_2;\ p_1 = p_2 \), which is, however, unstable with respect to perturbations away from the subspace and thus less relevant for the spectrum. It plays an important role as a gateway for ionisation processes, see Lee et al. 2005, Byun et al. 2007. For a more detailed description of the dynamics in other invariant subspaces, see Tanner et al. 2000 and references therein.

Figure 7: a) A typical orbit in the eZe space; b) trajectory in the Poincaré surface of section \( r_2 = 0 \) (from Tanner et al. 2000)

Symbolic dynamics in the eZe collinear space

The dynamics in the eZe collinear space turns out to be fully chaotic with a binary symbolic dynamics. The two degrees of freedom are the distances \( r_i, i=1,2 \) of electron \( i \) from the nucleus - a typical trajectory is shown in Figure 7.
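A minimal numerical sketch of these collinear eZe equations of motion (Hamiltonian (2) restricted to the subspace, where the electron-electron distance is r12 = r1 + r2; the softening parameter and the initial condition are illustrative assumptions, and a serious calculation would instead regularise the binary collisions as discussed below):

```python
import numpy as np
from scipy.integrate import solve_ivp

Z, eps = 2.0, 1e-8   # eps: purely numerical softening of the binary collisions

def rhs(t, y):
    # Collinear eZe: electrons on opposite sides of the nucleus, r12 = r1 + r2.
    r1, r2, p1, p2 = y
    f12 = 1.0 / ((r1 + r2)**2 + eps)
    return [p1, p2,
            -Z / (r1**2 + eps) + f12,
            -Z / (r2**2 + eps) + f12]

# Initial condition on the E = -1 shell: choose r1, r2, p2 and solve H = -1 for p1.
r1, r2, p2 = 3.0, 1.0, 0.0
p1 = np.sqrt(2.0 * (-1.0 + Z / r1 + Z / r2 - 1.0 / (r1 + r2)))

sol = solve_ivp(rhs, (0.0, 30.0), [r1, r2, p1, p2],
                rtol=1e-10, atol=1e-12, max_step=1e-2)
# Plotting sol.y[0] versus sol.y[1] reproduces the qualitative picture of
# Figure 7a: chaotic motion punctuated by near-collisions with the axes.
```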
Note that the axis \(r_i = 0\) corresponds to binary collisions, that is, the electron "i" collides with the nucleus - see also the next section for a discussion of collision events. One electron can escape (ionise) to infinity, leaving the other electron in a regular Kepler ellipse around the nucleus. Interestingly, escape can only occur after both electrons come close to the nucleus simultaneously, to allow for momentum transfer between the light particles. The triple collision (discussed below) thus serves as the gateway to electron ionisation. The dynamics is nearly regular, having a small but positive Lyapunov exponent, if the electrons are far apart (that is, \(r_1 \gg r_2\) or vice versa), see the Poincaré surface of section in Figure 7b). The symbolic dynamics for the chaotic eZe configuration maps each trajectory one-to-one onto a binary symbol string. The symbols are defined through binary collisions, that is,
• 1 if a trajectory crosses the line \(r_1=r_2\) between two collisions with the nucleus (i.e. \(r_1 = 0\) or \(r_2 = 0\));
• 0 otherwise.

Figure 8: Representative periodic orbits of the helium electron pair in the eZe space (from Wintgen et al. 1992)

Note that the symbolic dynamics is closely related to the triple collision, that is, the boundaries of the partition are given by trajectories starting in or ending at the singular point \(r_1 = r_2 = 0\) (triple collision manifolds). The symbolic dynamics fully describes the topological properties of the phase space; periodic orbits, for example, can be characterised by a periodic symbol string \(\overline{a} = \ldots aaaa\ldots\) where \(a\) is a finite binary symbol string. There are infinitely many periodic orbits and they are all unstable with respect to the dynamics "in" the collinear plane. Some examples are shown in Figure 8. The number of periodic orbits increases exponentially with the code length and thus with the period of the orbits. The 'asymmetric stretch' orbit \(\overline{1}\) is the shortest orbit in this subspace. The asymptotic periodic orbit \(r_1 \equiv\infty\ ,\) \(p_1\equiv 0\) corresponds to the symbol \(\overline{0}\) in the binary code.

Collisions, regularisation and the triple collision

Collisions are an important feature of few-body dynamics as described above. There is, in particular, a fundamental difference between two-body (or binary) collisions and many-body collisions where more than two particles collide simultaneously. Binary collisions can be regularised, that is, the dynamics can be continued through the singularity after a suitable transformation of the time and space variables. A popular regularisation scheme is the Kustaanheimo-Stiefel transformation, which preserves the Hamiltonian structure of the equations. Binary collisions do not add instability to the classical dynamics. This is in contrast to triple collisions where both electrons hit the nucleus simultaneously. The triple collision is a non-regularisable singularity, that is, there is no unique way to determine the fate of a trajectory after it has encountered a triple collision. The manifold of all orbits coming out of or going into a triple collision - the so-called triple collision manifold (Waldvogel 2002) - plays an important role in partitioning the full phase space and provides the symbolic dynamics in the eZe space. Triple collision orbits always move along the so-called Wannier orbit \( r_1 = r_2 \) when encountering the singularity.
The triple collision singularity thus acts as an infinitely unstable fixed point; a closer analysis shows that the singularity itself has a non-trivial structure and topology which can be illuminated using McGehee transformation techniques. For a discussion of the Kustaanheimo-Stiefel and McGehee transformations in the context of three-body Coulomb problems, see Richter et al. 1993 and Lee et al. 2005, respectively.

Semiclassical periodic orbit quantisation

Figure 9: The Fourier transformed part of the spectrum associated with the eZe space (here denoted \( K_{max} \)) - the binary code (+,-) refers to the code (0,1) introduced above (from Qiu et al. 1996)

The Gutzwiller trace formula marked a milestone in the development of semiclassical theories. It relates the spectrum of a quantum system to the set of all periodic orbits of the corresponding classical system in terms of a Fourier-type relation where the eigenenergies and the actions of the classical periodic orbits act as Fourier pairs. The classical dynamics of the eZe collinear configuration can be used for a quantisation of an important part of the helium spectrum due to a 'lucky' coincidence: It turns out that the electron motion in the vicinity of the collinear space is stable in all degrees of freedom perpendicular to the eZe space. The electrons carry out a regular bending-type vibration while performing chaotic motion in the collinear degrees of freedom. This makes it possible to use the periodic orbits of the eZe configuration for a semiclassical description of parts of the spectrum for angular momentum L=0, including the ground state. The existence of this connection can be shown by Fourier methods. By inverting the Gutzwiller trace formula using Fourier transformation, one obtains an action spectrum related to the full quantum energy spectrum, as shown in Figure 9 (Qiu et al. 1996). The energy scaling relation for the classical actions \[\tag{3} S_{po} = \frac{1}{\sqrt{|E|}} \tilde{S}_{po}, \] has been used here, where \( \tilde{S}_{po} \) is the action of a periodic orbit (po) at fixed energy \( E=-1\), see (2). The quantum spectrum used in Figure 9 has been obtained from full 3D numerical calculations (Bürgers et al. 1995) and semi-empirical formulas based on approximate quantum numbers. (For more details on approximate quantum numbers, see Tanner et al. 2000).

Figure 10: Quantum eigenvalues obtained from cycle expansion techniques using periodic orbits up to length j; the exact quantum results are given in the last column (in atomic units), from Wintgen et al. (1992).

Each of the peaks in Figure 9 can be identified with a periodic orbit of the classical two-electron dynamics; furthermore, all these periodic orbits lie in the eZe space, confirming the statement that large parts of the quantum spectrum are determined by this invariant lower dimensional subspace - a truly amazing result. At last the periodic orbits Niels Bohr was looking for have been found, and they are quite close to the solution proposed by Heisenberg which he himself did not dare to publish! For a full-blown semiclassical quantisation, one needs information about as many periodic orbits as possible - these can be obtained systematically using the symbolic dynamics in the eZe space. The most extensive semiclassical calculations so far made use of all periodic orbits up to length 16 (\( 2^{16} = 65536\) orbits) together with cycle expansion techniques to obtain energies as listed in Figure 10 (Wintgen et al. 1992).
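To make the Fourier-transform idea behind Figure 9 concrete, here is a toy sketch with synthetic data (the single orbit and its scaled action are illustrative assumptions, not the real helium spectrum): an orbit of scaled action S0 quantised via S0/sqrt(|E_n|) ≈ 2πn produces a level sequence whose density, analysed in the variable w = 1/sqrt(|E|) suggested by Eq. (3), peaks at S0:

```python
import numpy as np

S0 = 1.83                      # illustrative scaled action of a single orbit
n = np.arange(1, 200)
w = 2.0 * np.pi * n / S0       # w_n = 1/sqrt(|E_n|), i.e. E_n = -(S0/(2*pi*n))**2

# |sum_n exp(i*S*w_n)| as a function of the conjugate action variable S:
S_grid = np.linspace(0.5, 2.5, 4001)
amp = np.abs(np.exp(1j * np.outer(S_grid, w)).sum(axis=1))
print(S_grid[np.argmax(amp)])  # ~ 1.83: the orbit's scaled action reappears
```

In the real calculation the peaks of Figure 9 arise in the same way, with one peak per periodic orbit of the eZe dynamics.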
Pushing the semiclassical calculation to even higher energies is hampered by the exponential increase of the number of periodic orbits with increasing (symbol) length in chaotic systems - a general obstacle for semiclassical quantisation techniques.

Photoionisation cross sections

Information about atomic spectra is often experimentally obtained through measurements of photo excitation or ionisation, see Figure 5 for helium. An expression for the photo-ionisation cross section can be written in terms of the retarded Green function G(E) of the full three-particle problem, that is, \[\tag{4} \sigma(E) = -\frac{4 \pi}{c} \, \omega \, \Im \langle D \phi_{0}| G(E) |D \phi_{0}\rangle \] where c is the speed of light, \(\phi_{0} \) is the initial state wave function and \( D = {\mathbf \Pi} \cdot ({\mathbf r}_1 + {\mathbf r}_2)\) is the dipole operator with \(\mathbf \Pi\) the polarization of the incoming photon. Using again Gutzwiller's expression for the Green function in terms of classical trajectories, one can relate the cross section to classical trajectories of the three-body dynamics. Semiclassical methods are particularly useful when considering the cross section in the limit \( E \to 0\ ,\) that is, at the double ionisation threshold. Especially the regime just below the threshold, with \( E<0 \), is accessible neither to experiments nor to fully numerical calculations, due to the large density of resonances. Using a semiclassical closed orbit theory together with a semiclassical treatment of triple collision orbits, one can make detailed predictions here; in particular, the cross section can be written in the form (Byun et al. 2007, Lee et al. 2010) \[\tag{5} \sigma(E) \approx \sigma_0 + \frac{8 \pi^2 \omega}{c}\; |E|^\mu \; \Re \left[2 \pi i\sum_{{\rm CTCO}_\gamma} a_\gamma e^{i \tilde{S}_\gamma/\sqrt{E} - i \pi \nu_\gamma/2}\right] \, , \] where \( \sigma_0 \) gives a smooth background contribution and the sum is taken over all closed triple collision orbits (CTCO), that is, trajectories which start and end in the triple collision. It can be shown that CTCOs are part of the eZe subspace. Furthermore, \(\tilde S\) is the classical action at energy \(E = -1\) as given in (3), and \(a_\gamma\) is an energy independent coefficient related to the stability of a given CTCO away from the triple collision. Most remarkable is the energy scaling due to the exponent \( \mu \) (for details see Lee et al. 2010), \[\tag{6} \mu= \mu_{eZe} + 2 \mu_{wr} = \frac{1}{4}\left[\sqrt{\frac{100 Z-9}{4Z-1}} + 2\sqrt{\frac{4 Z - 9}{4Z -1}}\right], \] which can be obtained through a stability analysis of the triple collision itself. Here, \(wr\) refers to a contribution from the so-called Wannier ridge dynamics, an invariant subspace of the full dynamics where the two electrons are always at the same distance from the nucleus. The exponents are related to Siegel exponents (see Waldvogel 2002) or Wannier exponents (Wannier 1953). The energy scaling describes the decay of the fluctuations in the photoionisation cross section towards the threshold, as can be seen in Figure 5.

Figure 11: Fourier transform of cross section data; the peaks can be related to the CTCOs depicted in the insets (from Byun et al. 2007).

The CTCOs can in fact be seen in cross section data using a Fourier transformation of Eqn. (5). The data shown in Figure 11 are obtained from a 1D eZe cross section calculation (Byun et al. 2007) and show a nice one-to-one correspondence between peaks and triple collision trajectories.
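Plugging Z = 2 into Eq. (6) is simple arithmetic on the formula as displayed (the interpretation in terms of oscillations follows Lee et al. 2010):

\[
\mu_{eZe} = \frac{1}{4}\sqrt{\frac{191}{7}} \approx 1.306, \qquad
2\mu_{wr} = \frac{1}{2}\sqrt{-\frac{1}{7}} \approx 0.189\,i,
\]

so μ is complex for helium: \(|E|^\mu\) combines a power-law decay \(|E|^{1.306}\) of the fluctuations with oscillations that are periodic in \(\ln|E|\).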
Experimental and numerical studies confirm that the dominant contribution to the cross section signal is given by the collinear eZe dynamics (Jiang et al. 2008), as predicted by the semiclassical analysis.

Recent developments and open questions

Exploring the full phase space - approximate symmetries and global structures

Helium has provided a prime example where experimental and numerical results of the quantum 3-body problem give clear hints about interesting structures in the phase space of the classical dynamics. However, the story is not finished yet - at the time of writing (2013), large areas of the full 7-dimensional classical phase space are unexplored, and the connection of approximate quantum numbers (Herrick's quantum numbers - see Lee et al. 2005, Sano 2010) to these phase space structures is still unclear. This also opens up interesting links to celestial mechanics and triple collision encounters in three-body gravitational problems, as discussed at the workshops on Few Body Dynamics in Atoms, Molecules and Planetary Systems in Dresden in 2010 and Celestial, Molecular, and Atomic Dynamics (CEMAD) in Victoria in 2013.

Highly doubly excited states - recent advances

The world record of experimentally accessing and numerically calculating highly doubly excited states in helium is currently held (in 2013) by Jiang et al. 2008 for total cross sections, reaching helium resonances up to the ionisation threshold N=17, and by Czasch et al. 2005 for partial cross sections, reaching N=13. Going even higher in the spectrum or considering helium under electromagnetic driving (Madronero et al. 2008) is a formidable challenge asking for new numerical techniques to deal with the large basis sets necessary and experimental techniques to reach the resolutions required. Unusually for atomic physicists, the rewards may lie in looking at the Fourier transforms of their data.

Double ionisation of helium for strong laser fields and ultra-short pulses - probing correlated electron-electron dynamics

Studying double ionisation (DI) of helium by looking at the classical dynamics of the two electrons as they escape from the nucleus has a long history: Already in 1953, Wannier predicted an unexpected energy scaling of the DI cross section near the threshold, governed by exponents similar to those found in Eq. (6). Interesting recent effects being considered are electron-electron correlation effects in strong laser fields and in attosecond pulses. In the strong field case, rescattering can lead to a large contribution to the DI cross section from ionisation events where both electrons escape from the nucleus along the same direction (Prauzner-Bechcicki et al. 2007). Two-photon DI in ultra-short pulses, on the other hand, shows a preference for back-to-back electron escape due to electron-electron repulsion (Feist et al. 2009). These and many other scenarios can be studied using classical electron dynamics.

Semiclassics for many-body problems

While helium represents a prime example for the success of semiclassics for an interacting few-body system, generalizations to other many-body problems remain a future challenge.

References

• A Bürgers, D Wintgen, and J-M Rost, Highly doubly excited S states of the helium atom, J Phys B 28:3163 (1995).
• C W Byun, N N Choi, M-H Lee, and G Tanner, Scaling Laws for the Photoionization Cross Section of Two-Electron Atoms, Phys Rev Lett 98:113001 (2007).
• A Czasch et al., Partial Photoionization Cross Sections and Angular Distributions for Double Excitation of Helium up to the N=13 Threshold, Phys Rev Lett 95:243003 (2005).
• M L Du and J B Delos, Effect of closed classical orbits on quantum spectra: Ionization of atoms in a magnetic field. I. Physical picture and calculations, Phys Rev A 38:1896 (1988).
• J Feist et al., Probing Electron Correlation via Attosecond xuv Pulses in the Two-Photon Double Ionization of Helium, Phys Rev Lett 103:063002 (2009).
• Y H Jiang, R Püttner, D Delande, and G Kaindl, Explicit analysis of chaotic behavior in radial and angular motion in doubly excited helium, Phys Rev A 78:021401(R) (2008).
• M-H Lee, G Tanner, and N N Choi, Classical dynamics in two-electron atoms near the triple collision, Phys Rev E 71:056208 (2005).
• M-H Lee, N N Choi, and G Tanner, Classical dynamics of two-electron atoms at zero energy, Phys Rev E 72:066215 (2005).
• M-H Lee, C W Byun, N N Choi, and G Tanner, Photoionization of two-electron atoms via highly doubly excited states: Numerical and semiclassical results, Phys Rev A 81:043419 (2010).
• J Madronero and A Buchleitner, Ab initio quantum approach to planar helium under periodic driving, Phys Rev A 77:053402 (2008).
• J S Prauzner-Bechcicki, K Sacha, B Eckhardt, and J Zakrzewski, Time-Resolved Quantum Dynamics of Double Ionization in Strong Laser Fields, Phys Rev Lett 98:203002 (2007).
• Y Qiu, J Müller, and J Burgdörfer, Periodic-orbit spectra of hydrogen and helium, Phys Rev A 54:1922 (1996).
• K Richter, J S Briggs, D Wintgen, and E A Solov'ev, J Phys B 25:3929 (1992).
• K Richter, G Tanner, and D Wintgen, Classical mechanics of two electron atoms, Phys Rev A 48:4182 (1993).
• M M Sano, Semiclassical Interpretation of Electron Correlation in Helium, J Phys Soc Japan 79:034003 (2010).
• J Waldvogel, Triple Collisions and Close Triple Encounters, in Singularities in Gravitational Systems, Lecture Notes in Physics 590:81 (2002).
• G H Wannier, The Threshold Law for Single Ionization of Atoms or Ions by Electrons, Phys Rev 90:817 (1953).
• D Wintgen, K Richter, and G Tanner, The semiclassical helium atom, CHAOS 2:19 (1992).

Recommended reading

• G Tanner, K Richter, and J-M Rost, The theory of two electron atoms: Between ground state and complete fragmentation, Rev Mod Phys 72:497 (2000).
• P Cvitanović, R Artuso, R Mainieri, G Tanner, G Vattay, N Whelan, and A Wirzba, Chaos: Classical and Quantum (ChaosBook.org).
• M C Gutzwiller, Chaos in Classical and Quantum Mechanics, Springer-Verlag, New York (1990).
Using the definition of the fine-structure constant $\alpha = \frac{e^2}{4 \pi \epsilon_0 \hbar c}$ and the Compton wavelength of an electron $\lambda_c = \frac{h}{m_e c}$, the classical electron radius $r_e$ and the Bohr radius $a_0$ can be expressed like $$r_e = \alpha \frac{\lambda_c}{2\pi}$$ $$a_0 = \frac{1}{\alpha} \frac{\lambda_c}{2\pi} $$ This means e.g. that the classical electron radius can be expressed in terms of the Bohr radius as $r_e = \alpha^2 a_0$. Isn't that peculiar? Why should the classical radius of the electron and the distance of an electron to the nucleus in an atom be related to each other? And why are both multiples of the Compton wavelength?

3 Answers

It is not surprising that both $r_e$ and $a_0$ are multiples of the Compton wavelength: any two positive lengths are multiples of each other. While it is true that there is more to this than that simple statement, the essential fact is that since those three lengths are composed simply and out of the same basic ingredients, there is very little leeway for how they can be different. Let's have a look at these quantities: $$ \lambda_C=\frac{2\pi\hbar}{mc},\,\, r_e=\frac{e^2}{mc^2}\text{ and }a_0=\frac{\hbar^2}{m e^2}, $$ in Gaussian units, where $m$ is the electron mass. These are, respectively, the characteristic length scales of the photon momentum that matches an electron, the electron's rest mass as electrostatic energy, and the basic quantum mechanical electrostatic problem for the electron. Notice that they are all inversely proportional to the electron rest mass, though for different reasons: heavier electrons would require beefier photons to deflect them; they have higher rest mass and would need a more compact spherical charge to match; and a higher $m$ effectively reduces $\hbar$ in the hydrogenic Schrödinger equation, making it harder to get to the quantum regime. Given that, you have three lengths that are determined by the three constants $\hbar$, $c$ and $e$. That's enough constants to make three different lengths, but they are few enough that any quotient must be a function of the unique dimensionless combination of these constants - the fine structure constant, $$\alpha=\frac {e^2} {\hbar c}.$$ Thus, it is necessary that any two of these three lengths must be multiples of the third and of $\alpha^{\pm1}$ (modulo $2\pi$). This constant, however, is particularly important. It is the natural measure of the strength of electromagnetic interactions: it gives, as a pure number, the electromagnetic coupling $e^2$ between two unit charges, in natural relativistic units where $\hbar=c=1$. Thus, while the relations you remark on are algebraically necessary, it doesn't mean they are devoid of physical content:
• $r_e=\alpha \lambda_C/2\pi$ says that for a more strongly interacting QED a more loosely bound spherical electron would suffice to match the rest mass energy.
• $a_0=\frac 1\alpha \lambda_C/2\pi$ says that for a more strongly interacting QED the proton would hold its hydrogenic electron in tighter orbits.
Both of these are indeed most naturally phrased in terms of the Compton wavelength, as it is the characteristic quantum-relativistic length scale of the electron, and does not depend on any particular physical interaction, whereas the other two do - and are therefore obtained from the first via the strength of that interaction.
The three lengths you are considering are all built using only $e$, $m_e$ and the fundamental constants. If you look at the definitions, you can notice that they all have the form $\frac{\text{something}}{m_e}$. It's clear then that you can get one length from the others just by multiplying by some factor built out of $e$ and fundamental constants; since all the quantities are lengths, the factor must be dimensionless: it must be a power of $\alpha$, times some number. The $2\pi$ factors come from the fact that $\lambda_C$ involves $h$ and the others $\hbar$.

Concerning the Compton wavelength $\lambda_c$ and the classical electron radius $r_e$, you can summarize it this way: $$h\nu=m_ec^2=\frac{e^2}{4\pi \varepsilon_0\, r_e}$$ The first equality represents the interaction between a photon and an electron which transfer their energy from one to another (Compton scattering). This gives the Compton wavelength, the typical range of the electron-photon interaction. The second equality represents the energy of the electron matching the potential energy it would experience in the classical electric potential of a point charge (another electron for instance). This gives the classical electron radius, which I would interpret as the typical range of the electron-electron interaction. Thus, the relationship between $\lambda_c$ and $r_e$ is $\alpha$ because it is a measure of the strength of the electrostatic interaction. Now, I am not sure about the Bohr radius part. This is not as direct because it involves the quantization of angular momentum.
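As a quick numerical sanity check of the ratios discussed in this thread (using SciPy's bundled CODATA values; nothing here is specific to any one answer):

```python
import math
from scipy.constants import alpha, physical_constants

lam_C = physical_constants["Compton wavelength"][0]        # h / (m_e c)
r_e = physical_constants["classical electron radius"][0]
a_0 = physical_constants["Bohr radius"][0]

reduced = lam_C / (2.0 * math.pi)                          # hbar / (m_e c)
print(r_e / reduced, alpha)          # both ~ 7.2974e-3
print(a_0 / reduced, 1.0 / alpha)    # both ~ 137.036
print(r_e / a_0, alpha**2)           # both ~ 5.3251e-5
```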
Deductive-nomological model

From Wikipedia, the free encyclopedia

The deductive-nomological model (DN model), also known as Hempel's model, the Hempel–Oppenheim model, or the Popper–Hempel model, is a formal view of scientifically answering questions asking, "Why...?". DN model poses scientific explanation as a deductive structure—that is, one where truth of its premises entails truth of its conclusion—hinged on accurate prediction or postdiction of the phenomenon to be explained. Through problems concerning humans' ability to define, discover, and know causality, DN model's initial formulation omitted it, thought to be incidentally approximated by realistic selection of premises that derive the phenomenon of interest from observed, starting conditions plus general laws. Still, DN model formally permitted causally irrelevant factors. Also, derivability from observations and laws sometimes yielded absurd answers. Upon logical empiricism's 1960s fall, DN model was concluded a flawed or greatly incomplete model of scientific explanation, but remained an idealized version, and rather accurate for modern physics. In the early 1980s, revision to DN model emphasized maximal specificity for relevance of the conditions and axioms stated. Together with Hempel's inductive-statistical model, DN model forms scientific explanation's covering law model, as also termed, from critical angle, subsumption theory. The term deductive distinguishes DN model's intended determinism from the probabilism of inductive inferences.[1] The term nomological follows the Greek word νόμος or nomos, meaning "law".[1] DN model holds to a view of scientific explanation whose conditions of adequacy (CA), semiformal but stated classically, are derivability (CA1), lawlikeness (CA2), empirical content (CA3), and truth (CA4).[2] In DN model, a law axiomatizes an unrestricted generalization from antecedent A to consequent B by the conditional proposition "If A, then B", and has testable empirical content.[3] A law differs from mere true regularity—for instance, George always carries only $1 bills in his wallet—by supporting counterfactual claims and thus suggesting what must be true,[4] while following from a scientific theory's axiomatic structure.[5] The phenomenon to be explained is the explanandum—an event, law, or theory—whereas the premises to explain it are explanans, true or highly confirmed, containing at least one universal law, and entailing the explanandum.[6][7] Thus, given the explanans as initial, specific conditions C1, C2 . . . Cn plus general laws L1, L2 . . .
Ln, the phenomenon E as explanandum is a deductive consequence, thereby scientifically explained.[6] Aristotle's scientific explanation in Physics resembles DN model, an idealized form of scientific explanation.[7] Aristotelian physics' framework—Aristotelian metaphysics—reflected the perspective of Aristotle, principally a biologist, who, amid living entities' undeniable purposiveness, formalized vitalism and teleology, an intrinsic morality in nature.[8] With emergence of Copernicanism, however, Descartes introduced mechanical philosophy, then Newton rigorously posed lawlike explanation, both Descartes and especially Newton shunning teleology within natural philosophy.[9] In 1740, David Hume[10] staked Hume's fork,[11] highlighted the problem of induction,[12] and found humans ignorant of either necessary or sufficient causality.[13][14] Hume also highlighted the fact/value gap, as what is does not itself reveal what ought.[15] Near 1780, countering Hume's ostensibly radical empiricism, Immanuel Kant highlighted extreme rationalism—as by Descartes or Spinoza—and sought middle ground. Inferring the mind to arrange experience of the world into substance, space, and time, Kant placed the mind as part of the causal constellation of experience and thereby found Newton's theory of motion universally true,[16] yet knowledge of things in themselves impossible.[14] Safeguarding science, then, Kant paradoxically stripped it of scientific realism.[14][17][18] Aborting Francis Bacon's inductivist mission to dissolve the veil of appearance to uncover the noumena (the metaphysical view of nature's ultimate truths), Kant's transcendental idealism tasked science with simply modeling patterns of phenomena. Safeguarding metaphysics, too, it found the mind's constants holding also universal moral truths,[19] and launched German idealism, increasingly speculative. Auguste Comte found the problem of induction rather irrelevant since enumerative induction is grounded on the empiricism available, while science's point is not metaphysical truth. Comte found human knowledge had evolved from theological to metaphysical to scientific—the ultimate stage—rejecting both theology and metaphysics as asking questions unanswerable and posing answers unverifiable.
Comte in the 1830s expounded positivism—the first modern philosophy of science and simultaneously a political philosophy[20]—rejecting conjectures about unobservables, thus rejecting search for causes.[21] Positivism predicts observations, confirms the predictions, and states a law, thereupon applied to benefit human society.[22] From late 19th century into the early 20th century, the influence of positivism spanned the globe.[20] Meanwhile, evolutionary theory's natural selection brought the Copernican Revolution into biology and eventuated in the first conceptual alternative to vitalism and teleology.[8] Whereas Comtean positivism posed science as description, logical positivism emerged in the late 1920s and posed science as explanation, perhaps to better unify empirical sciences by covering not only fundamental science—that is, fundamental physics—but special sciences, too, such as biology, psychology, economics, and anthropology.[23] After defeat of National Socialism with World War II's close in 1945, logical positivism shifted to a milder variant, logical empiricism.[24] All variants of the movement, which lasted until 1965, are neopositivism,[25] sharing the quest of verificationism.[26] Neopositivists led emergence of the philosophy subdiscipline philosophy of science, researching such questions and aspects of scientific theory and knowledge.[24] Scientific realism takes scientific theory's statements at face value, thus accorded either falsity or truth—probable or approximate or actual.[17] Neopositivists held scientific antirealism as instrumentalism, holding scientific theory as simply a device to predict observations and their course, while statements on nature's unobservable aspects are, rather, elliptical or metaphorical references to its observable aspects.[27] DN model received its most detailed, influential statement by Carl G Hempel, first in his 1942 article "The function of general laws in history", and more explicitly with Paul Oppenheim in their 1948 article "Studies in the logic of explanation".[28][29] Leading logical empiricist, Hempel embraced the Humean empiricist view that humans observe sequences of sensory events, not cause and effect,[23] as causal relations and causal mechanisms are unobservables.[30] DN model bypasses causality beyond mere constant conjunction: first an event like A, then always an event like B.[23] Hempel held natural laws—empirically confirmed regularities—as satisfactory, and if included realistically to approximate causality.[6] In later articles, Hempel defended DN model and proposed probabilistic explanation by inductive-statistical model (IS model).[6] DN model and IS model—whereby the probability must be high, such as at least 50%[31]—together form covering law model,[6] as named by a critic, William Dray.[32] Derivation of statistical laws from other statistical laws goes to the deductive-statistical model (DS model).[31][33] Georg Henrik von Wright, another critic, named the totality subsumption theory.[34] Amid failure of neopositivism's fundamental tenets,[35] Hempel in 1965 abandoned verificationism, signaling neopositivism's demise.[36] From 1930 onward, Karl Popper had refuted any positivism by asserting falsificationism, which Popper claimed had killed positivism, although, paradoxically, Popper was commonly mistaken for a positivist.[37][38] Even Popper's 1934 book[39] embraces DN model,[7][28] widely accepted as the model of scientific explanation for as long as physics remained the model of science examined by philosophers of
science.[30][40] In the 1940s, filling the vast observational gap between cytology[41] and biochemistry,[42] cell biology arose[43] and established existence of cell organelles besides the nucleus. Launched in the late 1930s, the molecular biology research program cracked a genetic code in the early 1960s and then converged with cell biology as cell and molecular biology, its breakthroughs and discoveries defying DN model by arriving in quest not of lawlike explanation but of causal mechanisms.[30] Biology became a new model of science, while special sciences were no longer thought defective by lacking universal laws, as borne by physics.[40] In 1948, when explicating DN model and stating scientific explanation's semiformal conditions of adequacy, Hempel and Oppenheim acknowledged redundancy of the third, empirical content, implied by the other three—derivability, lawlikeness, and truth.[2] In the early 1980s, upon widespread view that causality ensures the explanans' relevance, Wesley Salmon called for returning cause to because,[44] and along with James Fetzer helped replace CA3 empirical content with CA3' strict maximal specificity.[45] Salmon introduced causal mechanical explanation, never clarifying how it proceeds, yet reviving philosophers' interest in such.[30] Via shortcomings of Hempel's inductive-statistical model (IS model), Salmon introduced statistical-relevance model (SR model).[7] Although DN model remained an idealized form of scientific explanation, especially in applied sciences,[7] most philosophers of science consider DN model flawed by excluding many types of explanations generally accepted as scientific.[33] As theory of knowledge, epistemology differs from ontology, which is a subbranch of metaphysics, theory of reality.[46] Ontology poses which categories of being—what sorts of things exist—and so, although a scientific theory's ontological commitment can be modified in light of experience, an ontological commitment inevitably precedes empirical inquiry.[46] Natural laws, so called, are statements of humans' observations, thus are epistemological—concerning human knowledge—the epistemic. Causal mechanisms and structures existing putatively independently of minds exist, or would exist, in the natural world's structure itself, and thus are ontological, the ontic. Blurring epistemic with ontic—as by incautiously presuming a natural law to refer to a causal mechanism, or to trace structures realistically during unobserved transitions, or to be true regularities always unvarying—tends to generate a category mistake.[47][48] Discarding ontic commitments, including causality per se, DN model permits a theory's laws to be reduced to—that is, subsumed by—a more fundamental theory's laws. The higher theory's laws are explained in DN model by the lower theory's laws.[5][6] Thus, the epistemic success of Newtonian theory's law of universal gravitation is reduced to—thus explained by—Einstein's general theory of relativity, although Einstein's discards Newton's ontic claim that universal gravitation's epistemic success predicting Kepler's laws of planetary motion[49] is through a causal mechanism of a straightly attractive force instantly traversing absolute space despite absolute time. 
Covering law model reflects neopositivism's vision of empirical science, a vision interpreting or presuming unity of science, whereby all empirical sciences are either fundamental science—that is, fundamental physics—or are special sciences, whether astrophysics, chemistry, biology, geology, psychology, economics, and so on.[40][50][51] All special sciences would network via covering law model.[52] And by stating boundary conditions while supplying bridge laws, any special law would reduce to a lower special law, ultimately reducing—theoretically although generally not practically—to fundamental science.[53][54] (Boundary conditions are specified conditions whereby the phenomena of interest occur. Bridge laws translate terms in one science to terms in another science.)[53][54] By DN model, if one asks, "Why is that shadow 20 feet long?", another can answer, "Because that flagpole is 15 feet tall, the Sun is at x angle, and laws of electromagnetism".[6] Yet by problem of symmetry, if one instead asked, "Why is that flagpole 15 feet tall?", another could answer, "Because that shadow is 20 feet long, the Sun is at x angle, and laws of electromagnetism", likewise a deduction from observed conditions and scientific laws, but an answer clearly incorrect.[6] By the problem of irrelevance, if one asks, "Why did that man not get pregnant?", one could in part answer, among the explanans, "Because he took birth control pills"—if he factually took them, and the law of their preventing pregnancy—as covering law model poses no restriction to bar that observation from the explanans. Many philosophers have concluded that causality is integral to scientific explanation.[55] DN model offers a necessary condition of a causal explanation—successful prediction—but not sufficient conditions of causal explanation, as a universal regularity can include spurious relations or simple correlations, for instance Z always following Y, but not Z because of Y, instead Y and then Z as an effect of X.[55] By relating temperature, pressure, and volume of gas within a container, Boyle's law permits prediction of an unknown variable—volume, pressure, or temperature—but does not explain why to expect that unless one adds, perhaps, the kinetic theory of gases.[55][56] Scientific explanations increasingly pose not determinism's universal laws, but probabilism's chance,[57] ceteris paribus laws.[40] Smoking's contribution to lung cancer fails even the inductive-statistical model (IS model), requiring probability over 0.5 (50%).[58] (Probability standardly ranges from 0 (0%) to 1 (100%).) 
As an applied science, epidemiology uses statistics to seek associations between events; it cannot show causality, but it consistently found a higher incidence of lung cancer in smokers versus otherwise similar nonsmokers, although the proportion of smokers who develop lung cancer is modest.[59] Versus nonsmokers, however, smokers as a group showed over 20 times the risk of lung cancer, and in conjunction with basic research, consensus followed that smoking had been scientifically explained as a cause of lung cancer,[60] responsible for some cases that without smoking would not have occurred,[59] a probabilistic counterfactual causality.[61][62]
Covering action
Through lawlike explanation, fundamental physics—often perceived as fundamental science—has proceeded through intertheory relation and theory reduction, thereby resolving experimental paradoxes to great historical success,[63] resembling the covering law model.[64] In the early 20th century, Ernst Mach as well as Wilhelm Ostwald had resisted Ludwig Boltzmann's reduction of thermodynamics—and thereby Boyle's law[65]—to statistical mechanics partly because it rested on the kinetic theory of gases,[56] hinging on the atomic/molecular theory of matter.[66] Mach as well as Ostwald viewed matter as a variant of energy, and molecules as mathematical illusions,[66] as even Boltzmann thought possible.[67] In 1905, via statistical mechanics, Albert Einstein predicted the phenomenon of Brownian motion—unexplained since reported in 1827 by botanist Robert Brown—which Einstein predicated to cohere with Newton's gravitational theory.[66] Soon, most physicists accepted that atoms and molecules were unobservable yet real.[66] Also in 1905, Einstein explained the electromagnetic field's energy as distributed in particles, an explanation doubted until it helped resolve atomic theory in the 1910s and 1920s.[68] Meanwhile, all known physical phenomena were gravitational or electromagnetic,[69] the two theories of which misaligned.[70] Still, belief in aether as the source of all physical phenomena was virtually unanimous.[71][72][73][74] Confronted with experimental paradoxes,[75] physicists modified the aether's hypothetical properties.[76] Finding the luminiferous aether a useless hypothesis,[77] Einstein in 1905 a priori unified all inertial reference frames to state the special principle of relativity,[78] which, by omitting aether,[79] converted space and time into relative phenomena whose relativity aligned electrodynamics with the Newtonian principle of Galilean relativity or invariance.[63][80] Originally epistemic or instrumental, this was interpreted as ontic or realist—that is, a causal mechanical explanation—and the principle became a theory,[81] refuting Newtonian gravitation.[79][82] By predictive success in 1919, general relativity apparently overthrew Newton's theory, a revolution in science[83] resisted by many yet fulfilled around 1930.[84] In 1925–26, Werner Heisenberg as well as Erwin Schrödinger independently formalized quantum mechanics (QM).[85][86] Despite clashing explanations,[86][87] the two theories made identical predictions.[85] Paul Dirac's 1928 model of the electron was set to special relativity, launching QM into the first quantum field theory (QFT), quantum electrodynamics (QED).[88] From it, Dirac interpreted and predicted the electron's antiparticle, soon discovered and termed the positron,[89] but QED failed at high energies.[90] Elsewhere and otherwise, the strong nuclear force and the weak nuclear force were discovered.[91] In 1941, Richard Feynman introduced QM's path
integral formalism, which, if taken toward interpretation as a causal mechanical model, clashes with Heisenberg's matrix formalism and with Schrödinger's wave formalism,[87] although all three are empirically identical, sharing predictions.[85] Next, working on QED, Feynman sought to model particles without fields and to find the vacuum truly empty.[92] As each known fundamental force[93] is apparently an effect of a field, Feynman failed.[92] Louis de Broglie's wave–particle duality had rendered atomism—indivisible particles in a void—untenable, and highlighted the very notion of discontinuous particles as self-contradictory.[94] Meeting in 1947, Freeman Dyson, Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga soon introduced renormalization, a procedure converting QED into physics' most predictively precise theory,[90][95] subsuming chemistry, optics, and statistical mechanics.[63][96] QED thus won physicists' general acceptance.[97] Paul Dirac criticized its need for renormalization as showing its unnaturalness,[97] and called for an aether.[98] In 1947, Willis Lamb had found an unexpected shift of electron orbitals, displaced because the vacuum is not truly empty.[99] Yet emptiness was catchy, abolishing aether conceptually, and physics proceeded ostensibly without it,[92] even suppressing it.[98] Meanwhile, "sickened by untidy math, most philosophers of physics tend to neglect QED".[97] Physicists have feared even mentioning aether,[100] renamed the vacuum,[98][101] which—as aether—is said to be nonexistent.[98][102] General philosophers of science commonly believe that aether is, rather, fictitious,[103] "relegated to the dustbin of scientific history ever since" 1905 brought special relativity.[104] Einstein was noncommittal on aether's nonexistence,[77] simply calling it superfluous.[79] Abolishing Newtonian motion for electrodynamic primacy, however, Einstein inadvertently reinforced aether,[105] and to explain motion was led back to aether in general relativity.[106][107][108] Yet resistance to relativity theory[109] became associated with earlier theories of aether, whose word and concept became taboo.[110] Einstein explained special relativity's compatibility with an aether,[107] but the Einstein aether, too, was opposed.[100] Objects became conceived as pinned directly on space and time[111] by abstract geometric relations lacking a ghostly or fluid medium.[100][112] By 1970, QED along with the weak nuclear field was reduced to electroweak theory (EWT), and the strong nuclear field was modeled as quantum chromodynamics (QCD).[90] Comprising EWT, QCD, and the Higgs field, this Standard Model of particle physics is an "effective theory",[113] not truly fundamental.[114][115] As QCD's particles are considered nonexistent in the everyday world,[92] QCD especially suggests an aether,[116] routinely found by physics experiments to exist and to exhibit relativistic symmetry.[110] Confirmation of the Higgs particle, modeled as a condensation within the Higgs field, corroborates aether,[100][115] although physics need not state or even include aether.[100] Organizing regularities of observations—as in the covering law model—physicists find the quest to discover aether superfluous.[64] In 1905, from special relativity, Einstein deduced mass–energy equivalence,[117] particles being variant forms of distributed energy,[118] whereby particles colliding at vast speed experience that energy's transformation into mass, producing heavier particles,[119] although physicists' talk promotes confusion.[120] As "the contemporary locus of
metaphysical research", QFTs pose particles not as existing individually, yet as excitation modes of fields,[114][121] the particles and their masses being states of aether,[92] apparently unifying all physical phenomena as the more fundamental causal reality,[101][115][116] as long ago foreseen.[73] Yet a quantum field is an intricate abstraction—a mathematical field—virtually inconceivable as a classical field's physical properties.[121] Nature's deeper aspects, still unknown, might elude any possible field theory.[114][121] Though discovery of causality is popularly thought science's aim, search for it was shunned by the Newtonian research program,[14] even more Newtonian than was Isaac Newton.[92][122] By now, most theoretical physicists infer that the four, known fundamental interactions would reduce to superstring theory, whereby atoms and molecules, after all, are energy vibrations holding mathematical, geometric forms.[63] Given uncertainties of scientific realism,[18] some conclude that the concept causality raises comprehensibility of scientific explanation and thus is key folk science, but compromises precision of scientific explanation and is dropped as a science matures.[123] Even epidemiology is maturing to heed the severe difficulties with presumptions about causality.[14][57][59] Covering law model is among Carl G Hempel's admired contributions to philosophy of science.[124] See also[edit] Types of inference Related subjects 1. ^ a b Woodward, "Scientific explanation", §2 "The DN model", in SEP, 2011. 2. ^ a b James Fetzer, ch 3 "The paradoxes of Hempelian explanation", in Fetzer, ed, Science, Explanation, and Rationality (Oxford U P, 2000), p 113. 3. ^ Montuschi, Objects in Social Science (Continuum, 2003), pp 61–62. 4. ^ Bechtel, Philosophy of Science (Lawrence Erlbaum, 1988), ch 2, subch "DN model of explanation and HD model of theory development", pp 25–26. 5. ^ a b Bechtel, Philosophy of Science (Lawrence Erlbaum, 1988), ch 2, subch "Axiomatic account of theories", pp 27–29. 6. ^ a b c d e f g h Suppe, "Afterword—1977", "Introduction", §1 "Swan song for positivism", §1A "Explanation and intertheoretical reduction", pp 619–24, in Suppe, ed, Structure of Scientific Theories, 2nd edn (U Illinois P, 1977). 7. ^ a b c d e Kenneth F Schaffner, "Explanation and causation in biomedical sciences", pp 79–125, in Laudan, ed, Mind and Medicine (U California P, 1983), p 81. 8. ^ a b G Montalenti, ch 2 "From Aristotle to Democritus via Darwin", in Ayala & Dobzhansky, eds, Studies in the Philosophy of Biology (U California P, 1974). 9. ^ In the 17th century, René Descartes as well as Isaac Newton firmly believed in God as nature's designer and thereby firmly believed in natural purposiveness, yet found teleology to be outside science's inquiry (Bolotin, Approach to Aristotle's Physics, pp 31–33). By 1650, formalizing heliocentrism and launching mechanical philosophy, Cartesian physics overthrew geocentrism as well as Aristotelian physics. In the 1660s, Robert Boyle sought to lift chemistry as a new discipline from alchemy. Newton more especially sought the laws of nature—simply the regularities of phenomena—whereby Newtonian physics, reducing celestial science to terrestrial science, ejected from physics the vestige of Aristotelian metaphysics, thus disconnecting physics and alchemy/chemistry, which then followed its own course, yielding chemistry around 1800. 10. 
^ Nicknames for principles attributed to Hume—Hume's fork, problem of induction, Hume's law—were not created by Hume but by later philosophers labeling them for ease of reference. 11. ^ By Hume's fork, the truths of mathematics and logic as formal sciences are universal through "relations of ideas"—simply abstract truths—thus knowable without experience. On the other hand, the claimed truths of empirical sciences are contingent on "fact and real existence", knowable only upon experience. By Hume's fork, the two categories never cross. Any treatises containing neither can contain only "sophistry and illusion". (Flew, Dictionary, "Hume's fork", p 156). 12. ^ Not privy to the world's either necessities or impossibilities, but by force of habit or mental nature, humans experience a sequence of sensory events, find seeming constant conjunction, make the unrestricted generalization of an enumerative induction, and justify it by presuming uniformity of nature. Humans thus attempt to justify a minor induction by adding a major induction, both logically invalid and unverified by experience—the problem of induction—how humans irrationally presume discovery of causality. (Chakraborti, Logic, p 381; Flew, Dictionary, "Hume", p 156.) 13. ^ For more discursive discussions of types of causality—necessary, sufficient, necessary and sufficient, component, sufficient component, counterfactual—see Rothman & Greenland, Parascandola & Weed, as well as Kundi. Following is more direct elucidation: A necessary cause is a causal condition required for an event to occur. A sufficient cause is a causal condition complete to produce an event. Necessary is not always sufficient, however, since other causal factors—that is, other component causes—might be required to produce the event. Conversely, a sufficient cause is not always a necessary cause, since differing sufficient causes might likewise produce the event. Strictly speaking, a sufficient cause cannot be a single factor, as any causal factor must act causally through many other factors. And although a necessary cause might exist, humans cannot verify one, since humans cannot check every possible state of affairs. (Language can state necessary causality as a tautology—a statement whose terms' arrangement and meanings render it logically true by mere definition—which, as an analytic statement, is uninformative about the actual world. A statement referring to and contingent on the world's actualities is, rather, a synthetic statement.) Sufficient causality is more accurately sufficient component causality—a complete set of component causes interacting within a causal constellation—which, however, is beyond humans' capacity to fully discover. Yet humans tend intuitively to conceive of causality as necessary and sufficient—a single factor both required and complete—the one and only cause, the cause. One may so view flipping a light switch. The switch's flip was not a sufficient cause, however, but was contingent on countless factors—intact bulb, intact wiring, circuit box, bill payment, utility company, neighborhood infrastructure, engineering of technology by Thomas Edison and Nikola Tesla, explanation of electricity by James Clerk Maxwell, harnessing of electricity by Benjamin Franklin, metal refining, metal mining, and on and on—while, whatever the tally of events, nature's causal mechanical structure remains a mystery.
From a Humean perspective, the light's putative inability to come on without the switch's flip is neither a logical necessity nor an empirical finding, since no experience ever reveals that the world either is or will remain universally uniform as to the aspects appearing to bind the switch's flip as the necessary event for the light's coming on. If the light comes on without the switch's flip, surprise will affect one's mind, but one's mind cannot know that the event violated nature. As just a mundane possibility, an activity within the wall could have connected the wires and completed the circuit without the switch's flip. Though apparently enjoying the scandals that trailed his own explanations, Hume was very practical, and his skepticism was quite uneven (Flew p 156). Although Hume rejected orthodox theism and sought to reject metaphysics, Hume supposedly extended Newtonian method to the human mind, which Hume, in a sort of anti-Copernican move, placed as the pivot of human knowledge (Flew p 154). Hume thus placed his own theory of knowledge on par with Newton's theory of motion (Buckle pp 70–71, Redman pp 182–83, Schliesser § abstract). Hume found enumerative induction an unavoidable custom required for one to live (Gattei pp 28–29). Hume found constant conjunction to reveal a modest causality type: counterfactual causality. Silent as to causal role—whether necessity, sufficiency, component strength, or mechanism—counterfactual causality is simply that alteration of a factor prevents or produces the event of interest. 14. ^ a b c d e Kundi M (2006). "Causality and the interpretation of epidemiologic evidence". Environmental Health Perspectives. 114 (7): 969–974. doi:10.1289/ehp.8297. PMC 1513293. PMID 16835045. 15. ^ Hume noted that authors ubiquitously continue for some time stating facts and then suddenly switch to stating norms—supposedly what should be—with barely an explanation. Yet such values, as in ethics or aesthetics or political philosophy, are not found true merely by stating facts: is does not itself reveal ought. Hume's law is the principle that the fact/value gap is unbridgeable—that no statements of facts can ever justify norms—although Hume himself did not state that. Rather, some later philosophers found Hume to merely stop short of stating it, but to have communicated it. Anyway, Hume found that humans acquired morality through experience by communal reinforcement. (Flew, Dictionary, "Hume's law", p 157 & "Naturalistic fallacy", pp 240–41; Wootton, Modern Political Thought, p 306.) 16. ^ Kant inferred that the mind's constants arrange space holding Euclidean geometry—like Newton's absolute space—while objects interact temporally as modeled in Newton's theory of motion, whose law of universal gravitation is a truth synthetic a priori, that is, contingent on experience, indeed, but known universally true without universal experience. Thus, the mind's innate constants cross the tongs of Hume's fork and lay Newton's universal gravitation as a priori truth. 17. ^ a b Chakravartty, "Scientific realism", §1.2 "The three dimensions of realist commitment", in SEP, 2013: "Semantically, realism is committed to a literal interpretation of scientific claims about the world. In common parlance, realists take theoretical statements at 'face value'. According to realism, claims about scientific entities, processes, properties, and relations, whether they be observable or unobservable, should be construed literally as having truth values, whether true or false.
This semantic commitment contrasts primarily with those of so-called instrumentalist epistemologies of science, which interpret descriptions of unobservables simply as instruments for the prediction of observable phenomena, or for systematizing observation reports. Traditionally, instrumentalism holds that claims about unobservable things have no literal meaning at all (though the term is often used more liberally in connection with some antirealist positions today). Some antirealists contend that claims involving unobservables should not be interpreted literally, but as elliptical for corresponding claims about observables". 18. ^ a b Challenges to scientific realism are captured succinctly by Bolotin, Approach to Aristotle's Physics (SUNY P, 1998), pp 33–34, commenting about modern science, "But it has not succeeded, of course, in encompassing all phenomena, at least not yet. For its laws are mathematical idealizations, idealizations, moreover, with no immediate basis in experience and with no evident connection to the ultimate causes of the natural world. For instance, Newton's first law of motion (the law of inertia) requires us to imagine a body that is always at rest or else moving aimlessly in a straight line at a constant speed, even though we never see such a body, and even though according to his own theory of universal gravitation, it is impossible that there can be one. This fundamental law, then, which begins with a claim about what would happen in a situation that never exists, carries no conviction except insofar as it helps to predict observable events. Thus, despite the amazing success of Newton's laws in predicting the observed positions of the planets and other bodies, Einstein and Infeld are correct to say, in The Evolution of Physics, that 'we can well imagine another system, based on different assumptions, might work just as well'. Einstein and Infeld go on to assert that 'physical concepts are free creations of the human mind, and are not, however it may seem, uniquely determined by the external world'. To illustrate what they mean by this assertion, they compare the modern scientist to a man trying to understand the mechanism of a closed watch. If he is ingenious, they acknowledge, this man 'may form some picture of a mechanism which would be responsible for all the things he observes'. But they add that he 'may never quite be sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and he cannot even imagine the possibility or the meaning of such a comparison'. In other words, modern science cannot claim, and it will never be able to claim, that it has the definite understanding of any natural phenomenon". 19. ^ Whereas a hypothetical imperative is practical, simply what one ought to do if one seeks a particular outcome, the categorical imperative is morally universal, what everyone always ought to do. 20. ^ a b Bourdeau, "Auguste Comte", §§ "Abstract" & "Introduction", in Zalta, ed, SEP, 2013. 21. ^ Comte, A General View of Positivism (Trübner, 1865), pp 49–50, including the following passage: "As long as men persist in attempting to answer the insoluble questions which occupied the attention of the childhood of our race, by far the more rational plan is to do as was done then, that is, simply to give free play to the imagination.
These spontaneous beliefs have gradually fallen into disuse, not because they have been disproved, but because humankind has become more enlightened as to its wants and the scope of its powers, and has gradually given an entirely new direction to its speculative efforts". 22. ^ Flew, Dictionary (St Martin's, 1984), "Positivism", p 283. 23. ^ a b c Woodward, "Scientific explanation", §1 "Background and introduction", in SEP, 2011. 24. ^ a b Friedman, Reconsidering Logical Positivism (Cambridge U P, 1999), p xii. 25. ^ Any positivism placed in the 20th century is generally neo, although there was Ernst Mach's positivism nearing 1900, and a general positivistic approach to science—traceable to the inductivist trend from Bacon at 1620, the Newtonian research program at 1687, and Comtean positivism at 1830—that continues in a vague but usually disavowed sense within popular culture and some sciences. 26. ^ Neopositivists are sometimes called "verificationists". 27. ^ • Chakravartty, "Scientific realism", §4 "Antirealism: Foils for scientific realism", §4.1 "Empiricism", in SEP, 2013: "Traditionally, instrumentalists maintain that terms for unobservables, by themselves, have no meaning; construed literally, statements involving them are not even candidates for truth or falsity. The most influential advocates of instrumentalism were the logical empiricists (or logical positivists), including Carnap and Hempel, famously associated with the Vienna Circle group of philosophers and scientists as well as important contributors elsewhere. In order to rationalize the ubiquitous use of terms which might otherwise be taken to refer to unobservables in scientific discourse, they adopted a non-literal semantics according to which these terms acquire meaning by being associated with terms for observables (for example, 'electron' might mean 'white streak in a cloud chamber'), or with demonstrable laboratory procedures (a view called 'operationalism'). Insuperable difficulties with this semantics led ultimately (in large measure) to the demise of logical empiricism and the growth of realism. The contrast here is not merely in semantics and epistemology: a number of logical empiricists also held the neo-Kantian view that ontological questions 'external' to the frameworks for knowledge represented by theories are also meaningless (the choice of a framework is made solely on pragmatic grounds), thereby rejecting the metaphysical dimension of realism (as in Carnap 1950)". • Okasha, Philosophy of Science (Oxford U P, 2002), p 62: "Strictly we should distinguish two sorts of anti-realism. According to the first sort, talk of unobservable entities is not to be understood literally at all. So when a scientist puts forward a theory about electrons, for example, we should not take him to be asserting the existence of entities called 'electrons'. Rather, his talk of electrons is metaphorical. This form of anti-realism was popular in the first half of the 20th century, but few people advocate it today. It was motivated largely by a doctrine in the philosophy of language, according to which it is not possible to make meaningful assertions about things that cannot in principle be observed, a doctrine that few contemporary philosophers accept. The second sort of anti-realism accepts that talk of unobservable entities should be taken at face value: if a theory says that electrons are negatively charged, it is true if electrons do exist and are negatively charged, but false otherwise.
But we will never know which, says the anti-realist. So the correct attitude towards the claims that scientists make about unobservable reality is one of total agnosticism. They are either true or false, but we are incapable of finding out which. Most modern anti-realism is of this second sort". 28. ^ a b Woodward, "Scientific explanation", in Zalta, ed, SEP, 2011, abstract. 29. ^ Carl G Hempel & Paul Oppenheim, "Studies in the logic of explanation", Philosophy of Science, 1948 Apr;15(2):135–175. 30. ^ a b c d Bechtel, Discovering Cell Mechanisms (Cambridge U P, 2006), esp pp 24–25. 31. ^ a b Woodward, "Scientific explanation", §2 "The DN model", §2.3 "Inductive statistical explanation", in Zalta, ed, SEP, 2011. 32. ^ von Wright, Explanation and Understanding (Cornell U P, 1971), p 11. 33. ^ a b Stuart Glennan, "Explanation", § "Covering-law model of explanation", in Sarkar & Pfeifer, eds, Philosophy of Science (Routledge, 2006), p 276. 34. ^ Manfred Riedel, "Causal and historical explanation", in Manninen & Tuomela, eds, Essays on Explanation and Understanding (D Reidel, 1976), pp 3–4. 35. ^ Neopositivism's fundamental tenets were the verifiability criterion of cognitive meaningfulness, the analytic/synthetic gap, and the observation/theory gap. From 1950 to 1951, Carl Gustav Hempel renounced the verifiability criterion. In 1951 Willard Van Orman Quine attacked the analytic/synthetic gap. In 1958, Norwood Russell Hanson blurred the observational/theoretical gap. In 1959, Karl Raimund Popper attacked all of verificationism—he attacked, actually, any type of positivism—by asserting falsificationism. In 1962, Thomas Samuel Kuhn overthrew foundationalism, which was erroneously presumed to be a fundamental tenet of neopositivism. 36. ^ Fetzer, "Carl Hempel", §3 "Scientific reasoning", in SEP, 2013: "The need to dismantle the verifiability criterion of meaningfulness together with the demise of the observational/theoretical distinction meant that logical positivism no longer represented a rationally defensible position. At least two of its defining tenets had been shown to be without merit. Since most philosophers believed that Quine had shown the analytic/synthetic distinction was also untenable, moreover, many concluded that the enterprise had been a total failure. Among the important benefits of Hempel's critique, however, was the production of more general and flexible criteria of cognitive significance in Hempel (1965b), included in a famous collection of his studies, Aspects of Scientific Explanation (1965d). There he proposed that cognitive significance could not be adequately captured by means of principles of verification or falsification, whose defects were parallel, but instead required a far more subtle and nuanced approach. Hempel suggested multiple criteria for assessing the cognitive significance of different theoretical systems, where significance is not categorical but rather a matter of degree: 'Significant systems range from those whose entire extralogical vocabulary consists of observation terms, through theories whose formulation relies heavily on theoretical constructs, on to systems with hardly any bearing on potential empirical findings' (Hempel 1965b: 117). 
The criteria Hempel offered for evaluating the 'degrees of significance' of theoretical systems (as conjunctions of hypotheses, definitions, and auxiliary claims) were (a) the clarity and precision with which they are formulated, including explicit connections to observational language; (b) the systematic—explanatory and predictive—power of such a system, in relation to observable phenomena; (c) the formal simplicity of the systems with which a certain degree of systematic power is attained; and (d) the extent to which those systems have been confirmed by experimental evidence (Hempel 1965b). The elegance of Hempel's study laid to rest any lingering aspirations for simple criteria of 'cognitive significance' and signaled the demise of logical positivism as a philosophical movement". 37. ^ Popper, "Against big words", In Search of a Better World (Routledge, 1996), pp 89–90. 38. ^ Hacohen, Karl Popper: The Formative Years (Cambridge U P, 2000), pp 212–13. 39. ^ Logik der Forschung, published in Austria in 1934, was translated by Popper from German to English, The Logic of Scientific Discovery, and arrived in the Anglosphere in 1959. 40. ^ a b c d Reutlinger, Schurz & Hüttemann, "Ceteris paribus", § 1.1 "Systematic introduction", in Zalta, ed, SEP, 2011. 41. ^ As the scientific study of cells, cytology emerged in the 19th century, yet its technology and methods were insufficient to clearly visualize and establish the existence of any cell organelles beyond the nucleus. 42. ^ The first famed biochemistry experiment was Eduard Buchner's in 1897 (Morange, A History, p 11). The biochemistry discipline soon emerged, initially investigating colloids in biological systems, a "biocolloidology" (Morange p 12; Bechtel, Discovering, p 94). This yielded to macromolecular theory, the term macromolecule introduced by German chemist Hermann Staudinger in 1922 (Morange p 12). 43. ^ Cell biology emerged principally at Rockefeller Institute through new technology (electron microscope and ultracentrifuge) and new techniques (cell fractionation and advancements in staining and fixation). 44. ^ James Fetzer, ch 3 "The paradoxes of Hempelian explanation", in Fetzer J, ed, Science, Explanation, and Rationality (Oxford U P, 2000), pp 121–122. 45. ^ Fetzer, ch 3 in Fetzer, ed, Science, Explanation, and Rationality (Oxford U P, 2000), p 129. 46. ^ a b Bechtel, Philosophy of Science (Lawrence Erlbaum, 1988), ch 1, subch "Areas of philosophy that bear on philosophy of science", § "Metaphysics", pp 8–9, § "Epistemology", p 11. 47. ^ H Atmanspacher, R C Bishop & A Amann, "Extrinsic and intrinsic irreversibility in probabilistic dynamical laws", in Khrennikov, ed, Proceedings (World Scientific, 2001), pp 51–52. 48. ^ Fetzer, ch 3, in Fetzer, ed, Science, Explanation, and Rationality (Oxford U P, 2000), p 118, poses some possible ways that natural laws, so called, when epistemic can fail as ontic: "The underlying conception is that of bringing order to our knowledge of the universe. Yet there are at least three reasons why even complete knowledge of every empirical regularity that obtains during the world's history might not afford an adequate inferential foundation for discovery of the world's laws. First, some laws might remain uninstantiated and therefore not be displayed by any regularity. Second, some regularities may be accidental and therefore not display any law of nature.
And, third, in the case of probabilistic laws, some frequencies might deviate from their generating nomic probabilities 'by chance' and therefore display natural laws in ways that are unrepresentative or biased". 49. ^ This theory reduction occurs if, and apparently only if, the Sun and one planet are modeled as a two-body system, excluding all other planets (Torretti, Philosophy of Physics, pp 60–62). 50. ^ Spohn, Laws of Belief (Oxford U P, 2012), p 305. 51. ^ Whereas fundamental physics has sought laws of universal regularity, special sciences normally include ceteris paribus laws, which are predictively accurate to high probability in "normal conditions" or with "all else equal", but have exceptions (Reutlinger et al § 1.1). Chemistry's laws seem exceptionless in their domains, yet were in principle reduced to fundamental physics (Feynman p 5; Schwarz Fig 1), and so are special sciences. 52. ^ Bechtel, Philosophy of Science (Lawrence Erlbaum, 1988), ch 5, subch "Introduction: Relating disciplines by relating theories", pp 71–72. 53. ^ a b Bechtel, Philosophy of Science (Lawrence Erlbaum, 1988), ch 5, subch "Theory reduction model and the unity of science program", pp 72–76. 54. ^ a b Bem & de Jong, Theoretical Issues (Sage, 2006), pp 45–47. 55. ^ a b c O'Shaughnessy, Explaining Buyer Behavior (Oxford U P, 1992), pp 17–19. 56. ^ a b Spohn, Laws of Belief (Oxford U P, 2012), p 306. 57. ^ a b Karhausen, L. R. (2000). "Causation: The elusive grail of epidemiology". Medicine, Health Care, and Philosophy 3 (1): 59–67. doi:10.1023/A:1009970730507. PMID 11080970. 58. ^ Bechtel, Philosophy of Science (Lawrence Erlbaum, 1988), ch 3, subch "Repudiation of DN model of explanation", pp 38–39. 59. ^ a b c Rothman, K. J.; Greenland, S. (2005). "Causation and causal inference in epidemiology". American Journal of Public Health 95: S144–S150. doi:10.2105/AJPH.2004.059204. PMID 16030331. 60. ^ Boffetta, "Causation in the presence of weak associations", Crit Rev Food Sci Nutr, 2010;50(S1):13–16. 61. ^ Making no commitment as to the particular causal role—such as necessity, or sufficiency, or component strength, or mechanism—counterfactual causality is simply that alteration of a factor from its factual state prevents or produces by any which way the event of interest. 62. ^ In epidemiology, the counterfactual causality is not deterministic, but probabilistic (Parascandola & Weed, "Causation in epidemiology", J Epidemiol Community Health, 2001;55:905–12). 63. ^ a b c d Schwarz, "Recent developments in string theory", Proc Natl Acad Sci U S A, 1998;95:2750–7, esp Fig 1. 64. ^ a b Ben-Menahem, Conventionalism (Cambridge U P, 2006), p 71. 65. ^ Instances of falsity limited Boyle's law to special cases, thus yielding the ideal gas law. 66. ^ a b c d Newburgh et al, "Einstein, Perrin, and the reality of atoms", Am J Phys, 2006, p 478. 67. ^ For a brief review of Boltzmann's view, see ch 3 "Philipp Frank", § 1 "T S Kuhn's interview", in Blackmore et al, eds, Ernst Mach's Vienna 1895–1930 (Kluwer, 2001), p 63, as Frank was a student of Boltzmann soon after Mach's retirement. See "Notes", pp 79–80, #12 for views of Mach and of Ostwald, #13 for views of contemporary physicists generally, and #14 for views of Einstein. The more relevant here is #12: "Mach seems to have had several closely related opinions concerning atomism. First, he often thought the theory might be useful in physics as long as one did not believe in the reality of atoms.
Second, he believed it was difficult to apply the atomic theory to both psychology and physics. Third, his own theory of elements is often called an 'atomistic theory' in psychology in contrast with both gestalt theory and a continuum theory of experience. Fourth, when critical of the reality of atoms, he normally meant the Greek sense of 'indivisible substance' and thought Boltzmann was being evasive by advocating divisible atoms or 'corpuscles' such as would become normal after J J Thomson and the distinction between electrons and nuclei. Fifth, he normally called physical atoms 'things of thought' and was very happy when Ostwald seemed to refute the reality of atoms in 1905. And sixth, after Ostwald returned to atomism in 1908, Mach continued to defend Ostwald's 'energeticist' alternative to atomism". 68. ^ Physicists had explained the electromagnetic field's energy as mechanical energy, like an ocean wave's bodily impact, not water droplets individually showered (Grandy, Everyday Quantum Reality, pp 22–23). In the 1890s, the problem of blackbody radiation was paradoxical until Max Planck theorized the quantum, exhibiting Planck's constant—a minimum unit of energy. The quanta were mysterious, not viewed as particles, yet simply as units of energy. Another paradox, however, was the photoelectric effect. As a shorter wavelength yields more waves per unit distance, a shorter wavelength means a higher wave frequency. Within the electromagnetic spectrum's visible portion, frequency sets the color. Light's intensity, however, is the wave's amplitude, as the wave's height. In a strictly wave explanation, a greater intensity—higher wave amplitude—raises the mechanical energy delivered, namely, the wave's impact, and thereby yields greater physical effect. And yet in the photoelectric effect, only a certain color and beyond—a certain frequency and higher—was found to knock electrons off a metal surface. Below that frequency or color, raising the intensity of the light still knocked no electrons off. Einstein modeled Planck's quanta as each a particle whose individual energy was Planck's constant multiplied by the light wave's frequency: at only a certain frequency and beyond would each particle be energetic enough to eject an electron from its orbital. Although elevating the intensity of light would deliver more energy—more total particles—each individual particle would still lack sufficient energy to dislodge an electron. Einstein's model, far more intricate, used probability theory to explain the rates of electron ejections as rates of collisions with electromagnetic particles. This revival of the particle hypothesis of light—generally attributed to Newton—was widely doubted. By 1920, however, the explanation helped solve problems in atomic theory, and thus quantum mechanics emerged. In 1926, Gilbert N Lewis termed the particles photons. QED models them as the electromagnetic field's messenger particles or force carriers, emitted and absorbed by electrons and by other particles undergoing transitions. 69. ^ Wolfson, Simply Einstein (W W Norton & Co, 2003), p 67. 70. ^ Newton's gravitational theory at 1687 had postulated absolute space and absolute time. To fit Young's transverse wave theory of light at 1804, space was theoretically filled with Fresnel's luminiferous aether at 1814. By Maxwell's electromagnetic field theory of 1865, light always holds a constant speed, which, however, must be relative to something, apparently to aether.
Yet if light's speed is constant relative to aether, then a body's motion through aether would be relative to—thus vary in relation to—light's speed. Even Earth's vast speed, multiplied by experimental ingenuity with an interferometer by Michelson & Morley at 1887, revealed no apparent aether drift—light speed apparently constant, an absolute. Thus, both Newton's gravitational theory and Maxwell's electromagnetic theory each had its own relativity principle, yet the two were incompatible. For a brief summary, see Wilczek, Lightness of Being (Basic Books, 2008), pp 78–80. 71. ^ Cordero, EPSA Philosophy of Science (Springer, 2012), pp 26–28. 72. ^ Hooper, Aether and Gravitation (Chapman & Hall, 1903), pp 122–23. 73. ^ a b Lodge, "The ether of space", Sci Am Suppl, 1909;67:202–03. 74. ^ Even Mach, who shunned all hypotheses beyond direct sensory experience, presumed an aether, required for motion to not violate mechanical philosophy's founding principle, No instant interaction at a distance (Einstein, "Ether", Sidelights (Methuen, 1922), pp 15–18). 75. ^ Rowlands, Oliver Lodge (Liverpool U P, 1990), pp 159–60: "Lodge's ether experiments have become part of the historical background leading up to the establishment of special relativity and their significance is usually seen in this context. Special relativity, it is stated, eliminated both the ether and the concept of absolute motion from physics. Two experiments were involved: that of Michelson and Morley, which showed that bodies do not move with respect to a stationary ether, and that of Lodge, which showed that moving bodies do not drag ether with them. With the emphasis on relativity, the Michelson–Morley experiment has come to be seen as the more significant of the two, and Lodge's experiment becomes something of a detail, a matter of eliminating the final, and less likely, possibility of a nonstationary, viscous, all-pervading medium. It could be argued that almost the exact opposite may have been the case. The Michelson–Morley experiment did not prove that there was no absolute motion, and it did not prove that there was no stationary ether. Its results—and the FitzGerald–Lorentz contraction—could have been predicted on Heaviside's, or even Maxwell's, theory, even if no experiment had ever taken place. The significance of the experiment, though considerable, is purely historical, and in no way factual. Lodge's experiment, on the other hand, showed that, if an ether existed, then its properties must be quite different from those imagined by mechanistic theorists. The ether which he always believed existed had to acquire entirely new properties as a result of this work". 76. ^ Mainly Hendrik Lorentz as well as Henri Poincaré modified electrodynamic theory and, more or less, developed the special theory of relativity before Einstein did (Ohanian, Einstein's Mistakes, pp 281–85). Yet Einstein, a free thinker, took the next step and stated it, more elegantly, without aether (Torretti, Philosophy of Physics, p 180). 77. ^ a b Tavel, Contemporary Physics (Rutgers U P, 2001), p 66. 78. ^ Introduced soon after Einstein explained Brownian motion, special relativity holds only in cases of inertial motion, that is, unaccelerated motion. Inertia is the state of a body experiencing no acceleration, whether by change in speed—either quickening or slowing—or by change in direction, and thus exhibits constant velocity, which is speed plus direction. 79. ^ a b c Cordero, EPSA Philosophy of Science (Springer, 2012), pp 29–30. 80.
^ To explain absolute light speed without aether, Einstein modeled that a body in motion in an electromagnetic field experiences length contraction and time dilation, which Lorentz and Poincaré had already modeled as Lorentz–FitzGerald contraction and Lorentz transformation but by hypothesizing dynamic states of the aether, whereas Einstein's special relativity was simply kinematic, that is, positing no causal mechanical explanation, simply describing positions, thus showing how to align measuring devices, namely, clocks and rods (Ohanian, Einstein's Mistakes, pp 281–85). 81. ^ Ohanian, Einstein's Mistakes (W W Norton, 2008), pp 281–85. 82. ^ Newton's theory required absolute space and time. 83. ^ Buchen, "May 29, 1919", Wired, 2009. Moyer, "Revolution", in Studies in the Natural Sciences (Springer, 1979), p 55. Melia, Black Hole (Princeton U P, 2003), pp 83–87. 84. ^ Crelinsten, Einstein's Jury (Princeton U P, 2006), p 28. 85. ^ a b c From 1925 to 1926, independently but nearly simultaneously, Werner Heisenberg as well as Erwin Schrödinger developed quantum mechanics (Zee in Feynman, QED, p xiv). Schrödinger introduced wave mechanics, whose wave function is discerned by a partial differential equation, now termed the Schrödinger equation (p xiv). Heisenberg, who also stated the uncertainty principle, along with Max Born and Pascual Jordan introduced matrix mechanics, which rather confusingly talked of operators acting on quantum states (p xiv). If taken as causal mechanically explanatory, the two formalisms vividly disagree, and yet are indiscernible empirically, that is, when not used for interpretation, and taken as simply formalism (p xv). In 1941, at a party in a tavern in Princeton, New Jersey, visiting physicist Herbert Jehle mentioned to Richard Feynman a different formalism suggested by Paul Dirac, who developed bra–ket notation, in 1932 (p xv). The next day, Feynman completed Dirac's suggested approach as sum over histories or sum over paths or path integrals (p xv). Feynman would joke that this approach—which sums all possible paths that a particle could take, as though the particle actually takes them all, canceling themselves out except for one pathway, the particle's most efficient—abolishes the uncertainty principle (p xvi). Empirically equivalent, Schrödinger's wave formalism, Heisenberg's matrix formalism, and Feynman's path integral formalism all incorporate the uncertainty principle (p xvi). There is no particular barrier to additional formalisms, which could be—but simply have not been—developed and widely disseminated (p xvii). In a particular physical discipline, however, and on a particular problem, one of the three formalisms might be easier than others to operate (pp xvi–xvii). By the 1960s, path integral formalism had virtually vanished from use, while matrix formalism was the "canonical" one (p xvii). In the 1970s, path integral formalism made a "roaring comeback", became the predominant means to make predictions from QFT, and impelled Feynman to an aura of mystique (p xviii). 86. ^ a b Cushing, Quantum Mechanics (U Chicago P, 1994), pp 113–18. 87. ^ a b Schrödinger's wave mechanics posed an electron's charge smeared across space as a waveform, later reinterpreted as the electron manifesting across space probabilistically but nowhere definitely while eventually building up that deterministic waveform. Heisenberg's matrix mechanics confusingly talked of operators acting on quantum states.
Richard Feynman introduced QM's path integral formalism—interpretable as a particle traveling all paths imaginable, canceling themselves, leaving just one, the most efficient—predictively identical with Heisenberg's matrix formalism and with Schrödinger's wave formalism. 88. ^ Torretti, Philosophy of Physics (Cambridge U P, 1999), pp 393–95. 89. ^ Torretti, Philosophy of Physics (Cambridge U P, 1999), p 394. 90. ^ a b c Torretti, Philosophy of Physics (Cambridge U P, 1999), p 395. 91. ^ Recognition of the strong force permitted the Manhattan Project to engineer Little Boy and Fat Man, dropped on Japan, whereas effects of the weak force were seen in their aftermath—radioactive fallout, with diverse health consequences. 92. ^ a b c d e f Wilczek, "The persistence of ether", Phys Today, 1999;52:11,13, p 13. 93. ^ The four known fundamental interactions are gravitational, electromagnetic, weak nuclear, and strong nuclear. 94. ^ Grandy, Everyday Quantum Reality (Indiana U P, 2010), pp 24–25. 95. ^ Schweber, QED and the Men Who Made It (Princeton U P, 1994). 96. ^ Feynman, QED (Princeton U P, 2006), p 5. 97. ^ a b c Torretti, Philosophy of Physics (Cambridge U P, 1999), pp 395–96. 98. ^ a b c d Cushing, Quantum Mechanics (U Chicago P, 1994), pp 158–59. 99. ^ Close, "Much ado about nothing", Nova, PBS/WGBH, 2012: "This new quantum mechanical view of nothing began to emerge in 1947, when Willis Lamb measured the spectrum of hydrogen. The electron in a hydrogen atom cannot move wherever it pleases but instead is restricted to specific paths. This is analogous to climbing a ladder: You cannot end up at arbitrary heights above ground, only those where there are rungs to stand on. Quantum mechanics explains the spacing of the rungs on the atomic ladder and predicts the frequencies of radiation that are emitted or absorbed when an electron switches from one to another. According to the state of the art in 1947, which assumed the hydrogen atom to consist of just an electron, a proton, and an electric field, two of these rungs have identical energy. However, Lamb's measurements showed that these two rungs differ in energy by about one part in a million. What could be causing this tiny but significant difference? "When physicists drew up their simple picture of the atom, they had forgotten something: Nothing. Lamb had become the first person to observe experimentally that the vacuum is not empty, but is instead seething with ephemeral electrons and their anti-matter analogues, positrons. These electrons and positrons disappear almost instantaneously, but in their brief mayfly moment of existence they alter the shape of the atom's electromagnetic field slightly. This momentary interaction with the electron inside the hydrogen atom kicks one of the rungs of the ladder just a bit higher than it would be otherwise." 100. ^ a b c d e 101. ^ a b Riesselmann, "Concept of ether in explaining forces", Inquiring Minds, Fermilab, 2008. 102. ^ Close, "Much ado about nothing", Nova, PBS/WGBH, 2012. 103. ^ On "historical examples of empirically successful theories that later turn out to be false", Okasha, Philosophy of Science (Oxford U P, 2002), p 65, concludes, "One that remains is the wave theory of light, first put forward by Christian Huygens in 1690. According to this theory, light consists of wave-like vibrations in an invisible medium called the ether, which was supposed to permeate the whole universe.
(The rival to the wave theory was the particle theory of light, favoured by Newton, which held that light consists of very small particles emitted by the light source.) The wave theory was not widely accepted until the French physicist Auguste Fresnel formulated a mathematical version of the theory in 1815, and used it to predict some surprising new optical phenomena. Optical experiments confirmed Fresnel's predictions, convincing many 19th-century scientists that the wave theory of light must be true. But modern physics tells us that the theory is not true: there is no such thing as the ether, so light doesn't consist of vibrations in it. Again, we have an example of a false but empirically successful theory". 104. ^ Pigliucci, Answers for Aristotle (Basic Books, 2012), p 119: "But the antirealist will quickly point out that plenty of times in the past scientists have posited the existence of unobservables that were apparently necessary to explain a phenomenon, only to discover later on that such unobservables did not in fact exist. A classic case is the aether, a substance that was supposed by nineteenth-century physicists to permeate all space and make it possible for electromagnetic radiation (like light) to propagate. It was Einstein's special theory of relativity, proposed in 1905, that did away with the necessity of aether, and the concept has been relegated to the dustbin of scientific history ever since. The antirealists will relish pointing out that modern physics features a number of similarly unobservable entities, from quantum mechanical 'foam' to dark energy, and that the current crop of scientists seems just as confident about the latter two as their nineteenth-century counterparts were about aether". 105. ^ Wilczek, Lightness of Being (Basic Books, 2008), pp 78–80. 106. ^ Laughlin, A Different Universe (Basic Books, 2005), pp 120–21. 107. ^ a b Einstein, "Ether", Sidelights (Methuen, 1922), pp 14–18. 108. ^ The Lorentz aether was at absolute rest—acting on matter but not acted on by matter. Replacing it and resembling Ernst Mach's aether, the Einstein aether is spacetime itself—which is the gravitational field—receiving motion from a body and transmitting it to other bodies while propagating at light speed, waving. An unobservable, however, the Einstein aether is not a privileged reference frame—it is not to be assigned a state of absolute motion or absolute rest. 109. ^ Relativity theory comprises both special relativity (SR) and general relativity (GR). Holding for inertial reference frames, SR is a limited case of GR, which holds for all reference frames, both inertial and accelerated. In GR, all motion—inertial, accelerated, or gravitational—is consequent of the geometry of 3D space stretched onto the 1D axis of time. By GR, no force distinguishes acceleration from inertia. Inertial motion is a consequence simply of uniform geometry of spacetime, acceleration is a consequence simply of nonuniform geometry of spacetime, and gravitation is simply acceleration. 111. ^ In Einstein's 4D spacetime, 3D space is stretched onto the 1D axis of time flow, which slows while space additionally contracts in the vicinity of mass or energy. 112. ^ Torretti, Philosophy of Physics (Cambridge U P, 1999), p 180. 113. ^ As an effective field theory, once adjusted to particular domains, the Standard Model is predictively accurate until a certain, vast energy scale that is a cutoff, whereupon more fundamental phenomena—regulating the effective theory's modeled phenomena—would emerge.
(Burgess & Moore, Standard Model, p xi; Wells, Effective Theories, pp 55–56). 114. ^ a b c Torretti, Philosophy of Physics (Cambridge U P, 1999), p 396. 115. ^ a b c Jegerlehner, "The Standard Model as a low-energy effective theory", arXiv:1304.7813: "We understand the SM as a low energy effective emergence of some unknown physical system—we may call it 'ether'—which is located at the Planck scale with the Planck length as a 'microscopic' length scale. Note that the cutoff, though very large, in any case is finite". 116. ^ a b Wilczek, Lightness of Being (Basic Books, 2008), ch 8 "The grid (persistence of ether)", p 73: "For natural philosophy, the most important lesson we learn from QCD is that what we perceive as empty space is in reality a powerful medium whose activity molds the world. Other developments in modern physics reinforce and enrich that lesson. Later, as we explore the current frontiers, we'll see how the concept of 'empty' space as a rich, dynamic medium empowers our best thinking about how to achieve the unification of forces". 117. ^ Mass–energy equivalence is formalized in the equation E=mc2. 118. ^ Einstein, "Ether", Sidelights (Methuen, 1922), p 13: "[A]ccording to the special theory of relativity, both matter and radiation are but special forms of distributed energy, ponderable mass losing its isolation and appearing as a special form of energy". 119. ^ Braibant, Giacomelli & Spurio, Particles and Fundamental Interactions (Springer, 2012), p 2: "Any particle can be created in collisions between two high energy particles thanks to a process of transformation of energy in mass". 120. ^ Brian Greene explained, "People often have the wrong image of what happens inside the LHC, and I am just as guilty as anyone of perpetuating it. The machine does not smash together particles to pulverise them and see what is inside. Rather, it collides them at extremely high energy. Since, by dint of Einstein's famous equation, E=mc2, energy and mass are one and the same, the combined energy of the collision can be converted into a mass, in other words, a particle, that is heavier than either of the colliding protons. The more energy is involved in the collision, the heavier the particles that might come into being" [Avent, "The Q&A", Economist, 2012]. 121. ^ a b c Kuhlmann, "Physicists debate", Sci Am, 2013. 122. ^ Whereas Newton's Principia inferred absolute space and absolute time, omitted an aether, and, by Newton's law of universal gravitation, formalized action at a distance—a supposed force of gravitation spanning the entire universe instantly—Newton's later work Optiks introduced an aether binding bodies' matter, yet denser outside bodies, and, not uniformly distributed across all space, in some locations condensed, whereby "aethereal spirits" mediate electricity, magnetism, and gravitation. (Whittaker, A History of Theories of Aether (Longmans, Green & Co: 1910), pp 17–18) 123. ^ Norton, "Causation as folk science", in Price & Corry, eds, Mature Causation, Physics, and the Constitution of Reality (Oxford U P, 2007), esp p 12. Further reading[edit]
Scattering and tunnelling
A free course by The Open University
Scattering is fundamental to almost everything we know about the world, such as why the sky is blue. Tunnelling is entirely quantum-mechanical and gives rise to such phenomena as nuclear fusion in stars. Scattering and tunnelling is a free course that investigates examples and applications of both these fascinating concepts. After studying the course, you should be able to:
• explain the meanings of the emboldened terms and use them appropriately;
• describe the behaviour of wave packets when they encounter potential energy steps, barriers and wells;
• describe how stationary-state solutions of the Schrödinger equation can be used to analyse scattering and tunnelling;
• for a range of simple potential energy functions, obtain the solution of the time-independent Schrödinger equation and use continuity boundary conditions to find reflection and transmission coefficients;
• present information about solutions of the time-independent Schrödinger equation in graphical terms;
• evaluate probability density currents and explain their significance;
• describe and comment on applications of scattering and tunnelling in a range of situations including: three-dimensional scattering, alpha decay, nuclear fusion in stars, and the scanning tunnelling microscope.
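As a concrete taste of the last few outcomes, the sketch below (my own illustration, not part of the course materials) evaluates the standard transmission coefficient for a particle tunnelling through a rectangular barrier of height V0 and width a, with E < V0, namely T = [1 + V0² sinh²(κa) / (4E(V0 − E))]⁻¹, where κ = √(2m(V0 − E))/ħ:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electron volt

def transmission(E_eV, V0_eV, a_m):
    """Transmission coefficient for a rectangular barrier, valid for E < V0."""
    E, V0 = E_eV * eV, V0_eV * eV
    kappa = math.sqrt(2 * m_e * (V0 - E)) / hbar  # decay constant inside barrier
    return 1.0 / (1.0 + (V0**2 * math.sinh(kappa * a_m)**2) / (4 * E * (V0 - E)))

# A 1 eV electron meeting a 2 eV barrier 0.5 nm wide:
print(transmission(1.0, 2.0, 0.5e-9))  # ~0.02: classically forbidden, yet nonzero
```

Classically the particle would simply reflect; the small but finite T is the tunnelling the course is about.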
Quantum Mechanics: Hydrogen Atom and Electron Spin
By Dragica Vasileska (Arizona State University) and Gerhard Klimeck (Purdue University)
A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral atom contains a single positively-charged proton and a single negatively-charged electron bound to the nucleus by the Coulomb force. The most abundant isotope, hydrogen-1, protium, or light hydrogen, contains no neutrons; other isotopes contain one or more neutrons. This article primarily concerns hydrogen-1. The hydrogen atom has special significance in quantum mechanics and quantum field theory as a simple two-body physical system which has yielded many simple analytical solutions in closed form. In 1913, Niels Bohr obtained the spectral frequencies of the hydrogen atom after making a number of simplifying assumptions. These assumptions, the cornerstones of the Bohr model, were not fully correct but did yield the correct energy answers. Bohr's results for the frequencies and underlying energy values were confirmed by the full quantum-mechanical analysis which uses the Schrödinger equation, as was shown in 1925/26. The solution to the Schrödinger equation for hydrogen is analytical. From this, the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines can be calculated. The solution of the Schrödinger equation goes much further than the Bohr model, however, because it also yields the shape of the electron's wave function ("orbital") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds. The Schrödinger equation also applies to more complicated atoms and molecules. However, in most such cases the solution is not analytical, and either computer calculations are necessary or simplifying assumptions must be made. The solution of the Schrödinger equation for the hydrogen atom is provided below:
• Slides on the solution of the Schrödinger equation for the hydrogen atom
In physics and chemistry, spin refers to a non-classical kind of angular momentum intrinsic to a body, as opposed to orbital angular momentum, which is the motion of its center of mass about an external point. A particle's spin is essentially the direction a particle turns along a given axis, which in turn can be used to determine the particle's magnetism.[1] Although this special property is only explained in the relativistic quantum mechanics of Paul Dirac, it plays a most important role already in non-relativistic quantum mechanics, e.g., it essentially determines the structure of atoms. In classical mechanics, any spin angular momentum of a body is associated with self rotation, e.g., the rotation of the body around its own center of mass. For example, the spin of the Earth is associated with its daily rotation about the polar axis. On the other hand, the orbital angular momentum of the Earth is associated with its annual motion around the Sun. In fact, in classical theories there is no analogue to the quantum mechanical property meant by the name spin. The concept of this nonclassical property of elementary particles was first proposed in 1925 by Ralph Kronig, George Uhlenbeck, and Samuel Goudsmit; but the name most associated with the phenomenon of spin in physics is Wolfgang Pauli.
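Since spin has no classical analogue, its cleanest characterization is algebraic: the spin-1/2 operators S_i = (ħ/2)σ_i built from the Pauli matrices. The short numerical check below is a generic illustration (not taken from the cited slides) that these operators obey the angular-momentum commutation relation [S_x, S_y] = iħS_z and that any component has just the two eigenvalues ±ħ/2:

```python
import numpy as np

hbar = 1.0  # work in natural units
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sx @ sy - sy @ sx
print(np.allclose(commutator, 1j * hbar * sz))  # True: [Sx, Sy] = i*hbar*Sz

# Eigenvalues of any component are +/- hbar/2 -- the two "spin states":
print(np.linalg.eigvalsh(sz))  # [-0.5, 0.5]
```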
Applications of spin in nanoelectronics are given in the presentation slides below:
• The story of the two spins

Cite this work
Researchers should cite this work as follows:
• Dragica Vasileska; Gerhard Klimeck (2008), "Quantum Mechanics: Hydrogen Atom and Electron Spin," https://nanohub.org/resources/4995.

In This Series
1. Quantum Mechanics: Hydrogen Atom
2. Quantum Mechanics: The story of the electron spin
One of the most remarkable discoveries associated with quantum physics is the fact that elementary particles can possess non-zero spin. Elementary particles are particles that cannot be divided into any smaller units, such as the photon, the electron, and the various quarks. Theoretical and...
atom, smallest unit into which matter can be divided without the release of electrically charged particles. It also is the smallest unit of matter that has the characteristic properties of a chemical element. As such, the atom is the basic building block of chemistry.

(Figure: In the shell atomic model, electrons occupy different energy levels, or shells. The K and L shells are shown for a neon atom.)

Most of the atom is empty space. The rest consists of a positively charged nucleus of protons and neutrons surrounded by a cloud of negatively charged electrons. The nucleus is small and dense compared with the electrons, which are the lightest charged particles in nature. Electrons are attracted to any positive charge by their electric force; in an atom, electric forces bind the electrons to the nucleus.

Because of the nature of quantum mechanics, no single image has been entirely satisfactory at visualizing the atom’s various characteristics, which thus forces physicists to use complementary pictures of the atom to explain different properties. In some respects, the electrons in an atom behave like particles orbiting the nucleus. In others, the electrons behave like waves frozen in position around the nucleus. Such wave patterns, called orbitals, describe the distribution of individual electrons. The behaviour of an atom is strongly influenced by these orbital properties, and its chemical properties are determined by orbital groupings known as shells.

This article opens with a broad overview of the fundamental properties of the atom and its constituent particles and forces. Following this overview is a historical survey of the most influential concepts about the atom that have been formulated through the centuries. For additional information pertaining to nuclear structure and elementary particles, see subatomic particles.

Atomic model

(Figure: Chemical bonding is the interaction of atoms to join and form molecules and other stable forms of matter. When atoms approach one another, their nuclei and electrons distribute themselves in space so that the total energy is lower than it would be in any other configuration. The number of bonds an atom can form is called its valence, or valency. Atoms can share unpaired electrons, which creates a covalent bond, or transfer them from one atom to the other, which creates an ionic bond. A metallic bond forms in closely packed metal atoms in which the outer electron shell of each atom overlaps with a large number of nearby atoms; in this type of bond, the valence electrons continually move from one atom to another.)

Most matter consists of an agglomeration of molecules, which can be separated relatively easily. Molecules, in turn, are composed of atoms joined by chemical bonds that are more difficult to break. Each individual atom consists of smaller particles—namely, electrons and nuclei. These particles are electrically charged, and the electric forces on the charge are responsible for holding the atom together. Attempts to separate these smaller constituent particles require ever-increasing amounts of energy and result in the creation of new subatomic particles, many of which are charged. As noted in the introduction to this article, an atom consists largely of empty space. The nucleus is the positively charged centre of an atom and contains most of its mass. It is composed of protons, which have a positive charge, and neutrons, which have no charge.
Protons, neutrons, and the electrons surrounding them are long-lived particles present in all ordinary, naturally occurring atoms. Other subatomic particles may be found in association with these three types of particles. They can be created only with the addition of enormous amounts of energy, however, and are very short-lived.

All atoms are roughly the same size, whether they have 3 or 90 electrons. Approximately 50 million atoms of solid matter lined up in a row would measure 1 cm (0.4 inch). A convenient unit of length for measuring atomic sizes is the angstrom (Å), defined as 10−10 metre. The radius of an atom measures 1–2 Å. Compared with the overall size of the atom, the nucleus is even more minute. It is in the same proportion to the atom as a marble is to a football field. In volume the nucleus takes up only about 10−14 of the space in the atom—i.e., 1 part in 100,000,000,000,000. A convenient unit of length for measuring nuclear sizes is the femtometre (fm), which equals 10−15 metre. The diameter of a nucleus depends on the number of particles it contains and ranges from about 4 fm for a light nucleus such as carbon to 15 fm for a heavy nucleus such as lead. In spite of the small size of the nucleus, virtually all the mass of the atom is concentrated there. The protons are massive, positively charged particles, whereas the neutrons have no charge and are slightly more massive than the protons. The fact that nuclei can have anywhere from 1 to nearly 300 protons and neutrons accounts for their wide variation in mass. The lightest nucleus, that of hydrogen, is 1,836 times more massive than an electron, while heavy nuclei are nearly 500,000 times more massive.

Basic properties

Atomic number

The single most important characteristic of an atom is its atomic number (usually denoted by the letter Z), which is defined as the number of units of positive charge (protons) in the nucleus. For example, if an atom has a Z of 6, it is carbon, while a Z of 92 corresponds to uranium. A neutral atom has an equal number of protons and electrons so that the positive and negative charges exactly balance. Since it is the electrons that determine how one atom interacts with another, in the end it is the number of protons in the nucleus that determines the chemical properties of an atom.

Atomic mass and isotopes

The number of neutrons in a nucleus affects the mass of the atom but not its chemical properties. Thus, a nucleus with six protons and six neutrons will have the same chemical properties as a nucleus with six protons and eight neutrons, although the two masses will be different. Nuclei with the same number of protons but different numbers of neutrons are said to be isotopes of each other. All chemical elements have many isotopes. It is usual to characterize different isotopes by giving the sum of the number of protons and neutrons in the nucleus—a quantity called the atomic mass number. In the above example, the first atom would be called carbon-12 or 12C (because it has six protons and six neutrons), while the second would be carbon-14 or 14C. The mass of atoms is measured in terms of the atomic mass unit, which is defined to be 1/12 of the mass of an atom of carbon-12, or 1.660538921 × 10−24 gram. The mass of an atom consists of the mass of the nucleus plus that of the electrons, so the atomic mass unit is not exactly the same as the mass of the proton or neutron.
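The bookkeeping described above (Z fixes the element, Z + N gives the mass number, and the atomic mass unit converts mass numbers to grams) can be made concrete in a few lines of Python; the three-element lookup table below is purely illustrative:

AMU_GRAMS = 1.660538921e-24  # grams per atomic mass unit (value quoted above)
ELEMENTS = {1: "hydrogen", 6: "carbon", 92: "uranium"}  # tiny sample table

def describe(protons, neutrons):
    name = ELEMENTS[protons]           # Z alone fixes the element...
    mass_number = protons + neutrons   # ...while N only shifts the mass
    return f"{name}-{mass_number}, roughly {mass_number * AMU_GRAMS:.3e} g"

print(describe(6, 6))  # carbon-12
print(describe(6, 8))  # carbon-14: same chemistry, different mass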
The electron

Charge, mass, and spin

(Figure: Millikan oil-drop experiment. By comparing applied electric force with changes in the motion of the oil drops, Millikan was able to determine the electric charge on each drop.)

Scientists have known since the late 19th century that the electron has a negative electric charge. The value of this charge was first measured by the American physicist Robert Millikan, who conducted a series of oil-drop experiments between 1909 and 1910. In Millikan’s experiment, he suspended tiny oil drops in a chamber containing an oil mist. By measuring the rate of fall of the oil drops, he was able to determine their weight. Oil drops that had an electric charge (acquired, for example, by friction when moving through the air) could then be slowed down or stopped by applying an electric force. By comparing applied electric force with changes in motion, Millikan was able to determine the electric charge on each drop. After he had measured many drops, he found that the charges on all of them were simple multiples of a single number. This basic unit of charge was the charge on the electron, and the different charges on the oil drops corresponded to those having 2, 3, 4,… extra electrons on them. The charge on the electron is now accepted to be 1.602176565 × 10−19 coulomb. For this work Millikan was awarded the Nobel Prize for Physics in 1923.
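The final step of that analysis, recovering a common unit from many measured charges, can be sketched in Python; the drop charges below are synthetic, fabricated from the accepted value of e purely for illustration:

e = 1.602176565e-19  # accepted electron charge, used only to fabricate data
charges = sorted(n * e for n in (2, 3, 5, 7))  # toy "measured" drop charges

# Differences between successive charges are also integer multiples of the
# unit; in this toy data set the smallest difference is the unit itself.
diffs = [b - a for a, b in zip(charges, charges[1:])]
unit = min(diffs)
assert all(abs(d / unit - round(d / unit)) < 1e-9 for d in diffs)
print(f"estimated fundamental charge: {unit:.6e} C")  # ~1.602177e-19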
The charge on the proton is equal in magnitude to that on the electron but opposite in sign—that is, the proton has a positive charge. Because opposite electric charges attract each other, there is an attractive force between electrons and protons. This force is what keeps electrons in orbit around the nucleus, something like the way that gravity keeps the Earth in orbit around the Sun. The electron has a mass of about 9.109382911 × 10−28 gram. The mass of a proton or neutron is about 1,836 times larger. This explains why the mass of an atom is primarily determined by the mass of the protons and neutrons in the nucleus.

The electron has other intrinsic properties. One of these is called spin. The electron can be pictured as being something like Earth, spinning around an axis of rotation. In fact, most elementary particles have this property. Unlike Earth, however, they exist in the subatomic world and are governed by the laws of quantum mechanics. Therefore, these particles cannot spin in any arbitrary way, but only at certain specific rates. These rates can be 1/2, 1, 3/2, 2,… times a basic unit of rotation. Like protons and neutrons, electrons have spin 1/2. Particles with half-integer spin are called fermions, for the Italian American physicist Enrico Fermi, who investigated their properties in the first half of the 20th century. Fermions have one important property that will help explain both the way that electrons are arranged in their orbits and the way that protons and neutrons are arranged inside the nucleus. They are subject to the Pauli exclusion principle (named for the Austrian physicist Wolfgang Pauli), which states that no two fermions can occupy the same state—for example, the two electrons in a helium atom must have different spin directions if they occupy the same orbit.

Because a spinning electron can be thought of as a moving electric charge, electrons can be thought of as tiny electromagnets. This means that, like any other magnet, an electron will respond to the presence of a magnetic field by twisting. (Think of a compass needle pointing north under the influence of the Earth’s magnetic field.) This fact is usually expressed by saying that electrons have a magnetic moment. In physics, magnetic moment relates the strength of a magnetic field to the torque experienced by a magnetic object. Because of their intrinsic spin, electrons have a magnetic moment given by −9.28 × 10−24 joule per tesla.

Orbits and energy levels

(Figure: The electron travels in circular orbits around the nucleus. The orbits have quantized sizes and energies. Energy is emitted from the atom when the electron jumps from one orbit to another closer to the nucleus. Shown here is the first Balmer transition, in which an electron jumps from orbit n = 3 to orbit n = 2, producing a photon of red light with an energy of 1.89 eV and a wavelength of 656 nanometres.)

Unlike planets orbiting the Sun, electrons cannot be at any arbitrary distance from the nucleus; they can exist only in certain specific locations called allowed orbits. This property, first explained by the Danish physicist Niels Bohr in 1913, is another result of quantum mechanics—specifically, the requirement that the angular momentum of an electron in orbit, like everything else in the quantum world, come in discrete bundles called quanta. In the Bohr atom electrons can be found only in allowed orbits, and these allowed orbits are at different energies. The orbits are analogous to a set of stairs in which the gravitational potential energy is different for each step and in which a ball can be found on any step but never in between.

The laws of quantum mechanics describe the process by which electrons can move from one allowed orbit, or energy level, to another. As with many processes in the quantum world, this process is impossible to visualize. An electron disappears from the orbit in which it is located and reappears in its new location without ever appearing any place in between. This process is called a quantum leap or quantum jump, and it has no analog in the macroscopic world. Because different orbits have different energies, whenever a quantum leap occurs, the energy possessed by the electron will be different after the jump. For example, if an electron jumps from a higher to a lower energy level, the lost energy will have to go somewhere and in fact will be emitted by the atom in a bundle of electromagnetic radiation. This bundle is known as a photon, and this emission of photons with a change of energy levels is the process by which atoms emit light. See also laser. In the same way, if energy is added to an atom, an electron can use that energy to make a quantum leap from a lower to a higher orbit. This energy can be supplied in many ways. One common way is for the atom to absorb a photon of just the right frequency. For example, when white light is shone on an atom, it selectively absorbs those frequencies corresponding to the energy differences between allowed orbits. Each element has a unique set of energy levels, and so the frequencies at which it absorbs and emits light act as a kind of fingerprint, identifying the particular element. This property of atoms has given rise to spectroscopy, a science devoted to identifying atoms and molecules by the kind of radiation they emit or absorb.
This picture of the atom, with electrons moving up and down between allowed orbits, accompanied by the absorption or emission of energy, contains the essential features of the Bohr atomic model, for which Bohr received the Nobel Prize for Physics in 1922. His basic model does not work well in explaining the details of the structure of atoms more complicated than hydrogen, however. This requires the introduction of quantum mechanics. In quantum mechanics each orbiting electron is represented by a mathematical expression known as a wave function—something like a vibrating guitar string laid out along the path of the electron’s orbit. These waveforms are called orbitals. See also quantum mechanics: Bohr’s theory of the atom.

Electron shells

In the quantum mechanical version of the Bohr atomic model, each of the allowed electron orbits is assigned a quantum number n that runs from 1 (for the orbit closest to the nucleus) to infinity (for orbits very far from the nucleus). All of the orbitals that have the same value of n make up a shell. Inside each shell there may be subshells corresponding to different rates of rotation and orientation of orbitals and the spin directions of the electrons. In general, the farther away from the nucleus a shell is, the more subshells it will have.

(Figure: Electrons fill in shell and subshell levels in a semiregular process, as indicated by the arrows. After filling the first shell level (with just an s subshell), electrons move into the second-level s subshell and then into the p subshell, before starting on another shell level. Because of its lower energy state, the 4s orbital fills before the 3d, and similarly for later s orbitals (for example, 6s fills before 4f).) A short computational sketch of this filling order follows at the end of this section.

This arrangement of possible orbitals explains a great deal about the chemical properties of different atoms. The easiest way to see this is to imagine building up complex atoms by starting with hydrogen and adding one proton and one electron (along with the appropriate number of neutrons) at a time. In hydrogen the lowest-energy orbit—called the ground state—corresponds to the electron located in the shell closest to the nucleus. There are two possible states for an electron in this shell, corresponding to a clockwise spin and a counterclockwise spin (or, in the jargon of physicists, spin up and spin down). The next most-complex atom is helium, which has two protons in its nucleus and two orbiting electrons. These electrons fill the two available states in the lowest shell, producing what is called a filled shell. The next atom is lithium, with three electrons. Because the closest shell is filled, the third electron goes into the next higher shell. This shell has spaces for eight electrons, so that it takes an atom with 10 electrons (neon) to fill the first two levels. The next atom after neon, sodium, has 11 electrons, so that one electron goes into the next highest shell.

(Figure: Periodic table of the elements showing the valence shells.)

In the progression thus far, three atoms—hydrogen, lithium, and sodium—have one electron in the outermost shell. As stated above, it is these outermost electrons that determine the chemical properties of an atom. Therefore, these three elements should have similar properties, as indeed they do. For this reason, they appear in the same column of the periodic table of the elements (see periodic law), and the same principle determines the position of every element in that table.
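The "semiregular" filling order in the figure caption above is summarized by the Madelung rule: subshells fill in order of increasing n + l, with ties broken by smaller n. A minimal Python sketch of that rule (real atoms such as chromium and copper show exceptions):

SUBSHELL_LETTERS = "spdf"

# Enumerate subshells (n, l) through 7p and sort by the Madelung key.
subshells = [(n, l) for n in range(1, 8) for l in range(min(n, 4)) if n + l <= 8]
order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))
print(" ".join(f"{n}{SUBSHELL_LETTERS[l]}" for n, l in order))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p
# Note 4s before 3d, and 6s before 4f, exactly as described above.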
The outermost shell of electrons—called the valence shell—determines the chemical behaviour of an atom, and the number of electrons in this shell depends on how many are left over after all the interior shells are filled.

Atomic bonds

Once the way atoms are put together is understood, the question of how they interact with each other can be addressed—in particular, how they form bonds to create molecules and macroscopic materials. There are three basic ways that the outer electrons of atoms can form bonds:
1. Electrons can be transferred from one atom to another.
2. Electrons can be shared between neighbouring atoms.
3. Electrons can be shared with all atoms in a material.

(Figure: An atom of sodium (Na) donates one of its electrons to an atom of chlorine (Cl) in a chemical reaction. The resulting positive ion (Na+) and negative ion (Cl−) form a stable molecule (sodium chloride, or common table salt) based on this ionic bond.)

The first way gives rise to what is called an ionic bond. Consider as an example an atom of sodium, which has one electron in its outermost orbit, coming near an atom of chlorine, which has seven. Because it takes eight electrons to fill the outermost shell of these atoms, the chlorine atom can be thought of as missing one electron. The sodium atom donates its single valence electron to fill the hole in the chlorine shell, forming a sodium chloride system at a lower total energy level. An atom that has more or fewer electrons in orbit than protons in its nucleus is called an ion. Once the electron from its valence shell has been transferred, the sodium atom will be missing an electron; it therefore will have a positive charge and become a sodium ion. Simultaneously, the chlorine atom, having gained an extra electron, will take on a negative charge and become a chloride ion. The electrical force between these two oppositely charged ions is attractive and locks them together. The resulting sodium chloride compound is a cubic crystal, commonly known as ordinary table salt.

The second bonding strategy listed above is described by quantum mechanics. When two atoms come near each other, they can share a pair of outermost electrons (think of the atoms as tossing the electrons back and forth between them) to form a covalent bond. Covalent bonds are particularly common in organic materials, where molecules often contain long chains of carbon atoms (which have four electrons in their valence shells).

Finally, in some materials each atom gives up an outer electron that then floats freely—in essence, the electron is shared by all of the atoms within the material. The electrons form a kind of sea in which the positive ions float like marbles in molasses. This is called the metallic bond and, as the name implies, it is what holds metals together.

(Figure: In polar covalent bonds, such as that between hydrogen and oxygen atoms, the electrons are not transferred from one atom to the other as they are in an ionic bond. Instead, some outer electrons merely spend more time in the vicinity of the other atom. The effect of this orbital distortion is to induce regional net charges that hold the atoms together, such as in water molecules.)

There are also ways for atoms and molecules to bond without actually exchanging or sharing electrons. In many molecules the internal forces are such that the electrons tend to cluster at one end of the molecule, leaving the other end with a positive charge.
Overall, the molecule has no net electric charge—it is just that the positive and negative charges are found at different places. For example, in water (H2O) the electrons tend to spend most of their time near the oxygen atom, leaving the region of the hydrogen atoms with a positive charge. Molecules whose charges are arranged in this way are called polar molecules. An atom or ion approaching a polar molecule from its negative side, for example, will experience a stronger negative electric force than the more-distant positive electric force. This is why so many substances dissolve in water: the polar water molecule can pull ions out of materials by exerting electric forces.

A special case of polar forces occurs in what is called the hydrogen bond. In many situations, when hydrogen forms a covalent bond with another atom, electrons move toward that atom, and the hydrogen acquires a slight positive charge. The hydrogen, in turn, attracts another atom, thereby forming a kind of bridge between the two. Many important molecules, including DNA, depend on hydrogen bonds for their structure.

Finally, there is a way for a weak bond to form between two electrically neutral atoms. The Dutch physicist Johannes van der Waals first theorized a mechanism for such a bond in 1873, and it is now known as van der Waals forces. When two atoms approach each other, their electron clouds exert repulsive forces on each other, so that the atoms become polarized. In such situations, it is possible that the electrical attraction between the nucleus of one atom and the electrons of the other will overcome the repulsive forces between the electrons, and a weak bond will form. One example of this force can be seen in ordinary graphite pencil lead. In this material, carbon atoms are held together in sheets by strong covalent bonds, but the sheets are held together only by van der Waals forces. When a pencil is drawn across paper, the van der Waals forces break, and sheets of carbon slough off. This is what creates the dark pencil streak.

Conductors and insulators

The way that atoms bond together affects the electrical properties of the materials they form. For example, in materials held together by the metallic bond, electrons float loosely between the metal ions. These electrons will be free to move if an electrical force is applied. For example, if a copper wire is attached across the poles of a battery, the electrons will flow inside the wire. Thus, an electric current flows, and the copper is said to be a conductor.

The flow of electrons inside a conductor is not quite so simple, though. A free electron will be accelerated for a while but will then collide with an ion. In the collision process, some of the energy acquired by the electron will be transferred to the ion. As a result, the ion will move faster, and an observer will notice the wire’s temperature rise. This conversion of electrical energy from the motion of the electrons to heat energy is called electrical resistance. In a material of high resistance, the wire heats up quickly as electric current flows. In a material of low resistance, such as copper wire, most of the energy remains with the moving electrons, so the material is good at moving electrical energy from one point to another. Its excellent conducting property, together with its relatively low cost, is why copper is commonly used in electrical wiring.
The exact opposite situation obtains in materials, such as plastics and ceramics, in which the electrons are all locked into ionic or covalent bonds. When these kinds of materials are placed between the poles of a battery, no current flows—there are simply no electrons free to move. Such materials are called insulators.

Magnetic properties

The magnetic properties of materials are also related to the behaviour of electrons in atoms. An electron in orbit can be thought of as a miniature loop of electric current. According to the laws of electromagnetism, such a loop will create a magnetic field. Each electron in orbit around a nucleus produces its own magnetic field, and the sum of these fields, together with the intrinsic fields of the electrons and the nucleus, determines the magnetic field of the atom. Unless all of these fields cancel out, the atom can be thought of as a tiny magnet.

In most materials these atomic magnets point in random directions, so that the material itself is not magnetic. In some cases—for instance, when randomly oriented atomic magnets are placed in a strong external magnetic field—they line up, strengthening the external field in the process. This phenomenon is known as paramagnetism. In a few metals, such as iron, the interatomic forces are such that the atomic magnets line up over regions a few thousand atoms across. These regions are called domains. In normal iron the domains are oriented randomly, so the material is not magnetic. If iron is put in a strong magnetic field, however, the domains will line up, and they will stay lined up even after the external field is removed. As a result, the piece of iron will acquire a strong magnetic field. This phenomenon is known as ferromagnetism. Permanent magnets are made in this way.

The nucleus

Nuclear forces

The primary constituents of the nucleus are the proton and the neutron, which have approximately equal mass and are much more massive than the electron. For reference, the accepted mass of the proton is 1.672621777 × 10−24 gram, while that of the neutron is 1.674927351 × 10−24 gram. The charge on the proton is equal in magnitude to that on the electron but is opposite in sign, while the neutron has no electrical charge. Both particles have spin 1/2 and are therefore fermions and subject to the Pauli exclusion principle. Both also have intrinsic magnetic fields. The magnetic moment of the proton is 1.410606743 × 10−26 joule per tesla, while that of the neutron is −0.96623647 × 10−26 joule per tesla.

It would be wrong to picture the nucleus as just a collection of protons and neutrons, analogous to a bag of marbles. In fact, much of the effort in physics research during the second half of the 20th century was devoted to studying the various kinds of particles that live out their fleeting lives inside the nucleus. A more-accurate picture of the nucleus would be of a seething cauldron where hundreds of different kinds of particles swarm around the protons and neutrons. It is now believed that these so-called elementary particles are made of still more-elementary objects, which have been given the name of quarks. Modern theories suggest that even the quarks may be made of still more-fundamental entities called strings (see string theory).

The forces that operate inside the nucleus are a mixture of those familiar from everyday life and those that operate only inside the atom. Two protons, for example, will repel each other because of their identical electric charges but will be attracted to each other by gravitation.
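A quick order-of-magnitude comparison of these two forces between a pair of protons, using standard constants (the separation cancels out of the ratio), makes the next point concrete:

K_E = 8.9875517923e9   # Coulomb constant, N*m^2/C^2
G = 6.67430e-11        # gravitational constant, N*m^2/kg^2
Q_P = 1.602176634e-19  # proton charge, C
M_P = 1.672621777e-27  # proton mass, kg (quoted above in grams)

# Both forces fall off as 1/r^2, so r drops out of the ratio entirely.
ratio = (K_E * Q_P**2) / (G * M_P**2)
print(f"F_electric / F_gravity ~ {ratio:.1e}")  # ~1.2e36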
Especially at the scale of elementary particles, the gravitational force is many orders of magnitude weaker than the other fundamental forces, so it is customarily ignored when talking about the nucleus. Nevertheless, because the nucleus stays together in spite of the repulsive electrical force between protons, there must exist a counterforce—which physicists have named the strong force—operating at short range within the nucleus. The strong force has been a major concern in physics research since its existence was first postulated in the 1930s. One more force—the weak force—operates inside the nucleus. The weak force is responsible for some of the radioactive decays of nuclei (see below). The four fundamental forces—strong, electromagnetic, weak, and gravitational—are responsible for every process in the universe. One of the important strains in modern theoretical physics is the belief that, although they seem very different, they are different aspects of a single underlying force (see unified field theory).

Nuclear shell model

Many models describe the way protons and neutrons are arranged inside a nucleus. One of the most successful and simple to understand is the shell model. In this model the protons and neutrons occupy separate systems of shells, analogous to the shells in which electrons are found outside the nucleus. From light to heavy nuclei, the proton and neutron shells are filled (separately) in much the same way as electron shells are filled in an atom. Like the Bohr atomic model, the nucleus has energy levels that correspond to processes in which protons and neutrons make quantum leaps up and down between their allowed orbits. Because energies in the nucleus are so much greater than those associated with electrons, however, the photons emitted or absorbed in these reactions tend to be in the X-ray or gamma-ray portions of the electromagnetic spectrum, rather than the visible light portion.

(Figure: Nuclear binding energies, shown as a function of atomic mass number.)

When a nucleus forms from protons and neutrons, an interesting regularity can be seen: the mass of the nucleus is slightly less than the sum of the masses of the constituent protons and neutrons. This consistent discrepancy is not large—typically only a fraction of a percent—but it is significant. By Albert Einstein’s principles of relativity, this small mass deficit can be converted into energy via the equation E = mc2. Thus, in order to break a nucleus into its constituent protons and neutrons, energy must be supplied to make up this mass deficit. The energy corresponding to the mass deficit is called the binding energy of the nucleus, and, as the name suggests, it represents the energy required to tie the nucleus together. The binding energy varies across the periodic table and is at a maximum for iron, which is thus the most stable element.
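The mass-deficit arithmetic can be checked directly. A small Python sketch for helium-4, with approximate particle masses supplied for illustration rather than taken from the text:

C = 2.99792458e8          # speed of light, m/s
AMU_KG = 1.660538921e-27  # kg per atomic mass unit

m_proton, m_neutron = 1.007276, 1.008665  # masses in amu (approximate)
m_helium4_nucleus = 4.001506              # amu (approximate)

# The assembled nucleus weighs less than its parts; E = mc^2 converts
# the deficit (about 0.8 percent here) into binding energy.
deficit = 2 * m_proton + 2 * m_neutron - m_helium4_nucleus  # ~0.0304 amu
energy_joules = deficit * AMU_KG * C**2
print(f"binding energy ~ {energy_joules / 1.602176634e-13:.1f} MeV")  # ~28 MeV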
Radioactive decay

The nuclei of most everyday atoms are stable—that is, they do not change over time. This statement is somewhat misleading, however, because nuclei that are not stable generally do not last long and hence tend not to be part of everyday experience. In fact, most of the known isotopes of nuclei are not stable; instead, they go through a process called radioactive decay, a process that often changes the identity of the original atom. In radioactive decay a nucleus will remain unchanged for some unpredictable period and then emit a high-speed particle or photon, after which a different nucleus will have replaced the original. Each unstable isotope decays at a different rate; that is, each has a different probability of decaying within a given period of time (see decay constant). A collection of identical unstable nuclei do not all decay at once. Instead, like popcorn popping in a pan, they will decay individually over a period of time. The time that it takes for half of the original sample to decay is called the half-life of the isotope. Half-lives of known isotopes range from microseconds to billions of years. Uranium-238 (238U) has a half-life of about 4.5 billion years, which is approximately the time that has elapsed since the formation of the solar system. Thus, the Earth has about half of the 238U that it had when it was formed.

There are three different types of radioactive decay. In the late 19th century, when radiation was still mysterious, these forms of decay were denoted alpha, beta, and gamma. In alpha decay a nucleus ejects two protons and two neutrons, all locked together in what is called an alpha particle (later discovered to be identical to the nucleus of a normal helium atom). The daughter, or decayed, nucleus will have two fewer protons and two fewer neutrons than the original and hence will be the nucleus of a different chemical element. Once the electrons have rearranged themselves (and the two excess electrons have wandered off), the atom will, in fact, have changed identity. In beta decay one of the neutrons in the nucleus turns into a proton, a fast-moving electron, and a particle called a neutrino. This emission of fast electrons is called beta radiation. The daughter nucleus has one fewer neutron and one more proton than the original and hence, again, is a different chemical element. In gamma decay a proton or neutron makes a quantum leap from a higher to a lower orbit, emitting a high-energy photon in the process. In this case the chemical identity of the daughter nucleus is the same as the original.

When a radioactive nucleus decays, it often happens that the daughter nucleus is radioactive as well. This daughter will decay in turn, and the daughter nucleus of that decay may be radioactive as well. Thus, a collection of identical atoms may, over time, be turned into a mixture of many kinds of atoms because of successive decays. Such decays will continue until stable daughter nuclei are produced. This process, called a decay chain, operates everywhere in nature. For example, uranium-238 decays with a half-life of 4.5 billion years into thorium-234, which decays in 24 days into protactinium-234, which also decays. This process continues until it gets to lead-206, which is stable (see uranium-thorium-lead dating). Dangerous elements such as radium and radon are continually produced in the Earth’s crust as intermediary steps in decay chains.
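Numerically, the half-life rule says that after a time t a fraction (1/2)^(t/T) of the original nuclei remains, where T is the half-life. A two-line Python check of the uranium-238 claim above:

def fraction_remaining(t, half_life):
    """Fraction of an unstable isotope left after time t."""
    return 0.5 ** (t / half_life)

# Earth's age is roughly one 238U half-life (~4.5 billion years):
print(fraction_remaining(4.5e9, 4.5e9))      # 0.5: about half remains
print(fraction_remaining(2 * 4.5e9, 4.5e9))  # 0.25 after two half-lives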
Nuclear energy

It is almost impossible to have lived at any time since the mid-20th century and not be aware that energy can be derived from the atomic nucleus. The basic physical principle behind this fact is that the total mass present after a nuclear reaction is less than before the reaction. This difference in mass, via the equation E = mc2, is converted into what is called nuclear energy.

(Figure: Sequence of events in the fission of a uranium nucleus by a neutron.)

There are two types of nuclear processes that can produce energy—nuclear fission and nuclear fusion. In fission a heavy nucleus (such as uranium) is split into a collection of lighter nuclei and fast-moving particles. The energy at the end typically appears in the kinetic energy of the final particles. Nuclear fission is used in nuclear reactors to produce commercial electricity. It depends on the fact that a particular isotope of uranium (235U) behaves in a particular way when it is hit by a neutron. The nucleus breaks apart and emits several particles. Included in the debris of the fission are two or three more free neutrons that can produce fission in other nuclei in a chain reaction. This chain reaction can be controlled and used to heat water into steam, which can then be used to turn turbines in an electrical generator.

Fusion refers to a process in which two or more light nuclei come together to form a heavier nucleus. The most common fusion process in nature is one in which four protons come together to form a helium nucleus (two protons and two neutrons) and some other particles. This is the process by which energy is generated in stars. Scientists have not yet learned to produce controllable, commercially useful nuclear fusion on Earth, which remains a goal for the future.

Development of atomic theory

The concept of the atom that Western scientists accepted in broad outline from the 1600s until about 1900 originated with Greek philosophers in the 5th century bce. Their speculation about a hard, indivisible fundamental particle of nature was replaced slowly by a scientific theory supported by experiment and mathematical deduction. It was more than 2,000 years before modern physicists realized that the atom is indeed divisible and that it is not hard, solid, or immutable.

The atomic philosophy of the early Greeks

Leucippus of Miletus (5th century bce) is thought to have originated the atomic philosophy. His famous disciple, Democritus of Abdera, named the building blocks of matter atomos, meaning literally “indivisible,” about 430 bce. Democritus believed that atoms were uniform, solid, hard, incompressible, and indestructible and that they moved in infinite numbers through empty space until stopped. Differences in atomic shape and size determined the various properties of matter. In Democritus’s philosophy, atoms existed not only for matter but also for such qualities as perception and the human soul. For example, sourness was caused by needle-shaped atoms, while the colour white was composed of smooth-surfaced atoms. The atoms of the soul were considered to be particularly fine. Democritus developed his atomic philosophy as a middle ground between two opposing Greek theories about reality and the illusion of change. He argued that matter was subdivided into indivisible and immutable particles that created the appearance of change when they joined and separated from others. The philosopher Epicurus of Samos (341–270 bce) used Democritus’s ideas to try to quiet the fears of superstitious Greeks. According to Epicurus’s materialistic philosophy, the entire universe was composed exclusively of atoms and void, and so even the gods were subject to natural laws.

Most of what is known about the atomic philosophy of the early Greeks comes from Aristotle’s attacks on it and from a long poem, De rerum natura (“On the Nature of Things”), which the Latin poet and philosopher Titus Lucretius Carus (c. 95–55 bce) wrote to popularize its ideas. The Greek atomic theory is significant historically and philosophically, but it has no scientific value. It was not based on observations of nature, measurements, tests, or experiments.
Instead, the Greeks used mathematics and reason almost exclusively when they wrote about physics. Like the later theologians of the Middle Ages, they wanted an all-encompassing theory to explain the universe, not merely a detailed experimental view of a tiny portion of it. Science constituted only one aspect of their broad philosophical system. Thus, Plato and Aristotle attacked Democritus’s atomic theory on philosophical grounds rather than on scientific ones. Plato valued abstract ideas more than the physical world and rejected the notion that attributes such as goodness and beauty were “mechanical manifestations of material atoms.” Where Democritus believed that matter could not move through space without a vacuum and that light was the rapid movement of particles through a void, Aristotle rejected the existence of vacuums because he could not conceive of bodies falling equally fast through a void. Aristotle’s conception prevailed in medieval Christian Europe; its science was based on revelation and reason, and the Roman Catholic theologians rejected Democritus as materialistic and atheistic.

The emergence of experimental science

De rerum natura, which was rediscovered in the 15th century, helped fuel a 17th-century debate between orthodox Aristotelian views and the new experimental science. The poem was printed in 1649 and popularized by Pierre Gassendi, a French priest who tried to separate Epicurus’s atomism from its materialistic background by arguing that God created atoms. Soon after the Italian scientist Galileo Galilei expressed his belief that vacuums can exist (1638), scientists began studying the properties of air and partial vacuums to test the relative merits of Aristotelian orthodoxy and the atomic theory. The experimental evidence about air was only gradually separated from this philosophical controversy.

(Figure: Demonstration of Boyle’s law showing that for a given mass, at constant temperature, the pressure times the volume is a constant.)

The Anglo-Irish chemist Robert Boyle began his systematic study of air in 1658 after he learned that Otto von Guericke, a German physicist and engineer, had invented an improved air pump four years earlier. In 1662 Boyle published the first physical law expressed in the form of an equation that describes the functional dependence of two variable quantities. This formulation became known as Boyle’s law. From the beginning, Boyle wanted to analyze the elasticity of air quantitatively, not just qualitatively, and to separate the particular experimental problem about air’s “spring” from the surrounding philosophical issues. Pouring mercury into the open end of a closed J-shaped tube, Boyle forced the air in the short side of the tube to contract under the pressure of the mercury on top. By doubling the height of the mercury column, he roughly doubled the pressure and halved the volume of air. By tripling the pressure, he cut the volume of air to a third, and so on. This behaviour can be formulated mathematically in the relation PV = P′V′, where P and V are the pressure and volume under one set of conditions and P′ and V′ represent them under different conditions. Boyle’s law says that pressure and volume are inversely related for a given quantity of gas. Although it is only approximately true for real gases, Boyle’s law is an extremely useful idealization that played an important role in the development of atomic theory.
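In code, Boyle’s law is a one-line rearrangement: V′ = PV/P′. A small Python sketch reproducing the doubling and tripling observations from the J-tube experiment (the units here are arbitrary):

def new_volume(p, v, p_new):
    """Volume after a pressure change at constant temperature: PV = P'V'."""
    return p * v / p_new

print(new_volume(p=1.0, v=60.0, p_new=2.0))  # 30.0: doubling P halves V
print(new_volume(p=1.0, v=60.0, p_new=3.0))  # 20.0: tripling P cuts V to a third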
Soon after his air-pressure experiments, Boyle wrote that all matter is composed of solid particles arranged into molecules to give material its different properties. He explained that all things are made of

one Catholick Matter common to them all, and…differ but in the shape, size, motion or rest, and texture of the small parts they consist of.

In France Boyle’s law is called Mariotte’s law after the physicist Edme Mariotte, who discovered the empirical relationship independently in 1676. Mariotte realized that the law holds true only under constant temperatures; otherwise, the volume of gas expands when heated or contracts when cooled.

Forty years later Isaac Newton expressed a typical 18th-century view of the atom that was similar to that of Democritus, Gassendi, and Boyle. In the last query in his book Opticks (1704), Newton stated:

All these things being considered, it seems probable to me that God in the Beginning form’d Matter in solid, massy, hard, impenetrable, moveable Particles, of such Sizes and Figures, and with such other Properties, and in such Proportion to Space, as most conduced to the End for which he form’d them; and that these primitive Particles being Solids, are incomparably harder than any porous Bodies compounded of them; even so very hard, as never to wear or break in pieces; no ordinary Power being able to divide what God himself made one in the first Creation.

By the end of the 18th century, chemists were just beginning to learn how chemicals combine. In 1794 Joseph-Louis Proust of France published his law of definite proportions (also known as Proust’s law). He stated that the components of chemical compounds always combine in the same proportions by weight. For example, Proust found that no matter where he got his samples of the compound copper carbonate, they were composed by weight of five parts copper, four parts oxygen, and one part carbon.

The beginnings of modern atomic theory

Experimental foundation of atomic chemistry

The English chemist and physicist John Dalton extended Proust’s work and converted the atomic philosophy of the Greeks into a scientific theory between 1803 and 1808. His book A New System of Chemical Philosophy (Part I, 1808; Part II, 1810) was the first application of atomic theory to chemistry. It provided a physical picture of how elements combine to form compounds and a phenomenological reason for believing that atoms exist. His work, together with that of Joseph-Louis Gay-Lussac of France and Amedeo Avogadro of Italy, provided the experimental foundation of atomic chemistry.

On the basis of the law of definite proportions, Dalton deduced the law of multiple proportions, which stated that when two elements form more than one compound by combining in more than one proportion by weight, the weight of one element in one of the compounds is in simple, integer ratios to its weights in the other compounds. For example, Dalton knew that oxygen and carbon can combine to form two different compounds and that carbon dioxide (CO2) contains twice as much oxygen by weight as carbon monoxide (CO). In this case the ratio of oxygen in one compound to the amount of oxygen in the other is the simple integer ratio 2:1. Although Dalton called his theory “modern” to differentiate it from Democritus’s philosophy, he retained the Greek term atom to honour the ancients. Dalton had begun his atomic studies by wondering why the different gases in the atmosphere do not separate, with the heaviest on the bottom and the lightest on the top.
He decided that atoms are not infinite in variety as had been supposed and that they are limited to one of a kind for each element. Proposing that all the atoms of a given element have the same fixed mass, he concluded that elements react in definite proportions to form compounds because their constituent atoms react in definite proportion to produce compounds. He then tried to figure out the masses for well-known compounds. To do so, Dalton made a faulty but understandable assumption that the simplest hypothesis about atomic combinations was true. He maintained that the molecules of an element would always be single atoms. Thus, if two elements form only one compound, he believed that one atom of one element combined with one atom of another element. For example, describing the formation of water, he said that one atom of hydrogen and one of oxygen would combine to form HO instead of H2O.

Dalton’s mistaken belief that atoms join together by attractive forces was accepted and formed the basis of most of 19th-century chemistry. As long as scientists worked with masses as ratios, a consistent chemistry could be developed because they did not need to know whether the atoms were separate or joined together as molecules.

Gay-Lussac soon took the relationship between chemical masses implied by Dalton’s atomic theory and expanded it to volumetric relationships of gases. In 1809 he published two observations about gases that have come to be known as Gay-Lussac’s law of combining gases. The first part of the law says that when gases combine chemically, they do so in numerically simple volume ratios. Gay-Lussac illustrated this part of his law with three oxides of nitrogen. The compound NO has equal parts of nitrogen and oxygen by volume. Similarly, in the compound N2O two parts by volume of nitrogen combine with one part of oxygen, and he found corresponding volumes of nitrogen and oxygen in NO2. Thus, Gay-Lussac’s law relates volumes of the chemical constituents within a compound, unlike Dalton’s law of multiple proportions, which relates only one constituent of a compound with the same constituent in other compounds.

The second part of Gay-Lussac’s law states that if gases combine to form gases, the volumes of the products are also in simple numerical ratios to the volume of the original gases. This part of the law was illustrated by the combination of carbon monoxide and oxygen to form carbon dioxide. Gay-Lussac noted that the volume of the carbon dioxide is equal to the volume of carbon monoxide and is twice the volume of oxygen. He did not realize, however, that only half as much oxygen is needed because the oxygen molecule splits in two to give a single atom to each molecule of carbon monoxide. In his “Mémoire sur la combinaison des substances gazeuses, les unes avec les autres” (1809; “Memoir on the Combination of Gaseous Substances with Each Other”), Gay-Lussac wrote:

Thus it appears evident to me that gases always combine in the simplest proportions when they act on one another; and we have seen in reality in all the preceding examples that the ratio of combination is 1 to 1, 1 to 2 or 1 to 3.…Gases…in whatever proportions they may combine, always give rise to compounds whose elements by volume are multiples of each other.…Not only, however, do gases combine in very simple proportions, as we have just seen, but the apparent contraction of volume which they experience on combination has also a simple relation to the volume of the gases, or at least to one of them.
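Dalton’s law of multiple proportions, mentioned a few paragraphs above, can be verified from modern atomic masses for the CO/CO2 example; the masses used here are approximate:

C_MASS, O_MASS = 12.011, 15.999  # approximate atomic masses

def oxygen_per_carbon(n_carbon, n_oxygen):
    """Mass of oxygen per unit mass of carbon in a compound C_n O_m."""
    return (n_oxygen * O_MASS) / (n_carbon * C_MASS)

# Compare carbon dioxide (CO2) with carbon monoxide (CO):
ratio = oxygen_per_carbon(1, 2) / oxygen_per_carbon(1, 1)
print(round(ratio, 10))  # 2.0: the simple whole-number ratio Dalton observed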
Gay-Lussac’s work raised the question of whether atoms differ from molecules and, if so, how many atoms and molecules are in a volume of gas. Amedeo Avogadro, building on Dalton’s efforts, solved the puzzle, but his work was ignored for 50 years. In 1811 Avogadro proposed two hypotheses: (1) The atoms of elemental gases may be joined together in molecules rather than existing as separate atoms, as Dalton believed. (2) Equal volumes of gases contain equal numbers of molecules. These hypotheses explained why only half a volume of oxygen is necessary to combine with a volume of carbon monoxide to form carbon dioxide. Each oxygen molecule has two atoms, and each atom of oxygen joins one molecule of carbon monoxide.

Until the early 1860s, however, the allegiance of chemists to another concept espoused by the eminent Swedish chemist Jöns Jacob Berzelius blocked acceptance of Avogadro’s ideas. (Berzelius was influential among chemists because he had determined the atomic weights of many elements extremely accurately.) Berzelius contended incorrectly that all atoms of a similar element repel each other because they have the same electric charge. He thought that only atoms with opposite charges could combine to form molecules.

Because early chemists did not know how many atoms were in a molecule, their chemical notation systems were in a state of chaos by the mid-19th century. Berzelius and his followers, for example, used the general formula MO for the chief metallic oxides, while others assigned the formula used today, M2O. A single formula stood for different substances, depending on the chemist: H2O2 was water or hydrogen peroxide; C2H4 was methane or ethylene. Proponents of the system used today based their chemical notation on an empirical law formulated in 1819 by the French scientists Pierre-Louis Dulong and Alexis-Thérèse Petit concerning the specific heat of elements. According to the Dulong-Petit law, the specific heat of all elements is the same on a per atom basis. This law, however, was found to have many exceptions and was not fully understood until the development of quantum theory in the 20th century. To resolve such problems of chemical notation, the Sicilian chemist Stanislao Cannizzaro revived Avogadro’s ideas in 1858 and expounded them at the First International Chemical Congress, which met in Karlsruhe, Germany, in 1860. Lothar Meyer, a noted German chemistry professor, wrote later that when he heard Avogadro’s theory at the congress, “It was as though scales fell from my eyes, doubt vanished, and was replaced by a feeling of peaceful certainty.” Within a few years, Avogadro’s hypotheses were widely accepted in the world of chemistry.

Atomic weights and the periodic table

As more and more elements were discovered during the 19th century, scientists began to wonder how the physical properties of the elements were related to their atomic weights. During the 1860s several schemes were suggested. The Russian chemist Dmitry Ivanovich Mendeleyev based his system (see Mendeleyev’s periodic table of 1869) on the atomic weights of the elements as determined by Avogadro’s theory of diatomic molecules. In his paper of 1869 introducing the periodic law, he credited Cannizzaro for using “unshakeable and indubitable” methods to determine atomic weights. He wrote:
The elements, if arranged according to their atomic weights, show a distinct periodicity of their properties.…Elements exhibiting similarities in their chemical behavior have atomic weights which are approximately equal (as in the case of Pt, Ir, Os) or they possess atomic weights which increase in a uniform manner (as in the case of K, Rb, Cs).

Skipping hydrogen because it is anomalous, Mendeleyev arranged the 63 elements known to exist at the time into six groups according to valence. Valence, which is the combining power of an element, determines the proportions of the elements in a compound. For example, H2O combines oxygen with a valence of 2 and hydrogen with a valence of 1. Recognizing that chemical qualities change gradually as atomic weight increases, Mendeleyev predicted that a new element must exist wherever there was a gap in atomic weights between adjacent elements. His system was thus a research tool and not merely a system of classification. Mendeleyev’s periodic table raised an important question, however, for future atomic theory to answer: Where does the pattern of atomic weights come from?

Kinetic theory of gases

Whereas Avogadro’s theory of diatomic molecules was ignored for 50 years, the kinetic theory of gases was rejected for more than a century. The kinetic theory relates the independent motion of molecules to the mechanical and thermal properties of gases—namely, their pressure, volume, temperature, viscosity, and heat conductivity. Three men—Daniel Bernoulli in 1738, John Herapath in 1820, and John James Waterston in 1845—independently developed the theory. The kinetic theory of gases, like the theory of diatomic molecules, was a simple physical idea that chemists ignored in favour of an elaborate explanation of the properties of gases.

(Figure: As conceived by Daniel Bernoulli in Hydrodynamica (1738), gases consist of numerous particles in rapid, random motion. He assumed that the pressure of a gas is produced by the direct impact of the particles on the walls of the container.)

Bernoulli, a Swiss mathematician and scientist, worked out the first quantitative mathematical treatment of the kinetic theory in 1738 by picturing gases as consisting of an enormous number of particles in very fast, chaotic motion. He derived Boyle’s law by assuming that gas pressure is caused by the direct impact of particles on the walls of their container. He understood the difference between heat and temperature, realizing that heat makes gas particles move faster and that temperature merely measures the propensity of heat to flow from one body to another. In spite of its accuracy, Bernoulli’s theory remained virtually unknown during the 18th century and early 19th century for several reasons. First, chemistry was more popular than physics among scientists of the day, and Bernoulli’s theory involved mathematics. Second, Newton’s reputation ensured the success of his more-comprehensible theory that gas atoms repel one another. Finally, Joseph Black, another noted British scientist, developed the caloric theory of heat, which proposed that heat was an invisible substance permeating matter. At the time, the fact that heat could be transmitted by light seemed a persuasive argument that heat and motion had nothing to do with each other.

Herapath, an English amateur physicist ignored by his contemporaries, published his version of the kinetic theory in 1821.
He also derived an empirical relation akin to Boyle’s law but did not understand correctly the role of heat and temperature in determining the pressure of a gas.

Waterston’s efforts met with a similar fate. Waterston was a Scottish civil engineer and amateur physicist who could not even get his work published by the scientific community, which had become increasingly professional throughout the 19th century. Nevertheless, Waterston made the first statement of the law of equipartition of energy, according to which all kinds of particles have equal amounts of thermal energy. He derived practically all the consequences of the fact that the pressure exerted by a gas is related to the number of molecules per cubic centimetre, their mass, and their mean squared velocity. He derived the basic equation of kinetic theory, which in modern form reads P = (1/3)NMV2. Here P is the pressure of a volume of gas, N is the number of molecules per unit volume, M is the mass of the molecule, and V2 is the average velocity squared of the molecules. Recognizing that the kinetic energy of a molecule is proportional to MV2 and that the heat energy of a gas is proportional to the temperature, Waterston expressed the law as PV/T = a constant.

During the late 1850s, a decade after Waterston had formulated his law, the scientific community was finally ready to accept a kinetic theory of gases. The studies of heat undertaken by the English physicist James Prescott Joule during the 1840s had shown that heat is a form of energy. This work, together with the law of the conservation of energy that he helped to establish, had persuaded scientists to discard the caloric theory by the mid-1850s. The caloric theory had required that a substance contain a definite amount of caloric (i.e., a hypothetical weightless fluid) to be turned into heat; however, experiments showed that any amount of heat can be generated in a substance by putting enough energy into it. Thus, there was no point in hypothesizing such a special fluid as caloric.

At first, after the collapse of the caloric theory, physicists had nothing with which to replace it. Joule, however, discovered Herapath’s kinetic theory and used it in 1851 to calculate the velocity of hydrogen molecules. Then the German physicist Rudolf Clausius developed the kinetic theory mathematically in 1857, and the scientific world took note. Clausius and two other physicists, the Scot James Clerk Maxwell and the Austrian Ludwig Eduard Boltzmann (who developed the kinetic theory of gases in the 1860s), introduced sophisticated mathematics into physics for the first time since Newton. In his 1860 paper “Illustrations of the Dynamical Theory of Gases,” Maxwell used probability theory to produce his famous distribution function for the velocities of gas molecules. Employing Newtonian laws of mechanics, he also provided a mathematical basis for Avogadro’s theory. Maxwell, Clausius, and Boltzmann assumed that gas particles were in constant motion, that they were tiny compared with their space, and that their interactions were very brief. They then related the motion of the particles to pressure, volume, and temperature. Interestingly, none of the three committed himself on the nature of the particles.
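The kinetic-theory pressure equation in its modern form can be evaluated directly. The Python sketch below checks that it reproduces roughly atmospheric pressure for nitrogen at room temperature; the molecular mass and number density are approximate values chosen for illustration:

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 293.0           # room temperature, K
m = 4.65e-26        # mass of an N2 molecule, kg (approximate)
n = 2.5e25          # molecules per cubic metre at ~1 atm (approximate)

v_sq = 3 * K_B * T / m      # equipartition: (1/2)m<v^2> = (3/2)kT
p = n * m * v_sq / 3.0      # P = (1/3) N M V^2 in the article's notation
print(f"P ~ {p:.2e} Pa")    # ~1.0e5 Pa, about one atmosphere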
Studies of the properties of atoms

Size of atoms

The first modern estimates of the size of atoms and the numbers of atoms in a given volume were made by the Austrian chemist Joseph Loschmidt in 1865. Loschmidt used the results of kinetic theory and some rough estimates to do his calculation. The size of the atoms and the distance between them in the gaseous state are related both to the contraction of gas upon liquefaction and to the mean free path traveled by molecules in a gas. The mean free path, in turn, can be found from the thermal conductivity and diffusion rates in the gas. Loschmidt calculated the size of the atom and the spacing between atoms by finding a solution common to these relationships. His result for Avogadro’s number is remarkably close to the present accepted value of about 6.022 × 10²³. The precise definition of Avogadro’s number is the number of atoms in 12 grams of the carbon isotope C-12. Loschmidt’s result for the diameter of an atom was approximately 10−8 cm.

Much later, in 1908, the French physicist Jean Perrin used Brownian motion to determine Avogadro’s number. Brownian motion, first observed in 1827 by the Scottish botanist Robert Brown, is the continuous movement of tiny particles suspended in water. Their movement is caused by the thermal motion of water molecules bumping into the particles. Perrin’s argument for determining Avogadro’s number makes an analogy between particles in the liquid and molecules in the atmosphere. The thinning of air at high altitudes depends on the balance between the gravitational force pulling the molecules down and their thermal motion forcing them up. The relationship between the weight of the particles and the height of the atmosphere would be the same for Brownian particles suspended in water. Perrin counted particles of gum mastic at different heights in his water sample and inferred the mass of atoms from the rate at which the counts decreased. He then divided the result into the molar weight of atoms to determine Avogadro’s number. After Perrin, few scientists could disbelieve the existence of atoms.
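Perrin's barometric argument can be sketched in a few lines. The particle mass and counting heights below are hypothetical stand-ins, not Perrin's data; the point is only that inverting the exponential thinning law recovers Boltzmann's constant, and hence Avogadro's number, from countable quantities.

```python
import math

# A sketch of Perrin's reasoning with invented illustrative numbers.
# Suspended particles thin out with height like a miniature atmosphere:
#   n(h) = n0 * exp(-m_eff * g * h / (k_B * T))
# Counting particles at two heights fixes k_B, and then N_A = R / k_B.

R = 8.314          # gas constant, J/(mol*K), known from macroscopic chemistry
T = 293.0          # K
g = 9.81           # m/s^2
m_eff = 2.0e-17    # buoyancy-corrected particle mass, kg (hypothetical)
dh = 1.0e-4        # height difference between counting planes, m (hypothetical)

k_B_true = 1.381e-23
ratio = math.exp(-m_eff * g * dh / (k_B_true * T))  # the ratio Perrin would count

# Invert the barometric formula to recover k_B from the counted ratio
k_B_inferred = -m_eff * g * dh / (T * math.log(ratio))
N_A = R / k_B_inferred
print(f"inferred Avogadro's number ~ {N_A:.3e} per mole")   # ~6.02e23
```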
Electric properties of atoms

While atomic theory was set back by the failure of scientists to accept simple physical ideas like the diatomic molecule and the kinetic theory of gases, it was also delayed by the preoccupation of physicists with mechanics for almost 200 years, from Newton to the 20th century. Nevertheless, several 19th-century investigators, working in the relatively ignored fields of electricity, magnetism, and optics, provided important clues about the interior of the atom. The studies in electrodynamics made by the English physicist Michael Faraday and those of Maxwell indicated for the first time that something existed apart from palpable matter, and data obtained by Gustav Robert Kirchhoff of Germany about elemental spectral lines raised questions that would be answered only in the 20th century by quantum mechanics.

Until Faraday’s electrolysis experiments, scientists had no conception of the nature of the forces binding atoms together in a molecule. Faraday concluded that electrical forces existed inside the molecule after he had produced an electric current and a chemical reaction in a solution with the electrodes of a voltaic cell. No matter what solution or electrode material he used, a fixed quantity of current sent through an electrolyte always caused a specific amount of material to form on an electrode of the electrolytic cell. Faraday concluded that each ion of a given chemical compound has exactly the same charge. Later he discovered that the ionic charges are integral multiples of a single unit of charge, never fractions. On the practical level, Faraday did for charge what Dalton had done for the chemical combination of atomic masses. That is to say, Faraday demonstrated that it takes a definite amount of charge to convert an ion of an element into an atom of the element and that the amount of charge depends on the element used. The unit of charge that releases one gram-equivalent weight of a simple ion is called the faraday in his honour. For example, one faraday of charge passing through water releases one gram of hydrogen and eight grams of oxygen.

In this manner, Faraday gave scientists a rather precise value for the ratios of the masses of atoms to the electric charges of ions. The ratio of the mass of the hydrogen atom to the charge of the electron was found to be 1.035 × 10−8 kilogram per coulomb. Faraday did not know the size of his electrolytic unit of charge in units such as coulombs any more than Dalton knew the magnitude of his unit of atomic weight in grams. Nevertheless, scientists could determine the ratio of these units easily. More significantly, Faraday’s work was the first to imply the electrical nature of matter and the existence of subatomic particles and a fundamental unit of charge. Faraday wrote:

The atoms of matter are in some way endowed or associated with electrical powers, to which they owe their most striking qualities, and amongst them their mutual chemical affinity.

Faraday did not, however, conclude that atoms cause electricity.
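In modern terms Faraday's electrolytic unit is simply Avogadro's number of elementary charges. A minimal sketch of the bookkeeping, using present-day constants:

```python
# The faraday in modern units: Avogadro's number times the elementary charge.

N_A = 6.022e23          # Avogadro's number, 1/mol
e = 1.602e-19           # elementary charge, C
F = N_A * e
print(f"Faraday constant ~ {F:.0f} C/mol")   # ~96,485 C/mol

# The example from the text: one faraday passed through water releases one
# gram-equivalent weight each of hydrogen and oxygen (molar mass / valence).
equiv_weight = {"hydrogen": 1.008 / 1, "oxygen": 15.999 / 2}
for element, grams in equiv_weight.items():
    print(f"one faraday releases ~ {grams:.3f} g of {element}")
```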
Light and spectral lines

In 1865 Maxwell unified the laws of electricity and magnetism in his publication “A Dynamical Theory of the Electromagnetic Field.” In this paper he concluded that light is an electromagnetic wave. His theory was confirmed by the German physicist Heinrich Hertz, who produced radio waves with sparks in 1887. With light understood as an electromagnetic wave, Maxwell’s theory could be applied to the emission of light from atoms. The theory failed, however, to describe spectral lines and the fact that atoms do not lose all their energy when they radiate light. The problem was not with Maxwell’s theory of light itself but rather with its description of the oscillating electron currents generating light. Only quantum mechanics could explain this behaviour (see below The laws of quantum mechanics).

By far the richest clues about the structure of the atom came from spectral line series. Mounting a particularly fine prism on a telescope, the German physicist and optician Joseph von Fraunhofer had discovered between 1814 and 1824 hundreds of dark lines in the spectrum of the Sun. He labeled the most prominent of these lines with the letters A through G. Together they are now called Fraunhofer lines.

[Figure: the visible solar spectrum, ranging from the shortest visible wavelengths (violet light, at 400 nm) to the longest (red light, at 700 nm), with prominent Fraunhofer lines marking wavelengths at which light is absorbed by elements present in the atmosphere of the Sun]

A generation later Kirchhoff heated different elements to incandescence in order to study the different coloured vapours emitted. Observing the vapours through a spectroscope, he discovered that each element has a unique and characteristic pattern of spectral lines. Each element produces the same set of identifying lines, even when it is combined chemically with other elements. In 1859 Kirchhoff and the German chemist Robert Wilhelm Bunsen discovered two new elements—cesium and rubidium—by first observing their spectral lines.

Johann Jakob Balmer, a Swiss secondary-school teacher with a penchant for numerology, studied hydrogen’s spectral lines and found a constant relationship between the wavelengths of the element’s four visible lines. In 1885 he published a generalized mathematical formula for all the lines of hydrogen.

[Figure: the Balmer series of hydrogen as seen by a low-resolution spectrometer]

The Swedish physicist Johannes Rydberg extended Balmer’s work in 1890 and found a general rule applicable to many elements. Soon more series were discovered elsewhere in the spectrum of hydrogen and in the spectra of other elements as well. Stated in terms of the frequency of the light rather than its wavelength, the formula may be expressed:

ν = R(1/m² − 1/n²)

Here ν is the frequency of the light, n and m are integers, and R is the Rydberg constant. In the Balmer lines m is equal to 2 and n takes on the values 3, 4, 5, and 6.
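The Rydberg formula above can be checked directly. The sketch evaluates the Balmer series (m = 2) and converts frequencies to wavelengths; the numerical value of the Rydberg frequency is the modern one.

```python
# The Rydberg formula nu = R * (1/m^2 - 1/n^2), evaluated for the Balmer
# series and reported as wavelengths.

c = 2.998e8        # speed of light, m/s
R = 3.29e15        # Rydberg constant expressed as a frequency, Hz

m = 2              # Balmer series
for n in (3, 4, 5, 6):           # the four visible lines Balmer fitted
    nu = R * (1 / m**2 - 1 / n**2)
    wavelength_nm = c / nu * 1e9
    print(f"n = {n}: {wavelength_nm:.0f} nm")
# prints ~656, 486, 434, 410 nm: the red, blue-green, and two violet lines
```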
Discovery of electrons

During the 1880s and ’90s scientists searched cathode rays for the carrier of the electrical properties in matter. Their work culminated in the discovery by English physicist J.J. Thomson of the electron in 1897. The existence of the electron showed that the 2,000-year-old conception of the atom as a homogeneous particle was wrong and that in fact the atom has a complex structure.

Cathode-ray studies began in 1854 when Heinrich Geissler, a glassblower and technical assistant to the German physicist Julius Plücker, improved the vacuum tube. Plücker discovered cathode rays in 1858 by sealing two electrodes inside the tube, evacuating the air, and forcing electric current between the electrodes. He found a green glow on the wall of his glass tube and attributed it to rays emanating from the cathode. In 1869, with better vacuums, Plücker’s pupil Johann W. Hittorf saw a shadow cast by an object placed in front of the cathode. The shadow proved that the cathode rays originated from the cathode. The English physicist and chemist William Crookes investigated cathode rays in 1879 and found that they were bent by a magnetic field; the direction of deflection suggested that they were negatively charged particles. As the luminescence did not depend on what gas had been in the vacuum or what metal the electrodes were made of, he surmised that the rays were a property of the electric current itself. As a result of Crookes’s work, cathode rays were widely studied, and the tubes came to be called Crookes tubes. Although Crookes believed that the particles were electrified charged particles, his work did not settle the issue of whether cathode rays were particles or radiation similar to light.

By the late 1880s the controversy over the nature of cathode rays had divided the physics community into two camps. Most French and British physicists, influenced by Crookes, thought that cathode rays were electrically charged particles because they were affected by magnets. Most German physicists, on the other hand, believed that the rays were waves because they traveled in straight lines and were unaffected by gravity. A crucial test of the nature of the cathode rays was how they would be affected by electric fields. Heinrich Hertz, the aforementioned German physicist, reported that the cathode rays were not deflected when they passed between two oppositely charged plates in an 1892 experiment.

In England J.J. Thomson thought Hertz’s vacuum might have been faulty and that residual gas might have reduced the effect of the electric field on the cathode rays. Thomson repeated Hertz’s experiment with a better vacuum in 1897. He directed the cathode rays between two parallel aluminum plates to the end of a tube where they were observed as luminescence on the glass. When the top aluminum plate was negative, the rays moved down; when the upper plate was positive, the rays moved up. The deflection was proportional to the difference in potential between the plates. With both magnetic and electric deflections observed, it was clear that cathode rays were negatively charged particles.

Thomson’s discovery established the particulate nature of electricity. Accordingly, he called his particles electrons. From the magnitude of the electrical and magnetic deflections, Thomson could calculate the ratio of mass to charge for the electrons. This ratio was known for atoms from electrochemical studies. Comparing the two ratios, he discovered that the mass of the electron was very small, merely 1/1,836 that of a hydrogen ion. When scientists realized that an electron was more than 1,000 times lighter than the smallest atom, they understood how cathode rays could penetrate metal sheets and how electric current could flow through copper wires. In deriving the mass-to-charge ratio, Thomson had calculated the electron’s velocity. It was 1/10 the speed of light, thus amounting to roughly 30,000 km (18,000 miles) per second. Thomson emphasized that

we have in the cathode rays matter in a new state, a state in which the subdivision of matter is carried very much further than in the ordinary gaseous state; a state in which all matter, that is, matter derived from different sources such as hydrogen, oxygen, etc., is of one and the same kind; this matter being the substance from which all the chemical elements are built up.

Thus, the electron was the first subatomic particle identified, the smallest and the fastest bit of matter known at the time.

In 1909 the American physicist Robert Andrews Millikan greatly improved a method employed by Thomson for measuring the electron charge directly. In Millikan’s oil-drop experiment, he produced microscopic oil droplets and observed them falling in the space between two electrically charged plates. Some of the droplets became charged and could be suspended by a delicate adjustment of the electric field. Millikan knew the weight of the droplets from their rate of fall when the electric field was turned off. From the balance of the gravitational and electrical forces, he could determine the charge on the droplets. All the measured charges were integral multiples of a quantity that in contemporary units is 1.602 × 10−19 coulomb. Millikan’s electron-charge experiment was the first to detect and measure the effect of an individual subatomic particle. Besides confirming the particulate nature of electricity, his experiment also supported previous determinations of Avogadro’s number. Avogadro’s number times the unit of charge gives Faraday’s constant, the amount of charge required to electrolyze one mole of a chemical ion.
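Millikan's inference can be illustrated with a few lines of arithmetic. The droplet charges below are invented for illustration (they are not Millikan's measurements); the point is the clustering at integer multiples of one unit.

```python
# Hypothetical droplet charges, each close to an integer multiple of e.
e = 1.602e-19   # elementary charge, C
measured = [1.60e-19, 3.21e-19, 4.80e-19, 8.02e-19, 11.2e-19]

for q in measured:
    n = round(q / e)                    # nearest integer multiple
    residual = abs(q - n * e) / e       # how far off, in units of e
    print(f"q = {q:.2e} C  ->  {n} x e  (off by {residual:.2%} of e)")
```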
Identification of positive ions

In addition to electrons, positively charged particles also emanate from the anode in an energized Crookes tube. The German physicist Wilhelm Wien analyzed these positive rays in 1898 and found that the particles have a mass-to-charge ratio more than 1,000 times larger than that of the electron. Because this ratio is comparable to the mass-to-charge ratio of the residual atoms in the discharge tubes, scientists suspected that the rays were actually ions from the gases in the tube.

In 1913 Thomson refined Wien’s apparatus to separate different ions and measure their mass-to-charge ratio on photographic plates. He sorted out the many ions in various charge states produced in a discharge tube. When he conducted his atomic mass experiments with neon gas, he found that a beam of neon atoms subjected to electric and magnetic forces split into two parabolas instead of one on a photographic plate. Chemists had assumed the atomic weight of neon was 20.2, but the traces on Thomson’s photographic plate suggested atomic weights of 20.0 and 22.0, with the former parabola much stronger than the latter. He concluded that neon consisted of two stable isotopes: primarily neon-20, with a small percentage of neon-22. Eventually a third isotope, neon-21, was discovered in very small quantities. It is now known that 1,000 neon atoms will contain an average of 909 atoms of neon-20, 88 of neon-22, and 3 of neon-21. Dalton’s assumptions that all atoms of an element have an identical mass and that the atomic weight of an element is its mass were thus disproved. Today the atomic weight of an element is recognized as the weighted average of the masses of its isotopes.

Francis William Aston, an English physicist, improved Thomson’s technique when he developed the mass spectrograph in 1919. This device spread out the beam of positive ions into a “mass spectrum” of lines similar to the way light is separated into a spectrum. Aston analyzed about 50 elements over the next six years and discovered that most have isotopes.
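The modern definition of atomic weight as a weighted average is a one-line computation. Using the per-1,000-atoms abundances quoted above for neon:

```python
# Atomic weight of neon as the abundance-weighted average of its isotopes,
# with the per-1,000-atoms figures from the text.

isotopes = {20: 909, 22: 88, 21: 3}   # mass number -> atoms per 1,000

total = sum(isotopes.values())
atomic_weight = sum(mass * count for mass, count in isotopes.items()) / total
print(f"neon atomic weight ~ {atomic_weight:.2f}")   # ~20.2, as chemists measured
```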
Discovery of radioactivity

Like Thomson’s discovery of the electron, the discovery of radioactivity in uranium by the French physicist Henri Becquerel in 1896 forced scientists to radically change their ideas about atomic structure. Radioactivity demonstrated that the atom was neither indivisible nor immutable. Instead of serving merely as an inert matrix for electrons, the atom could change form and emit an enormous amount of energy. Furthermore, radioactivity itself became an important tool for revealing the interior of the atom.

The German physicist Wilhelm Conrad Röntgen had discovered X-rays in 1895, and Becquerel thought they might be related to fluorescence and phosphorescence, processes in which substances absorb and emit energy as light. In the course of his investigations, Becquerel stored some photographic plates and uranium salts in a desk drawer. Expecting to find the plates only lightly fogged, he developed them and was surprised to find sharp images of the salts. He then began experiments that showed that uranium salts emit a penetrating radiation independent of external influences. Becquerel also demonstrated that the radiation could discharge electrified bodies. In this case discharge means the removal of electric charge, and it is now understood that the radiation, by ionizing molecules of air, allows the air to conduct an electric current. Early studies of radioactivity relied on measuring ionization power or on observing the effects of radiation on photographic plates.

In 1898 the French physicists Pierre and Marie Curie discovered the strongly radioactive elements polonium and radium, which occur naturally in uranium minerals. Marie coined the term radioactivity for the spontaneous emission of ionizing, penetrating rays by certain atoms.

Experiments conducted by the British physicist Ernest Rutherford in 1899 showed that radioactive substances emit more than one kind of radiation. It was determined that part of the radiation is 100 times more penetrating than the rest and can pass through aluminum foil one-fiftieth of a millimetre thick. Rutherford named the less-penetrating emanations alpha rays and the more-powerful ones beta rays, after the first two letters of the Greek alphabet. Investigators who in 1899 found that beta rays were deflected by a magnetic field concluded that they are negatively charged particles similar to cathode rays. In 1903 Rutherford found that alpha rays were deflected slightly in the opposite direction, showing that they are massive, positively charged particles. Much later Rutherford proved that alpha rays are nuclei of helium atoms by collecting the rays in an evacuated tube and detecting the buildup of helium gas over several days.

A third kind of radiation was identified by the French chemist Paul Villard in 1900. Designated as the gamma ray, it is not deflected by magnets and is much more penetrating than alpha particles. Gamma rays were later shown to be a form of electromagnetic radiation, like light or X-rays, but with much shorter wavelengths. Because of these shorter wavelengths, gamma rays have higher frequencies and are even more penetrating than X-rays.

In 1902, while studying the radioactivity of thorium, Rutherford and the English chemist Frederick Soddy discovered that radioactivity was associated with changes inside the atom that transformed thorium into a different element. They found that thorium continually generates a chemically different substance that is intensely radioactive. The radioactivity eventually makes the new element disappear. Watching the process, Rutherford and Soddy formulated the exponential decay law (see decay constant), which states that a fixed fraction of the element will decay in each unit of time. For example, half of the thorium product decays in four days, half the remaining sample in the next four days, and so on.
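The Rutherford-Soddy law is equivalent to N(t) = N₀e^(−λt), with λ equal to ln 2 divided by the half-life. A sketch using the four-day halving quoted above:

```python
import math

# Exponential decay with the four-day half-life of the thorium product.
half_life_days = 4.0
decay_const = math.log(2) / half_life_days    # lambda, per day

N0 = 1000.0
for t in (0, 4, 8, 12):
    N = N0 * math.exp(-decay_const * t)
    print(f"day {t:2d}: {N:6.1f} atoms remaining")
# halves every four days: 1000 -> 500 -> 250 -> 125
```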
Until the 20th century, physicists had studied subjects, such as mechanics, heat, and electromagnetism, that they could understand by applying common sense or by extrapolating from everyday experiences. The discoveries of the electron and radioactivity, however, showed that classical Newtonian mechanics could not explain phenomena at atomic and subatomic levels. As the primacy of classical mechanics crumbled during the early 20th century, quantum mechanics was developed to replace it. Since then experiments and theories have led physicists into a world that is often extremely abstract and seemingly contradictory.

Models of atomic structure

J.J. Thomson’s discovery of the negatively charged electron had raised theoretical problems for physicists as early as 1897, because atoms as a whole are electrically neutral. Where was the neutralizing positive charge and what held it in place? Between 1903 and 1907 Thomson tried to solve the mystery by adapting an atomic model that had been first proposed by the Scottish scientist William Thomson (Lord Kelvin) in 1902.

According to the Thomson atomic model, often referred to as the “plum-pudding” model, the atom is a sphere of uniformly distributed positive charge about one angstrom in diameter. Electrons are embedded in a regular pattern, like raisins in a plum pudding, to neutralize the positive charge.

[Figure: William Thomson (Lord Kelvin) envisioned the atom as a sphere with a uniformly distributed positive charge and, embedded within it, enough electrons to neutralize that charge]

The advantage of the Thomson atom was that it was inherently stable: if the electrons were displaced, they would attempt to return to their original positions. In another contemporary model, the atom resembled the solar system or the planet Saturn, with rings of electrons surrounding a concentrated positive charge. The Japanese physicist Nagaoka Hantaro in particular developed the “Saturnian” system in 1904. The atom, as postulated in this model, was inherently unstable because, by radiating continuously, the electron would gradually lose energy and spiral into the nucleus. No electron could thus remain in any particular orbit indefinitely.

Rutherford’s nuclear model

Rutherford overturned Thomson’s model in 1911 with his famous gold-foil experiment, in which he demonstrated that the atom has a tiny, massive nucleus.

[Figure: because a few of the alpha particles in his beam were scattered by large angles after striking the gold foil, Rutherford concluded that the gold atom’s mass must be concentrated in a tiny, dense nucleus]

Five years earlier Rutherford had noticed that alpha particles beamed through a hole onto a photographic plate would make a sharp-edged picture, while alpha particles beamed through a sheet of mica only 20 micrometres (or about 0.002 cm) thick would make an impression with blurry edges. For some particles the blurring corresponded to a two-degree deflection. Remembering those results, Rutherford had his postdoctoral fellow, Hans Geiger, and an undergraduate student, Ernest Marsden, refine the experiment. The young physicists beamed alpha particles through gold foil and detected them as flashes of light or scintillations on a screen. The gold foil was only 0.00004 cm thick. Most of the alpha particles went straight through the foil, but some were deflected by the foil and hit a spot on a screen placed off to one side. Geiger and Marsden found that about one in 20,000 alpha particles had been deflected 45° or more. Rutherford asked why so many alpha particles passed through the gold foil while a few were deflected so greatly. “It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper, and it came back to hit you,” Rutherford said later.

On consideration, I realized that this scattering backwards must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greater part of the mass of the atom was concentrated in a minute nucleus. It was then that I had the idea of an atom with a minute massive centre carrying a charge.
Many physicists distrusted the Rutherford atomic model because it was difficult to reconcile with the chemical behaviour of atoms.

[Figure: Rutherford envisioned the atom as a miniature solar system, with electrons orbiting around a massive nucleus]

The model suggested that the charge on the nucleus was the most important characteristic of the atom, determining its structure. On the other hand, Mendeleyev’s periodic table of the elements had been organized according to the atomic masses of the elements, implying that the mass was responsible for the structure and chemical behaviour of atoms.

Moseley’s X-ray studies

Henry Gwyn Jeffreys Moseley, a young English physicist killed in World War I, confirmed that the positive charge on the nucleus revealed more about the fundamental structure of the atom than Mendeleyev’s atomic mass. Moseley studied the spectral lines emitted by heavy elements in the X-ray region of the electromagnetic spectrum. He built on the work done by several other British physicists—Charles Glover Barkla, who had studied X-rays produced by the impact of electrons on metal plates, and William Bragg and his son Lawrence, who had developed a precise method of using crystals to reflect X-rays and measure their wavelength by diffraction. Moseley applied their method systematically to measure the spectra of X-rays produced by many elements.

Moseley found that each element radiates X-rays of a different and characteristic wavelength. The wavelength and frequency vary in a regular pattern according to the charge on the nucleus. He called this charge the atomic number. In his first experiments, conducted in 1913, Moseley used what was called the K series of X-rays to study the elements up to zinc. The following year he extended this work using another series of X-rays, the L series. Moseley was conducting his research at the same time that the Danish theoretical physicist Niels Bohr was developing his quantum shell model of the atom. The two conferred and shared data as their work progressed, and Moseley framed his equation in terms of Bohr’s theory by identifying the K series of X-rays with the most-bound shell in Bohr’s theory, the N = 1 shell, and identifying the L series of X-rays with the next shell, N = 2.

Moseley presented formulas for the X-ray frequencies that were closely related to Bohr’s formulas for the spectral lines in a hydrogen atom. Moseley showed that the frequency of a line in the X-ray spectrum is proportional to the square of the charge on the nucleus. The constant of proportionality depends on whether the X-ray is in the K or L series. This is the same relationship that Bohr used in his formula applied to the Lyman and Balmer series of spectral lines. The regularity of the differences in X-ray frequencies allowed Moseley to order the elements by atomic number from aluminum to gold. He observed that, in some cases, the order by atomic weights was incorrect. For example, cobalt has a larger atomic mass than nickel, but Moseley found that it has atomic number 27 while nickel has 28. When Mendeleyev constructed the periodic table, he based his system on the atomic masses of the elements and had to put cobalt and nickel out of order to make the chemical properties fit better. In a few places where Moseley found more than one integer between elements, he predicted correctly that a new element would be discovered. Because there is just one element for each atomic number, scientists could be confident for the first time of the completeness of the periodic table; no unexpected new elements would be discovered.
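Moseley's proportionality can be sketched numerically. The usual modern statement of his law for the K-alpha line is ν = (3/4)R(Z − 1)², where R is the Rydberg frequency and the −1 reflects screening by the remaining inner electron; the screening constant and the 3/4 factor are refinements beyond the passage itself, so treat the numbers below as illustrative.

```python
# Moseley's law for K-alpha lines: frequency grows as the square of the
# nuclear charge (with one unit of screening assumed).

R = 3.29e15            # Rydberg frequency, Hz
C = 0.75 * R           # K-alpha constant from Bohr's theory

for name, Z in [("Al", 13), ("Fe", 26), ("Co", 27), ("Ni", 28), ("Zn", 30)]:
    nu = C * (Z - 1) ** 2
    print(f"{name} (Z={Z}): K-alpha ~ {nu:.2e} Hz")
# Cobalt (Z=27) falls below nickel (Z=28) even though its atomic mass is
# larger, which is the ordering Moseley used to correct the periodic table.
```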
Bohr’s shell model

In 1913 Bohr proposed his quantized shell model of the atom (see Bohr atomic model) to explain how electrons can have stable orbits around the nucleus. The motion of the electrons in the Rutherford model was unstable because, according to classical mechanics and electromagnetic theory, any charged particle moving on a curved path emits electromagnetic radiation; thus, the electrons would lose energy and spiral into the nucleus. To remedy the stability problem, Bohr modified the Rutherford model by requiring that the electrons move in orbits of fixed size and energy. The energy of an electron depends on the size of the orbit and is lower for smaller orbits. Radiation can occur only when the electron jumps from one orbit to another. The atom will be completely stable in the state with the smallest orbit, since there is no orbit of lower energy into which the electron can jump.

Bohr’s starting point was to realize that classical mechanics by itself could never explain the atom’s stability. A stable atom has a certain size so that any equation describing it must contain some fundamental constant or combination of constants with a dimension of length. The classical fundamental constants—namely, the charges and the masses of the electron and the nucleus—cannot be combined to make a length. Bohr noticed, however, that the quantum constant formulated by the German physicist Max Planck has dimensions which, when combined with the mass and charge of the electron, produce a measure of length. Numerically, the measure is close to the known size of atoms. This encouraged Bohr to use Planck’s constant in searching for a theory of the atom.

Planck had introduced his constant in 1900 in a formula explaining the light radiation emitted from heated bodies. According to classical theory, comparable amounts of light energy should be produced at all frequencies. This is not only contrary to observation but also implies the absurd result that the total energy radiated by a heated body should be infinite. Planck postulated that energy can only be emitted or absorbed in discrete amounts, which he called quanta (Latin for “how much”). The energy quantum is related to the frequency of the light by a new fundamental constant, h. When a body is heated, its radiant energy in a particular frequency range is, according to classical theory, proportional to the temperature of the body. With Planck’s hypothesis, however, the radiation can be emitted only in quantum amounts of energy. If the radiant energy is less than the quantum of energy, the amount of light in that frequency range will be reduced. Planck’s formula correctly describes radiation from heated bodies. Planck’s constant has the dimensions of action, which may be expressed as units of energy multiplied by time, units of momentum multiplied by length, or units of angular momentum. For example, Planck’s constant can be written as h = 6.6 × 10−34 joule∙seconds. In 1905 Einstein extended Planck’s hypothesis by proposing that the radiation itself can carry energy only in quanta. According to Einstein, the energy (E) of the quantum is related to the frequency (ν) of the light by Planck’s constant in the formula E = hν.
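Einstein's relation is simple enough to tabulate. A short sketch with a few representative frequencies (the examples are mine, not from the text):

```python
# E = h * nu for a few illustrative frequencies.

h = 6.626e-34       # Planck's constant, J*s
eV = 1.602e-19      # joules per electron volt

examples = {
    "red light (656 nm)":    4.57e14,   # Hz
    "violet light (410 nm)": 7.31e14,
    "hard X-ray":            1.0e18,
}
for name, nu in examples.items():
    E = h * nu
    print(f"{name}: E = {E:.2e} J ({E / eV:.2f} eV)")
```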
Using Planck’s constant, Bohr obtained an accurate formula for the energy levels of the hydrogen atom. He postulated that the angular momentum of the electron is quantized—i.e., it can have only discrete values. He assumed that otherwise electrons obey the laws of classical mechanics by traveling around the nucleus in circular orbits. Because of the quantization, the electron orbits have fixed sizes and energies. The orbits are labeled by an integer, the quantum number n. In Bohr’s model, the radius a_n of orbit n is given by the formula a_n = n²h²ε₀/(πme²), where ε₀ is the electric constant, m is the mass of the electron, and e is its charge. As Bohr had noticed, the radius of the n = 1 orbit is approximately the same size as an atom.

With his model, Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta. For example, if an electron jumps one orbit closer to the nucleus, it must emit energy equal to the difference of the energies of the two orbits. Conversely, when the electron jumps to a larger orbit, it must absorb a quantum of light equal in energy to the difference in orbits.

Bohr’s model accounts for the stability of atoms because the electron cannot lose more energy than it has in the smallest orbit, the one with n = 1. The model also explains the Balmer formula for the spectral lines of hydrogen. The light energy is the difference in energies between the two orbits in the Bohr formula. Using Einstein’s formula to deduce the frequency of the light, Bohr not only explained the form of the Balmer formula but also explained accurately the value of the constant of proportionality R.
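Bohr's two formulas can be evaluated directly. The sketch below uses the radius formula above together with the standard Bohr energy expression E_n = −me⁴/(8ε₀²h²n²), which is the form implied by the discussion; the constants are modern values.

```python
import math

# Bohr's radius and energy formulas for hydrogen.
eps0 = 8.854e-12    # electric constant, F/m
h = 6.626e-34       # Planck's constant, J*s
m_e = 9.109e-31     # electron mass, kg
e = 1.602e-19       # electron charge, C

for n in (1, 2, 3):
    a_n = eps0 * h**2 * n**2 / (math.pi * m_e * e**2)
    E_n = -m_e * e**4 / (8 * eps0**2 * h**2 * n**2)
    print(f"n={n}: radius {a_n * 1e10:.2f} angstrom, energy {E_n / e:.2f} eV")
# n=1 gives ~0.53 angstrom and -13.6 eV; differences between levels
# reproduce the Balmer lines via h*nu = E_n - E_m.
```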
The usefulness of Bohr’s theory extends beyond the hydrogen atom. Bohr himself noted that the formula also applies to the singly ionized helium atom, which, like hydrogen, has a single electron. The nucleus of the helium atom has twice the charge of the hydrogen nucleus, however. In Bohr’s formula the charge of the electron is raised to the fourth power. Two of those powers stem from the charge on the nucleus; the other two come from the charge on the electron itself. Bohr modified his formula for the hydrogen atom to fit the helium atom by doubling the charge on the nucleus. Moseley applied Bohr’s formula with an arbitrary atomic charge Z to explain the K- and L-series X-ray spectra of heavier atoms. The German physicists James Franck and Gustav Hertz confirmed the existence of quantum states in atoms in experiments reported in 1914. They made atoms absorb energy by bombarding them with electrons. The atoms would only absorb discrete amounts of energy from the electron beam. When the energy of an electron was below the threshold for producing an excited state, the atom would not absorb any energy.

Bohr’s theory had major drawbacks, however. Except for the spectra of X-rays in the K and L series, it could not explain properties of atoms having more than one electron. The binding energy of the helium atom, which has two electrons, was not understood until the development of quantum mechanics. Several features of the spectrum were inexplicable even in the hydrogen atom.

[Figure: energy levels of the hydrogen atom, according to Bohr’s model and quantum mechanics using the Schrödinger equation and the Dirac equation]

High-resolution spectroscopy shows that the individual spectral lines of hydrogen are divided into several closely spaced fine lines. In a magnetic field the lines split even farther apart.

The German physicist Arnold Sommerfeld modified Bohr’s theory by quantizing the shapes and orientations of orbits to introduce additional energy levels corresponding to the fine spectral lines. The quantization of the orientation of the angular momentum vector was confirmed in an experiment in 1922 by the German physicists Otto Stern and Walther Gerlach. Their experiment took advantage of the magnetism associated with angular momentum; an atom with angular momentum has a magnetic moment like a compass needle that is aligned along the same axis. The researchers passed a beam of silver atoms through a nonuniform magnetic field, which would deflect the atoms to one side or another according to the orientation of their magnetic moments. In their experiment Stern and Gerlach found only two deflections, not the continuous distribution of deflections that would have been seen if the magnetic moment had been oriented in any direction. Thus, it was determined that the magnetic moment and the angular momentum of an atom can have only two orientations. The discrete orientations of the orbits explain some of the magnetic field effects—namely, the so-called normal Zeeman effect, which is the splitting of a spectral line into three separate subsidiary lines. These lines correspond to quantum jumps in which the angular momentum along the magnetic field is increased by one unit, decreased by one unit, or left unchanged.

Spectra in magnetic fields displayed additional splittings that showed that the description of the electrons in atoms was still incomplete. In 1925 Samuel Abraham Goudsmit and George Eugene Uhlenbeck, two graduate students in physics at the University of Leiden in the Netherlands, added a quantum number to account for the division of some spectral lines into more subsidiary lines than can be explained with the original quantum numbers. Goudsmit and Uhlenbeck postulated that an electron has an internal spinning motion and that the corresponding angular momentum is one-half of the orbital angular momentum quantum. Independently, the Austrian-born physicist Wolfgang Pauli also suggested adding a two-valued quantum number for electrons, but for different reasons. He needed this additional quantum number to formulate his exclusion principle, which serves as the atomic basis of the periodic table and the chemical behaviour of the elements. According to the Pauli exclusion principle, one electron at most can occupy an orbit, taking into account all the quantum numbers. Pauli was led to this principle by the observation that an alkali metal atom in a magnetic field has a number of orbits in the shell equal to the number of electrons that must be added to make the next noble gas. These numbers are twice the number of orbits available if the angular momentum and its orientation are considered alone.

In spite of these modifications, by the early 1920s Bohr’s model seemed to be a dead end. It could not explain the number of fine spectral lines and many of the frequency shifts associated with the Zeeman effect. Most daunting, however, was its inability to explain the rich spectra of multielectron atoms. In fact, efforts to generalize the model to multielectron atoms had proved futile, and physicists despaired of ever understanding them.
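Pauli's counting argument is compact: n² orbit combinations per shell, doubled by the two-valued spin quantum number. A sketch:

```python
# Shell capacities from the exclusion principle: n^2 orbits per shell,
# each holding at most two electrons because of the two spin orientations.

for n in (1, 2, 3, 4):
    orbits = n**2
    capacity = 2 * orbits
    print(f"shell n={n}: {orbits} orbits, capacity {capacity} electrons")
# capacities 2, 8, 18, 32: the pattern behind the noble gases and the
# periods of the periodic table
```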
The laws of quantum mechanics

Within a few short years scientists developed a consistent theory of the atom that explained its fundamental structure and its interactions. Crucial to the development of the theory was new evidence indicating that light and matter have both wave and particle characteristics at the atomic and subatomic levels. Theoreticians had objected to the fact that Bohr had used an ad hoc hybrid of classical Newtonian dynamics for the orbits and some quantum postulates to arrive at the energy levels of atomic electrons. The new theory ignored the fact that electrons are particles and treated them as waves. By 1926 physicists had developed the laws of quantum mechanics, also called wave mechanics, to explain atomic and subatomic phenomena.

The duality between the wave and particle nature of light was highlighted by the American physicist Arthur Holly Compton in an X-ray scattering experiment conducted in 1922. Compton sent a beam of X-rays through a target material and observed that a small part of the beam was deflected off to the sides at various angles. He found that the scattered X-rays had longer wavelengths than the original beam; the change could be explained only by assuming that the X-rays scattered from the electrons in the target as if the X-rays were particles with discrete amounts of energy and momentum.

[Figure: when a beam of X-rays is aimed at a target material, some of the beam is deflected, and the scattered X-rays have a greater wavelength than the original beam; Compton concluded that the X-rays behave as discrete bundles, now called photons, that lose energy in collisions with electrons in the target and scatter at lower energy]

When X-rays are scattered, their momentum is partially transferred to the electrons. The recoil electron takes some energy from an X-ray, and as a result the X-ray frequency is shifted. Both the discrete amount of momentum and the frequency shift of the light scattering are completely at variance with classical electromagnetic theory, but they are explained by Einstein’s quantum formula.

Louis-Victor de Broglie, a French physicist, proposed in his 1923 doctoral thesis that all matter and radiations have both particle- and wavelike characteristics. Until the emergence of the quantum theory, physicists had assumed that matter was strictly particulate. In his quantum theory of light, Einstein proposed that radiation has characteristics of both waves and particles. Believing in the symmetry of nature, Broglie postulated that ordinary particles such as electrons may also have wave characteristics. Using the old-fashioned word corpuscles for particles, Broglie wrote:

For both matter and radiations, light in particular, it is necessary to introduce the corpuscle concept and the wave concept at the same time. In other words, the existence of corpuscles accompanied by waves has to be assumed in all cases.

Broglie’s conception was an inspired one, but at the time it had no empirical or theoretical foundation. The Austrian physicist Erwin Schrödinger had to supply the theory.
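Both quantitative signatures of duality mentioned in this section reduce to one-line formulas: the Compton shift Δλ = (h/mc)(1 − cos θ) and the de Broglie wavelength λ = h/p. A sketch (the 100-volt electron is my example, not the text's):

```python
import math

h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron volt

# Compton: X-rays scattered through angle theta lengthen in wavelength
theta = math.pi / 2
shift = (h / (m_e * c)) * (1 - math.cos(theta))
print(f"Compton shift at 90 degrees: {shift * 1e12:.2f} pm")   # ~2.43 pm

# de Broglie: an electron accelerated through 100 volts
E_kin = 100 * eV
p = math.sqrt(2 * m_e * E_kin)                 # non-relativistic momentum
print(f"de Broglie wavelength: {h / p * 1e9:.3f} nm")   # ~0.12 nm, atomic scale
```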
Schrödinger’s wave equation

In 1926 the Schrödinger equation, essentially a mathematical wave equation, established quantum mechanics in widely applicable form. In order to understand how a wave equation is used, it is helpful to think of an analogy with the vibrations of a bell, violin string, or drumhead. These vibrations are governed by a wave equation, since the motion can propagate as a wave from one side of the object to the other. Certain vibrations in these objects are simple modes that are easily excited and have definite frequencies. For example, the motion of the lowest vibrational mode in a drumhead is in phase all over the drumhead with a pattern that is uniform around it; the highest amplitude of the vibratory motion occurs in the middle of the drumhead. In more-complicated, higher-frequency modes, the motion on different parts of the vibrating drumhead is out of phase, with inward motion on one part at the same time that there is outward motion on another.

Schrödinger postulated that the electrons in an atom should be treated like the waves on the drumhead. The different energy levels of atoms are identified with the simple vibrational modes of the wave equation. The equation is solved to find these modes, and then the energy of an electron is obtained from the frequency of the mode and from Einstein’s quantum formula, E = hν. Schrödinger’s wave equation gives the same energies as Bohr’s original formula but with a much more-precise description of an electron in an atom. The lowest energy level of the hydrogen atom, called the ground state, is analogous to the motion in the lowest vibrational mode of the drumhead. In the atom the electron wave is uniform in all directions from the nucleus, is peaked at the centre of the atom, and has the same phase everywhere. Higher energy levels in the atom have waves that are peaked at greater distances from the nucleus. Like the vibrations in the drumhead, the waves have peaks and nodes that may form a complex shape. The different shapes of the wave pattern are related to the quantum numbers of the energy levels, including the quantum numbers for angular momentum and its orientation.

The year before Schrödinger produced his wave theory, the German physicist Werner Heisenberg published a mathematically equivalent system to describe energy levels and their transitions. In Heisenberg’s method, properties of atoms are described by arrays of numbers called matrices, which are combined with special rules of multiplication. Today physicists use both wave functions and matrices, depending on the application. Schrödinger’s picture is more useful for describing continuous electron distributions because the wave function can be more easily visualized. Matrix methods are more useful for numerical analysis calculations with computers and for systems that can be described in terms of a finite number of states, such as the spin states of the electron.

In 1929 the Norwegian physicist Egil Hylleraas applied the Schrödinger equation to the helium atom with its two electrons. He obtained only an approximate solution, but his energy calculation was quite accurate. With Hylleraas’s explanation of the two-electron atom, physicists realized that the Schrödinger equation could be a powerful mathematical tool for describing nature on the atomic level, even if exact solutions could not be obtained.

Antiparticles and the electron’s spin

The English physicist Paul Dirac introduced a new equation for the electron in 1928. Because the Schrödinger equation does not satisfy the principles of relativity, it can be used to describe only those phenomena in which the particles move much more slowly than the velocity of light. In order to satisfy the conditions of relativity, Dirac was forced to postulate that the electron would have a particular form of wave function with four independent components, some of which describe the electron’s spin.
Thus, from the very beginning, the Dirac theory incorporated the electron’s spin properties. The remaining components allowed additional states of the electron that had not yet been observed. Dirac interpreted them as antiparticles, with a charge opposite to that of electrons. Antimatter consists of elementary particles that have the same masses as the electrons, protons, and neutrons of ordinary matter but with charges opposite in sign; such particles—known collectively as antiparticles—are called positrons, antiprotons, and antineutrons. Although antineutrons, like neutrons, are electrically neutral, they have a magnetic moment opposite in sign to that of the neutron. The discovery of the positron in 1932 by the American physicist Carl David Anderson proved the existence of antiparticles and was a triumph for Dirac’s theory.

After Anderson’s discovery, subatomic particles could no longer be considered immutable. Electrons and positrons can be created out of the vacuum, given a source of energy such as a high-energy X-ray or a collision.

[Figure: electrons and positrons produced simultaneously from individual gamma rays curl in opposite directions in the magnetic field of a bubble chamber; the gamma rays themselves leave no tracks, as they have no electric charge]

They also can annihilate each other and disappear into some other form of energy. From this point, much of the history of subatomic physics has been the story of finding new kinds of particles, many of which exist for only fractions of a second after they have been created.

Advances in nuclear and subatomic physics

The 1920s witnessed further advances in nuclear physics with Rutherford’s discovery of induced radioactivity. Bombardment of light nuclei by alpha particles produced new radioactive nuclei. In 1928 the Russian-born American physicist George Gamow explained the lifetimes in alpha radioactivity using the Schrödinger equation. His explanation used a property of quantum mechanics that allows particles to “tunnel” through regions where classical physics would forbid them to be.

Structure of the nucleus

The constitution of the nucleus was poorly understood at the time because the only known particles were the electron and the proton. It had been established that nuclei are typically about twice as heavy as can be accounted for by protons alone. A consistent theory was impossible until the English physicist James Chadwick discovered the neutron in 1932. He found that alpha particles reacted with beryllium nuclei to eject neutral particles with nearly the same mass as protons. Almost all nuclear phenomena can be understood in terms of a nucleus composed of neutrons and protons. Surprisingly, the neutrons and protons in the nucleus move to a large extent in orbitals as though their wave functions were independent of one another. Each neutron or proton orbital is described by a stationary wave pattern with peaks and nodes and angular momentum quantum numbers. The theory of the nucleus based on these orbitals is called the shell nuclear model. It was introduced independently in 1948 by Maria Goeppert Mayer of the United States and Johannes Hans Daniel Jensen of West Germany, and it developed in succeeding decades into a comprehensive theory of the nucleus.
The interactions of neutrons with nuclei had been studied during the mid-1930s by the Italian-born American physicist Enrico Fermi and others. Nuclei readily capture neutrons, which, unlike protons or alpha particles, are not repelled from the nucleus by a positive charge. When a neutron is captured, the new nucleus has one higher unit of atomic mass. If a nearby isotope of that atomic mass is more stable, the new nucleus will be radioactive, convert the neutron to a proton, and assume the more-stable form.

Nuclear fission was discovered by the German chemists Otto Hahn and Fritz Strassmann in 1938 during the course of experiments initiated and explained by Austrian physicist Lise Meitner. In fission a uranium nucleus captures a neutron and gains enough energy to trigger the inherent instability of the nucleus, which splits into two lighter nuclei of roughly equal size. The fission process releases more neutrons, which can be used to produce further fissions. The first nuclear reactor, a device designed to permit controlled fission chain reactions, was constructed at the University of Chicago under Fermi’s direction, and the first self-sustaining chain reaction was achieved in this reactor in 1942. In 1945 American scientists produced the first fission bomb, also called an atomic bomb, which used uncontrolled fission reactions in either uranium or the artificial element plutonium. In 1952 American scientists used a fission explosion to ignite a fusion reaction in which isotopes of hydrogen combined thermally into heavier helium nuclei. This was the first thermonuclear bomb, also called an H-bomb, a weapon that can release hundreds or thousands of times more energy than a fission bomb.

Quantum field theory and the standard model

Dirac not only proposed the relativistic equation for the electron but also initiated the relativistic treatment of interactions between particles known as quantum field theory. The theory allows particles to be created and destroyed and requires only the presence of suitable interactions carrying sufficient energy. Quantum field theory also stipulates that the interactions can extend over a distance only if there is a particle, or field quantum, to carry the force. The electromagnetic force, which can operate over long distances, is carried by the photon, the quantum of light. Because the theory allows particles to interact with their own field quanta, mathematical difficulties arose in applying the theory.

The theoretical impasse was broken as a result of a measurement carried out in 1946 and 1947 by the American physicist Willis Eugene Lamb, Jr. Using microwave techniques developed during World War II, he showed that the hydrogen spectrum is actually about one-tenth of one percent different from Dirac’s theoretical picture. Later the German-born American physicist Polykarp Kusch found a similar anomaly in the size of the magnetic moment of the electron. Lamb’s results were announced at a famous Shelter Island Conference held in the United States in 1947; the German-born American physicist Hans Bethe and others realized that the so-called Lamb shift was probably caused by electrons and field quanta that may be created from the vacuum. The previous mathematical difficulties were overcome by Richard Feynman, Julian Schwinger, and Tomonaga Shin’ichirō, who shared the 1965 Nobel Prize for Physics, and Freeman Dyson, who showed that their various approaches were mathematically identical.
The new theory, called quantum electrodynamics, was found to explain all the measurements to very high precision. Apparently, quantum electrodynamics provides a complete theory of how electrons behave under electromagnetism.

Beginning in the 1960s, similarities were found between the weak force and electromagnetism. Sheldon Glashow, Abdus Salam, and Steven Weinberg combined the two forces in the electroweak theory, for which they shared the Nobel Prize for Physics in 1979. In addition to the photon, three field quanta were also predicted as additional force carriers—the W particle, the Z particle, and the Higgs boson. The W and Z particles were carriers of the weak force, and the Higgs boson was the carrier of the Higgs field, which leads to the W and Z particles being heavy and the photon having a mass of zero. The discoveries of the W and Z particles in 1983, with correctly predicted masses, established the validity of the electroweak theory. A particle that was likely the Higgs boson was finally detected in 2012.

In all, hundreds of subatomic particles have been discovered since the first unstable particle, the muon, was identified in cosmic rays in the 1930s. By the 1960s patterns emerged in the properties and relationships among subatomic particles that led to the quark theory. Combining the electroweak theory and the quark theory, a theoretical framework called the Standard Model was constructed; it includes all known particles and field quanta. In the Standard Model there are two broad categories of particles, the leptons and the quarks. Leptons include electrons, muons, and neutrinos, and, aside from gravity, they interact only with the electroweak force. The quarks are subject to the strong force, and they combine in various ways to make bound states. The bound quark states, called hadrons, include the neutron and the proton. Three quarks combine to form a proton, a neutron, or any of the massive hadrons known as baryons. A quark combines with an antiquark to form mesons such as the pion.

Isolated quarks have never been observed, and physicists do not expect to find one. The strength of the strong force is so great that quarks cannot be separated from each other outside hadrons. The existence of quarks has been confirmed indirectly in several ways, however. In experiments conducted with high-energy electron accelerators starting in 1967, physicists observed that some of the electrons bombarded onto proton targets were deflected at large angles. As in Rutherford’s gold-foil experiment, the large-angle deflection implies that hadrons have an internal structure containing very small charged objects. The small objects are presumed to be quarks.

To accommodate quarks and their peculiar properties, physicists developed a new quantum field theory, known as quantum chromodynamics, during the mid-1970s. This theory explains qualitatively the confinement of quarks to hadrons. Physicists believe that the theory should explain all aspects of hadrons. However, mathematical difficulties in dealing with the strong interactions in quantum chromodynamics are more severe than those of quantum electrodynamics, and rigorous calculations of hadron properties have not been possible. Nevertheless, numerical calculations using the largest computers seem to confirm the validity of the theory.
Quantum Mechanics/Time Independent Schrödinger

Consider a particle confined to a one-dimensional box with impenetrable walls. When you solve the Schrödinger equation for the wavefunctions, you get two sets of solutions: those of positive parity and those of negative parity:

\Psi_{P=1} = A \cos \left[\frac{(2n+1) \pi x}{a}\right]

and

\Psi_{P=-1} = A \sin \left(\frac{2n \pi x}{a}\right),

where n is any non-negative integer for the positive-parity states and any positive integer for the negative-parity states, and A is a normalisation constant. (The negative-parity solutions must be odd functions, hence sines rather than cosines.) Now, we can have all of these infinitely many states, and if you have ever studied Fourier analysis you may have noticed that with these states you can form any function you wish; that is, the wavefunctions are complete. So what have we learned? Well, a lot actually: we have discovered the eigenstates of the Hamiltonian, which can be used to determine the particle's time dependence.

Derivation of the Time-Independent Schrödinger Equation

We start with the general Schrödinger equation and use separation of variables. We have

H \Psi = \hat \epsilon \Psi

We separate \Psi into two functions:

\Psi ( x , t ) = T ( t ) X ( x )

So now the Schrödinger equation is

H T X = \hat \epsilon T X

We know from earlier that the "interesting" part of the energy operator \hat \epsilon is a partial derivative with respect to time, and the "interesting" part of the Hamiltonian H is a partial derivative with respect to position. As T does not depend on position, it is not affected by H. Similarly, X is not affected by \hat \epsilon. So we have:

T H X = X \hat \epsilon T

We can multiply on the left by T^{-1} X^{-1} to obtain

X^{-1} H X = T^{-1} \hat \epsilon T

Note that the left side depends only on x, while the right side depends only on t. We have two functions which are totally independent of each other but are somehow equal. This is only possible if both are equal to a constant, which we call E:

X^{-1} H X = E
T^{-1} \hat \epsilon T = E

Naturally this implies

H X = E X
\hat \epsilon T = E T

We can then expand H and \hat \epsilon and solve each equation.
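The claimed eigenstates can be verified symbolically. The sketch below assumes, as the solutions above imply, a box of width a centred on the origin with walls at x = ±a/2, and checks one positive-parity state with SymPy:

```python
import sympy as sp

# Check a positive-parity box state: it should vanish at the walls and
# satisfy H*psi = E*psi for the free Hamiltonian inside the box (V = 0).
x = sp.symbols("x", real=True)
a, hbar, m = sp.symbols("a hbar m", positive=True)
n = sp.Integer(1)                       # any non-negative integer works

psi = sp.cos((2*n + 1) * sp.pi * x / a)             # positive-parity state
H_psi = -(hbar**2 / (2*m)) * sp.diff(psi, x, 2)     # kinetic term only

E = sp.simplify(H_psi / psi)            # constant, so psi is an eigenstate
print(E)                                # 9*pi**2*hbar**2/(2*a**2*m) for n = 1
print(psi.subs(x, a/2), psi.subs(x, -a/2))   # both 0: boundary conditions hold
```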
Energy Eigenstate

1. Jan 28, 2006 #1
Trying to get my head around this problem and would very much appreciate any suggestions. Given a wavefunction \psi(x), I am asked if it is an energy eigenstate for a free particle moving in one dimension. Any suggestion on how I start a problem like this?

2. Jan 28, 2006 #2
Well, this is rather easy if you just know how to solve the Schrödinger equation in the case of a free particle (i.e. potential V(x) = 0).

3. Jan 28, 2006 #3
As was pointed out, a free particle means V(x) = 0. Furthermore, you'll be solving the one-dimensional, time-independent S.E., since you're given psi(x). Most undergrad texts work this out at one point or another. I especially like Griffiths's explanations - and it should help you a lot (it's done in position space, in 1D). In case you don't have it, to get you started: write the SE, rearrange, then define [tex]k \equiv \frac{\sqrt{2mE}}{\hbar}[/tex]. Being able to just "see" that you should define k as such, to make it easier (or possible?) to solve, isn't something I was able to do. It would have taken me ages to find that on my own.

4. Jan 30, 2006 #4 (Science Advisor, Homework Helper)
The spectral problem [tex] \hat{H}|\psi\rangle = E|\psi\rangle [/tex] in the case of a free particle has solutions of the form [tex] \psi(x) = \langle x|\psi\rangle [/tex], where [itex] \langle x| [/itex] is a tempered distribution and [itex] |\psi\rangle [/itex] is a test function. So you'll have to see whether your wavefunction can be obtained in this way: by applying a linear functional to a vector from [itex] L^{2}\left(\mathbb{R}\right) [/itex].
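A small numerical illustration of the question being asked (my own sketch, not from the thread): for a free particle, H psi = -(hbar^2/2m) psi'', so psi is an energy eigenstate exactly when H psi / psi is a constant. The code checks two candidate wavefunctions on a grid, in units where hbar = m = 1; the particular wavefunctions are arbitrary examples.

import numpy as np

hbar = m = 1.0
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def H_free(psi):
    """Apply H = -(hbar^2 / 2m) d^2/dx^2 with a central finite difference."""
    d2 = np.zeros_like(psi, dtype=complex)
    d2[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    return -(hbar**2) / (2 * m) * d2

def eigen_check(label, psi):
    ratio = H_free(psi)[1:-1] / psi[1:-1]   # constant if psi is an eigenstate
    print(label, "spread of H*psi/psi:", np.ptp(ratio.real))

eigen_check("plane wave:", np.exp(1j * 2.0 * x))  # e^{ikx}, k = 2: eigenstate
eigen_check("gaussian:  ", np.exp(-x**2) + 0j)    # a wave packet: not one
# The plane wave gives an essentially constant ratio (~k^2/2m = 2.0);
# the Gaussian's ratio varies strongly with x, so it is not an eigenstate.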
There appears to be no way to sort Google Scholar search results by any field. If somebody has figured out a way to do so, please share.

(migrated from physics.stackexchange.com, Nov 1 '11)

Unfortunately, Google also doesn't provide an API for Scholar search yet! Does this mean that I have to write a crawler+parser to sort the results? – Aamir, Nov 2 '11
Will you accept being able to filter by field? – simchona, Jan 13 '12
Your question is about sorting by number of citations. I just did a search and the results appear to be sorted by number of citations. Is your question still valid? – Fuhrmanator, May 9 '12

3 Answers

You can use Publish or Perish (PoP), which fetches Google Scholar results and lets you sort them. Some points:
1. This is only reliable if your search returns fewer than 1000 results.
2. You can chop your search up and combine the pieces via CSV files and Excel to sort a larger search (see the sketch after these answers).
3. PoP also sorts by "cites per year", which removes the bias in favour of older articles, which have had more time to accrue citations. However, this is sometimes misleading for books and articles that have been reprinted or had new editions, since all of the citations are sometimes counted for the new edition.
For smaller sets of references that aren't from the same searches, you can use a plugin for Zotero, although it doesn't work with the standalone version of Zotero yet.

If you want to find out which articles are most relevant to your query, then Google Scholar already does a pretty good job of sorting them. If the search query is rather broad (for example "Schrödinger"), then the result list will mostly be sorted by the number of citations. If your query is rather narrow, on the other hand (for example "nonlinear time-independent Schrödinger"), then Google tries to provide you with the most relevant results first (namely nonlinear time-independent Schrödinger equations) rather than putting articles high on the list which have a lot of citations but aren't exactly about what you're looking for. That said, I'm also feeling a little uncomfortable with Google trying to find out what I actually want.

Well, I'd say if it doesn't give me sort options, so I can decide myself what sorting to use regardless of the breadth of the query (i.e. the number of results returned), it's not doing a "pretty good job of sorting them". – sdaau, Feb 22

Google Scholar offers a way to filter the search results by field, though not necessarily sort them. To do so:
• Go to Google Scholar, and click Advanced Scholar Search
• Enter your search terms
• Under "Collections", there is a subcategory "Articles and patents". This category offers two radio button options: "search articles in all subject areas" and "search only articles in the following subject areas". The latter option provides subjects like "Social Sciences, Arts, and Humanities" and "Physics, Astronomy, and Planetary Science"
• Click "Search Scholar"
Once you open your results, there will be a series of checkboxes under the search bar that let you choose which, if any, subject areas you would like to limit your search to. Google Scholar will also allow you to search for legal decisions from certain courts.
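Point 2 of the first answer (combining exported CSV chunks to sort a larger search) is easy to script. The sketch below is illustrative only: the file pattern and the "Title"/"Cites" column names are assumptions about your export format, not a documented schema, so adjust them to match your actual files.

import glob
import pandas as pd

# Combine every exported chunk, drop duplicate titles, sort by citation count.
chunks = [pd.read_csv(path) for path in sorted(glob.glob("scholar_chunk_*.csv"))]
results = (pd.concat(chunks, ignore_index=True)
             .drop_duplicates(subset="Title")
             .sort_values("Cites", ascending=False))
results.to_csv("scholar_sorted.csv", index=False)
print(results[["Cites", "Title"]].head(20))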
[tt] NS 2759: Seven wonders of the quantum world (series)
Premise Checker <checker at panix.com>, Tue May 11 22:51:34 CEST 2010

* 05 May 2010 by Michael Brooks [Comments added.]

From undead cats to particles popping up out of nowhere, from watched pots not boiling--sometimes--to ghostly influences at a distance, quantum physics delights in demolishing our intuitions about how the world works. Michael Brooks tours the quantum effects that are guaranteed to boggle our minds.

1. Corpuscles and buckyballs
2. The Hamlet effect
3. Something for nothing
4. The Elitzur-Vaidman bomb tester
5. Spooky action at a distance
6. The field that isn't there
7. Superfluids and supersolids
And finally: Nobody understands

Michael Brooks was the Science party candidate for the constituency of Bosworth in the UK general election this week.

1. Corpuscles and buckyballs

IT DOES not require any knowledge of quantum physics to recognise quantum weirdness. The oldest and grandest of the quantum mysteries relates to a question that has exercised great minds at least since the time of the ancient Greek philosopher Euclid: what is light made of?

History has flip-flopped on the issue. Isaac Newton thought light was tiny particles--"corpuscles" in the argot of the day. Not all his contemporaries were impressed, and in classic experiments in the early 1800s the polymath Thomas Young showed how a beam of light diffracted, or spread out, as it passed through two narrow slits placed close together, producing an interference pattern on a screen behind just as if it were a wave.

So which is it, particle or wave? Keen to establish its reputation for iconoclasm, quantum theory provided an answer soon after it bowled onto the scene in the early 20th century. Light is both a particle and a wave--and so, for that matter, is everything else. A single moving particle such as an electron can diffract and interfere with itself as if it were a wave, and believe it or not, an object as large as a car has a secondary wave character as it trundles along the road.

That revelation came in a barnstorming doctoral thesis submitted by the pioneering quantum physicist Louis de Broglie in 1924. He showed that by describing moving particles as waves, you could explain why they had discrete, quantised energy levels rather than the continuum predicted by classical physics. De Broglie first assumed that this was just a mathematical abstraction, but wave-particle duality seems to be all too real. Young's classic wave interference experiment has been reproduced with electrons and all manner of other particles (see diagram).

We haven't yet done it with a macroscopic object such as a moving car, admittedly. Its de Broglie wavelength is something like 10^-38 metres, and making it do wave-like things such as diffract would mean creating something with slits on a similar scale, a task way beyond our engineering capabilities. The experiment has been performed, though, with a buckyball--a soccer-ball-shaped lattice of 60 carbon atoms that, at about a nanometre in diameter, is large enough to be seen under a microscope (Nature, vol 401, p 680).

All that leaves a fundamental question: how can stuff be waves and particles at the same time? Perhaps because it is neither, says Markus Arndt of the University of Vienna, Austria, who did the buckyball experiments in 1999. What we call an electron or a buckyball might in the end have no more reality than a click in a detector, or our brain's reconstruction of photons hitting our retina. "Wave and particle are then just constructs of our mind to facilitate everyday talking," he says.
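The car's 10^-38-metre wavelength quoted above is easy to reproduce from de Broglie's relation lambda = h/(mv). The sketch below is my own addition; the masses and speeds are rough illustrative assumptions.

h = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h / (m v), the de Broglie wavelength of a moving object."""
    return h / (mass_kg * speed_m_s)

# A ~1000 kg car at ~30 m/s versus a C60 buckyball (~1.2e-24 kg) at ~200 m/s.
print(f"car:       {de_broglie_wavelength(1000.0, 30.0):.1e} m")    # ~2e-38 m
print(f"buckyball: {de_broglie_wavelength(1.2e-24, 200.0):.1e} m")  # ~3e-12 m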
2. The Hamlet effect

"A WATCHED pot never boils." Armed with common sense and classical physics, you might dispute that statement. Quantum physics would slap you down. Quantum watched pots do refuse to boil--sometimes. At other times they boil faster. At yet other times, observation pitches them into an existential dilemma about whether to boil or not.

This madness is a logical consequence of the Schrödinger equation, the formula concocted by Austrian physicist Erwin Schrödinger in 1926 to describe how quantum objects evolve probabilistically over time. Imagine, for example, conducting an experiment with an initially undecayed radioactive atom in a box. According to the Schrödinger equation, at any point after you start the experiment the atom exists in a mixture, or "superposition", of decayed and undecayed states. Each state has a probability attached that is encapsulated in a mathematical description known as a wave function. Over time, as long as you don't look, the wave function evolves as the probability of the decayed state slowly increases. As soon as you do look, the atom chooses--in a manner in line with the wave function probabilities--which state it will reveal itself in, and the wave function "collapses" to a single determined state.

This is the picture that gave birth to Schrödinger's infamous cat. Suppose the radioactive decay of an atom triggers a vial of poison gas to break, and a cat is in the box with the atom and the vial. Is the cat both dead and alive as long as we don't know whether the decay has occurred? We don't know. All we know is that tests with larger and larger objects--including, recently, a resonating metal strip big enough to be seen under a microscope--seem to show that they really can be in superposition.

The weirdest thing about all this is the implication that just looking at stuff changes how it behaves. Take the decaying atom: observing it and finding it undecayed resets the system to a definitive state, and the Schrödinger-equation evolution towards "decayed" must start again from scratch. The corollary is that if you keep measuring often enough, the system will never be able to decay. This possibility is dubbed the quantum Zeno effect, after the Greek philosopher Zeno of Elea, who devised a famous paradox that "proved" that if you divided time up into ever smaller instants, you could make change or motion impossible.

And the quantum Zeno effect does happen. In 1990, researchers at the National Institute of Standards and Technology in Boulder, Colorado, showed they could hold a beryllium ion in an unstable energy configuration, rather akin to balancing a pencil on its sharpened point, provided they kept re-measuring its energy (Physical Review A, vol 41, p 2295).

The converse "anti-Zeno" effect--making a quantum pot boil faster by just measuring it--also occurs. Where a quantum object has a complex arrangement of states to move into, a decay into a lower-energy state can be accelerated by measuring the system in the right way. In 2001, this too was observed in the lab (Physical Review Letters, vol 87, p 040402).
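A toy numerical illustration of the Zeno slow-down (my sketch, not from the article): a two-level system that would smoothly oscillate into its "decayed" state is projected back by each measurement, and the survival probability after a fixed total time rises towards 1 as measurements become more frequent. The oscillation frequency is an arbitrary assumption.

import math

omega = 1.0          # assumed oscillation frequency towards the "decayed" state
T = math.pi / omega  # total time: one half-cycle, i.e. certain decay if unwatched

# With N equally spaced measurements the interval is tau = T/N; the chance of
# surviving one interval undecayed is cos^2(omega*tau/2), and each projective
# measurement restarts the evolution, so the survival probabilities multiply.
for N in (1, 2, 10, 100, 1000):
    tau = T / N
    p_survive = math.cos(omega * tau / 2.0) ** (2 * N)
    print(f"N = {N:4d}: survival probability after time T = {p_survive:.4f}")
# N = 1 gives 0.0 (the watched pot boils); larger N pushes survival towards 1.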
The third trick is the "quantum Hamlet effect", proposed last year by Vladan Pankovic of the University of Novi Sad, Serbia. A particularly intricate sequence of measurements, he found, can affect a system in such a way as to make the Schrödinger equation for its subsequent evolution intractable. As Pankovic puts it: to be decayed or not-decayed, "that is the analytically unsolvable question".

3. Something for nothing

"NOTHING will come of nothing," King Lear admonishes Cordelia in the eponymous Shakespeare play. In the quantum world, it's different: there, something comes of nothing and moves the furniture around.

Specifically, if you place two uncharged metal plates side by side in a vacuum, they will drift towards each other for no apparent reason. They won't move a lot, mind. Two plates with an area of a square metre, placed a thousandth of a millimetre apart, feel a force equivalent to just over a tenth of a gram. The Dutch physicist Hendrik Casimir first noted this minuscule movement in 1948. "The Casimir effect is a manifestation of the quantum weirdness of the microscopic world," says physicist Steve Lamoreaux of Yale University.

It has to do with the quantum quirk known as Heisenberg's uncertainty principle, which essentially says that the more we know about one property of a quantum object, the less we can know about a complementary one. We can't, for instance, deduce the exact position and momentum of a particle simultaneously: the more certain we are of where a particle is, the less certain we are of its momentum. A similar uncertainty relation exists between energy and time, so no region of space can be pinned down to contain exactly zero energy at a precisely defined moment in time - something the uncertainty principle forbids us from knowing. According to quantum field theory, empty space is actually fizzing with transient stuff: it pops up out of nothing, the vacuum doesn't like it, and it disappears again, all in the name of preventing the universe from violating the uncertainty principle. For the most part, this stuff is pairs of photons and their antiparticles that quickly annihilate in a puff of energy.

These tiny fluctuating electric fields, squeezed in the narrow gap between two metal plates but unconstrained outside them, might explain the Casimir effect. But the electric fields associated with the atoms in the metal plates also fluctuate. These variations create tiny attractions called van der Waals forces between the atoms. "You can't ascribe the Casimir force solely to the vacuum or solely to the fluctuating fields of the atoms that make up the plates," says Lamoreaux. "Either view is correct and arrives at the same physical result."

Whatever its origin, at small separations the Casimir force is strong enough to cause components in close proximity to stick together. In 1961, Russian physicists showed theoretically that combinations of materials with differing Casimir attractions can create scenarios where the overall effect is repulsion. Evidence for this strange "quantum buoyancy" was announced in January 2009 by physicists from Harvard University who had set up gold and silica plates separated by a liquid.
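The "tenth of a gram" figure can be checked against the standard ideal-plate Casimir formula, F/A = pi^2 * hbar * c / (240 d^4). The sketch below is my addition; the square-metre area and the thousandth-of-a-millimetre gap are taken from the text.

import math

hbar = 1.055e-34   # reduced Planck constant, J*s
c = 3.0e8          # speed of light, m/s
g = 9.81           # standard gravity, m/s^2

def casimir_force(area_m2, gap_m):
    """Ideal-plate Casimir force: F = pi^2 * hbar * c * A / (240 * d^4)."""
    return math.pi**2 * hbar * c * area_m2 / (240 * gap_m**4)

F = casimir_force(1.0, 1e-6)   # 1 m^2 plates, one micrometre apart
print(f"force = {F:.2e} N  (weight of ~{F / g * 1000:.2f} grams)")
# ~1.3e-3 N, i.e. the weight of just over a tenth of a gram, as quoted.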
4. The Elitzur-Vaidman bomb-tester

A BOMB triggered by a single photon of light is a scary thought. If such a thing existed in the classical world, you would never even be aware of it. Any photon entering your eye to tell you about it would already have set off the bomb, blowing you to kingdom come. With quantum physics, you stand a better chance. According to a scheme proposed by the Israeli physicists Avshalom Elitzur and Lev Vaidman in 1993, you can use quantum trickery to detect a light-triggered bomb with light--and stay safe a guaranteed 25 per cent of the time (Foundations of Physics, vol 23, p 987).

The secret is a device called an interferometer. It exploits the quantumly weird fact that, given two paths to go down, a photon will take both at once. We know this because, at the far end of the device, where the two paths cross once again, a wave-like interference pattern is produced (see "Quantum wonders: Corpuscles and buckyballs"). To visualise what is going on, think of a photon entering the interferometer and taking one path while a ghostly copy of itself goes down the other.

In Elitzur and Vaidman's thought experiment, half the time there is a photon-triggered bomb blocking one path (see diagram). Only the real photon can trigger the bomb, so if it is the ghostly copy that gets blocked by the bomb, there is no explosion--and nor is there an interference pattern at the other end. In other words, we have "seen" the bomb without triggering it.

Barely a year after Elitzur and Vaidman proposed their bomb-testing paradox, physicists at the University of Vienna, Austria, had brought it to life--not by setting off bombs, but by bouncing photons off mirrors (Physical Review Letters, vol 74, p 4763). In 2000, Shuichiro Inoue and Gunnar Bjork of the Royal Institute of Technology in Stockholm, Sweden, used a similar technique to show that you could get an image of a piece of an object without shining light on it--something that could revolutionise medical imaging. "It would be very useful for something like X-ray scanning, if there were no radiation damage to the tissue because no X-rays actually hit it," says physicist Richard Jozsa.

Jozsa is the brains behind perhaps the most eye-rubbing of such tricks: using a quantum computer to deliver the output of a program even when you don't run the program. As the team that implemented his idea in 2005 showed, quantum physics does at least retain some semblance of classical decency: to deliver a sensible answer, the computer does need to be switched on (Nature, vol 439, p 949).
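The 25 per cent figure follows from two lines of beam-splitter algebra; here is a sketch (my addition) of a Mach-Zehnder interferometer with one arm optionally blocked, using the standard 50/50 beam-splitter matrix. With no bomb, the dark port never fires; with a bomb present, it fires 25 per cent of the time, flagging the bomb without detonating it.

import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # lossless 50/50 beam splitter

def mz_probabilities(bomb_present):
    """Return (P_explode, P_bright_port, P_dark_port) for one photon."""
    amp = BS @ np.array([1.0 + 0j, 0.0])    # photon enters port 0 and splits
    p_explode = 0.0
    if bomb_present:
        p_explode = abs(amp[1]) ** 2        # photon found in the blocked arm
        amp = np.array([amp[0], 0.0])       # only the other amplitude survives
    out = BS @ amp                          # recombine at the second splitter
    return p_explode, abs(out[1]) ** 2, abs(out[0]) ** 2

print("no bomb:  ", mz_probabilities(False))  # (0.0, 1.0, 0.0): dark port silent
print("with bomb:", mz_probabilities(True))   # (0.5, 0.25, 0.25): dark port = bomb seen safely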
5. Spooky action at a distance

EINSTEIN deemed it proof that quantum theory was seriously buggy. It is entanglement: the idea that particles can be linked in such a way that changing the quantum state of one instantaneously affects the other, even if they are light years apart.

In the 1960s, physicist John Bell of CERN, the European Organization for Nuclear Research, calculated a mathematical inequality that encapsulated the maximum correlation between the states of remote particles in experiments in which three "reasonable" conditions hold: that experimenters have free will in choosing what to measure; that the properties being measured are real and pre-existing, not just conjured up at the moment of measurement; and that no influence travels faster than the speed of light, the cosmic speed limit.

As many experiments since have shown, quantum mechanics regularly violates Bell's inequality, yielding levels of correlation way above those possible if his conditions hold. That pitches us into a quandary over which condition to give up. Do we not have free will, so that something somehow predetermines what measurements we take? That is not anyone's first choice. Are the properties of quantum particles not real--implying that nothing is real at all, but exists merely as a result of our perception? That's a more popular position, but it hardly leaves us any the wiser. Or is there really an influence that travels faster than light? Cementing the Swiss reputation for precision timing, in 2008 physicist Nicolas Gisin and his colleagues at the University of Geneva showed that the speed of any transfer of quantum states between entangled photons held in two villages 18 kilometres apart was somewhere above 10 million times the speed of light. Whatever the true answer is, it will be weird. Welcome to quantum reality.

6. The field that isn't there

HERE'S a nice piece of quantum nonsense. Take a doughnut-shaped magnet and shield it so that no magnetic field can leak into the hole, then fire an electron through the hole. There is no field in the hole, so the electron will behave as if there is no field, right? Wrong. The wave associated with the electron's movement suffers a jolt as if there were something there.

Werner Ehrenberg and Raymond Siday were the first to note that this behaviour lurks in the Schrödinger equation (see "Quantum wonders: The Hamlet effect"). That was in 1949, but their result remained largely unnoticed until Yakir Aharonov and David Bohm rediscovered the effect a decade later; for some reason their names stuck. So what is going on?

The Aharonov-Bohm effect is proof that there is more to electric and magnetic fields than is generally supposed. You can't calculate the size of the effect on a particle by considering just the properties of the electric and magnetic fields where the particle is; you also have to take into account where it isn't.

Casting about for an explanation, physicists decided to take a look at the vector potential, a quantity from which the fields can be derived. For a long time, vector potentials were considered just handy mathematical tools--a shorthand for electrical and magnetic properties that didn't have any real-world significance. As it turns out, they describe something that is very real indeed. The Aharonov-Bohm effect showed that the vector potential makes an electromagnetic field more than the sum of its parts. Even when a field isn't there, the vector potential still exerts an influence. That influence was seen unambiguously for the first time in 1986 when Akira Tonomura and colleagues in Hitachi's laboratories in Tokyo, Japan, measured a ghostly electron jolt (Physical Review Letters, vol 48, p 1443). Although it is far from an everyday phenomenon, the Aharonov-Bohm effect might prove to have uses in the real world--in magnetic sensors, for example, or field-sensitive capacitors and data storage buffers for computers that crunch light.

7. Superfluids and supersolids

FORGET radioactive spider bites, exposure to gamma rays, or any other accident favoured in Marvel comics: in the real world, it's quantum theory that gives you superpowers. Take helium, for example. At room temperature, it is normal fun: you can fill floaty balloons with it, or inhale it and talk in a squeaky voice. At temperatures below around 2 kelvin, though, it is a liquid and its atoms become ruled by their quantum properties. There, it becomes super-fun: a superfluid.

Superfluid helium climbs up walls and flows uphill in defiance of gravity. It squeezes itself through impossibly small holes. It flips the bird at friction: put superfluid helium in a bowl, set the bowl spinning, and the helium sits unmoved as the bowl revolves beneath it. Set the liquid itself moving, though, and it will continue gyrating forever.

That's fun, but not particularly useful. The opposite might be said of superconductors. These solids conduct electricity with no resistance, making them valuable for transporting electrical energy, for creating enormously powerful magnetic fields--to steer protons around CERN's Large Hadron Collider, for instance--and for levitating superfast trains. We don't yet know how all superconductors work, but it seems the uncertainty principle plays a part (see "Quantum wonders: Something for nothing"). At very low temperatures, the momentum of individual atoms or electrons in these materials is tiny and very precisely known, so the position of each atom is highly uncertain. In fact, they begin to overlap with each other to the point where you can't describe them individually. They start acting as one superatom or superelectron that moves without friction or resistance.

All this is nothing in the weirdness stakes, however, compared with a supersolid. The only known example is solid helium cooled to within a degree of absolute zero and at around 25 times normal atmospheric pressure. Under these conditions, the bonds between helium atoms are weak, and some break off to leave a network of "vacancies" that behave almost exactly like real atoms. Under the right conditions, these vacancies form their own fluid-like Bose-Einstein condensate.
This will, under certain circumstances, pass right through the normal helium lattice--meaning the solid flows, ghost-like, through itself. So extraordinary is this superpower that Moses Chan and Eun-Seong Kim of Pennsylvania State University in University Park checked and re-checked their data on solid helium for four years before eventually publishing in 2004 (Nature, vol 427, p 225). "I had little confidence we would see the effect," says Chan. Nevertheless, researchers have seen hints that any crystalline material might be persuaded to perform such a feat at temperatures just a fraction above absolute zero. Not even Superman can do that.

8. Nobody understands

It is tempting, faced with the full-frontal assault of quantum weirdness, to trot out the notorious quote from Nobel prize-winning physicist Richard Feynman: "Nobody understands quantum mechanics." It does have a ring of truth to it, though. The explanations attempted here use the most widely accepted framework for thinking about quantum weirdness, called the Copenhagen interpretation after the city in which Niels Bohr and Werner Heisenberg thrashed out its ground rules in the early 20th century.

With its uncertainty principles and measurement paradoxes, the Copenhagen interpretation amounts to an admission that, as classical beasts, we are ill-equipped to see underlying quantum reality. Any attempt we make to engage with it reduces it to a shallow classical projection of its full quantum richness.

Lev Vaidman of Tel Aviv University, Israel, like many other physicists, touts an alternative explanation. "I don't feel that I don't understand quantum mechanics," he says. But there is a high price to be paid for that understanding--admitting the existence of parallel universes. In this picture, wave functions do not "collapse" to classical certainty every time you measure them; reality merely splits into as many parallel worlds as there are measurement possibilities. One of these carries you and the reality you live in away with it. "If you don't admit many-worlds, there is no way to have a coherent picture," says Vaidman. Or, in the words of Feynman again, whether it is the Copenhagen interpretation or many-worlds you accept, "the 'paradox' is only a conflict between reality and your feeling of what reality ought to be".

Take The Next Step

The Copenhagen interpretation was an attempt by a generation of stunned scientists to bridge the difference between our classical sense of the universe and its quantum reality. It distorts the reality of quantum behavior and results in confusion, apparent paradoxes and general misunderstanding. "If you don't admit many-worlds, there is no way to have a coherent picture," says Vaidman. The "coherent picture" that Vaidman refers to is reconciling Einstein's sense of causality with entanglement. Although the Many-Worlds interpretation (MWI) is an improvement over the Copenhagen interpretation, it still creates confusion and misunderstanding by failing to accept the truth of the non-locality of wave-functions. It generates fantasies of alternate co-existing realities where every choice is explored: "What if Hitler won?" scenarios. In its detail, the MWI doesn't actually support this idea, but as a bad explanation, it spawns them. Take the next step and move past Einstein's causality. Embrace wave-function non-locality and what it means. Space and time are not fundamental properties of the universe; they are emergent.
Take The Next Step
Mon May 10 09:23:45 BST 2010 by Liza

I've read your previous comment on the (according to you) false particle-wave duality, and you mentioned locality as well. Still, most books on physics for non-physicists mention the duality as an established theory. Do you have any real reason to assume the duality is false, and non-locality is normal, except for your personal opinion? What you say may make sense or could just as well be nothing but wild speculation, but since I'm not a physicist, I don't have the knowledge to make up my mind.

Take The Next Step
Mon May 10 17:08:56 BST 2010 by David Allen

@Liza "...most books on physics for non-physicists still mention the duality as an established theory."

Yes, most books are based on the dominant, and very entrenched, interpretation of quantum mechanics (QM), the Copenhagen interpretation. Duality is an explanation, not a theory; QM is the theory. Duality attempts to explain why under some conditions we see "classical" particle-like behavior and under others we see QM wave-like behavior.

@Liza "Do you have any real reason to assume the duality is false, and non-locality is normal, except for your personal opinion?"

Yes, I have reasons to reject duality as an explanation. The word "duality" implies equivalence, which isn't true. Particle-like models of QM systems can't describe everything that is seen in experiments; the wave-function models can. In other words, the wave-function models are complete: they can describe both the wave-like behavior and the particle-like behavior. Particle interpretations are incomplete, and unnecessary. Wheeler's delayed-choice experiment can't be explained by duality and wave-function collapse; it can only be explained by the wave-function models. Both duality and wave-function collapse are bad general explanations. In the right circumstances, however, the ideas behind them can be used to simplify the math, making real-world problems more tractable.

As for non-locality: the issues surrounding non-locality are harder to tease apart, so my conclusions are certainly just my opinion. This will give you some idea of the different approaches: The Copenhagen interpretation doesn't reject non-locality; in fact it needs it to support wave-function collapse. The Many-Worlds interpretation (MWI) does reject non-locality, but other than that it is basically in line with my wave-only perspective. MWI's rejection of non-locality appears to be based on a desire to avoid an explanation that contradicts special relativity. I claim, however, that special relativity only applies to energy in space, and not to quantum information; the non-local behavior of entanglement does not contradict it. I also claim that non-local entanglement may not necessarily allow simultaneity, or specific ordering, to be established between frames of reference. This would also avoid a contradiction with special relativity. However, I think that there may be a way to falsify this second claim.

Wild speculation: the wild speculation I engage in is that the Universe has no true spatial dimensions; time and space are emergent from quantum decoherence. The flow of time and expansion of space are tied to the rate of decoherence, but the rate of decoherence slows down as space expands due to fewer interactions. The relative rates of growth in space vs. flow of time make the universe appear to grow at an exponential rate to observers within the universe.
When there is no more matter and all energy stretches to the ultimate quantum levels, the dominant quantum interaction becomes entanglement. The universe begins to collapse, and as it does the rate of entanglement increases. To observers inside the universe, this collapse would appear to be inversely exponential. The distortions in observer time mean that at the point of maximum expansion, the universe appears to have just been created, exploding almost instantly to its current size. At the point of maximum density, the universe appears to have gone on almost forever in this high-density state of slowing time.

Take The Next Step
Tue May 11 15:53:59 BST 2010 by Liza

Hey thanks! I had to read your comment thrice in order to understand what you are explaining (no fault of yours), and it's surely interesting. True, if the wave-function models can explain all that's observed, and the particle-based models can't, there's no real reason to hang on to the particle idea except that it's easier to visualise and understand for most people. I guess non-locality gets rejected because it seems beyond what we can comprehend, magical even. The whole decoherence/dimensions speculation is fascinating, but have you ever tried to make some calculations to see if it holds up? PS: it's pretty nice to read comments on this topic from people who make sense, rather than the usual Zephir-style nonsense.

Quantum Wonders: Nobody Understands
Sun May 09 15:28:00 BST 2010 by andwor

In order to understand the quantum world it is important to take the next quantum leap. Specifically, it is important to look at the quantum world at a very much smaller scale. Please see a recently published article (currently online) on exactly this topic, entitled "The formulation of harmonic quintessence and a fundamental energy equivalence equation", Physics Essays 23: 311-319.

Quantum Wonders: Nobody Understands
Mon May 10 00:00:32 BST 2010 by Julian Mann

I do not agree with Vaidman that we are forced into accepting parallel universes. See my comments on the quantum Hamlet effect and the existence of Classical Time, Anti-time (Quantum World) and Nul time. These concepts suffice to explain all the anomalies in quantum mechanics, reconcile it to relativity, etc. I have noticed that when scientists do not understand something in physics, they invoke concepts for which there is no experimental evidence.

Quantum Wonders: Nobody Understands
Mon May 10 15:20:16 BST 2010 by andwor

Thank you for your insightful comments. I am also exploring the concept of what you term "anti-time". Specifically, I was considering an adaptation of the Wheeler-Feynman theory, where time is symmetric. This effectively means that electromagnetic processes go backwards in time as well as forwards. But in the presence of the "perfect reflector" in the past, i.e. the Big Bang, the half that goes backwards in time gets reflected forwards in time to arrive when it left, and continues on its forward journey. This means that the whole signal effectively goes forward in time. This also requires that the future is the "perfect absorber", and since the recent discovery of the accelerating Universe we do have our "perfect" absorber. Of course entanglement is the archetypal example of where this might be happening. Equally well, the electromagnetic signal does not need to go all the way back to the Big Bang; it just needs to go back to the point in time when the electromagnetic effect/signal was created, exactly as with entangled particles/photons.
Any thoughts?
Saturday, December 08, 2012

TED Tries to Clean Up Its Act

I claim that the top three criteria for good science reporting are: Accuracy, Accuracy, and Accuracy. Everything else falls into fourth place or lower, including the presentation style. There have been a number of TED (or TEDx) talks on science that fail the top three criteria [TED: Alexander Tsiaras, "It was hard not to attribute divinity to it"] [The Trouble with TED].

Apparently the high command at TED has woken up to the fact that they are being bamboozled by pseudoscientists. Phil Plait of Bad Astronomy alerts us to a letter that they recently sent out to all TEDx organizers [TEDx Talks: Some Ideas Are Not Worth Spreading]. (I love his title!) Here's a copy of the letter: A letter to the TEDx community on TEDx and bad science. And here are the opening paragraphs—you should read the entire letter because it contains a lot of information about how to recognize bad science.

Hello TEDx Community,

In light of a few suspect talks that have come out of the TEDx movement — some of which we at TED have taken action to remove, some being examined now — and this recent thread on Reddit, we feel it is important to reach out to all TEDx organizers on the topic of bad science and pseudoscience. Please know this above all: We take this seriously. Presenting bad science on the TEDx stage is grounds for revoking your license.

Apparently TED will take down videos that spread pseudoscience. That explains why I was having so much trouble finding examples.

1. What an extremely heartening response to the criticism!!

2. The Alexander Tsiaras presentation that you wrote about didn't exactly fail your accuracy test, and it wasn't pseudoscience either. As I recall, your real objection to it was that he DARED to, as a personal opinion, confess that it was 'hard not to see divinity' in the coordination of the physiology and development of human embryos. Are you saying that people working in scientific fields should be discouraged from even speculating, as an opinion, that something supernatural seems evident in the work they are doing? Very relieved that you are not the 'thought police' in charge of TED talks.

1. Nobody's telling people what to think (though in various books of myths, you will be held accountable even for your thoughts, as some of them are sinful). Anyway, they shouldn't take a stage and masquerade their incredulity as elucidated through scientific work. There are other forums for that; they can do that in church.

2. 'They' should be allowed to express themselves in such ways, and with such words, as they feel best convey how they felt about or experienced something. He wasn't pushing his views on anyone.

3. andy, then why are you constantly arguing against Larry, me, and others here who express ourselves in such ways, and with such words, as we feel best convey how we feel about or experience something? We're not pushing our views on anyone.

4. Because I enjoy arguing over ideas that I think are important. I think it is perfectly fine that you express your ideas in such ways, and with such words, as you feel best convey how you feel about or experience something, and I would never want that freedom to be taken from you. I have said a few times here that I am grateful that this site allows dissenting views to be posted without censorship. That doesn't mean that I find very much to admire in Larry Moran's ideas, or yours, and I have made that clear.

5. Furthermore, rumracket, TED stands for Technology, Entertainment, and Design.
It doesn't even have science in its name. Tsiaras is a Renaissance man: painter, photographer, graphic artist, mathematician, etc., who has done commissioned work for NASA on the very important subject of virtual surgery on astronauts. It is doubtful that very many people in the world even CAN do what he does; therefore TED should be (and probably is) proud that he chose them over the 'other forums' you mentioned. No one at TED should be hanging their heads in shame because some uptight atheists may have felt uncomfortable with his words. It was an Entertaining presentation that incorporated Technology and Design. In other words, perfect for TED.

6. andy, you switch between your two faces about as fast as any other creobot I've ever encountered.

7. what on earth are you going on about now, twt?

8. Alexander Tsiaras: Not make "mistakes"? They're perfect? Apparently he has never seen or heard of a child born dead, blind, deaf, or with two heads, cancer or other illnesses, retardation, missing or malformed limbs, etc., etc., etc., and apparently he has never seen or heard of miscarriages or other problems regarding 'fetal development'. "The magic of the mechanisms inside each genetic structure saying exactly where that nerve cell should go — the complexity of these mathematical models is beyond human comprehension." Exactly where that nerve cell should go? See my comment above, and no, there's no magic involved and the complexity of "these mathematical models" is not beyond human comprehension. The quotes are from here:

9. @andyboerger There seems to be an ongoing confusion about the right to freely express one's opinions versus the right not to be forced to provide a venue to every crackpot, woo meister and fraudster on the planet. For example, andyboerger has the right to freely express his opinions, no matter how incoherent, inconsistent, repugnant, smug, condescending, ill thought out, immature, etc., others find them. On the other hand, Larry Moran has graciously provided andyboerger a forum from which to spout his nonsense, but this is a privilege, not a right. So it goes with TED talks.

10. steve, despite the ad hominem (so typical of you), I basically agree with you here. TED is very wise to set the guidelines that they have. It does no one any good to continually provide a forum for snake oil salesmen; least of all to their own reputation. They are a private organization, though, and don't need to jump through hoops to please people who find words like 'divinity' radioactive. I can assure you that they weren't thinking of the Tsiaras talk when they decided to set stricter guidelines. I can also assure you that when NASA commissioned him to help them figure out ways to do remote surgery on astronauts should the need arise, they didn't hold his metaphysical views against him. It would have been stupid of them had they done so.

11. twt, it is fine to argue with Tsiaras, and disagree with him, as you are doing here. I am sure that he would be able to reply to you ably regarding your facile assumption that he has never "seen or heard of a child born dead, blind, deaf, or with two heads, etc." Of COURSE he has. There is no reason to suspect that he is indifferent to or ignorant of such tragedies. But I won't argue for him in his place. Furthermore, the talk he gave was hardly to prove to anyone else that humans came from god, but to show the technology that he himself developed to depict the growth of an embryo.
He interjected his own personal belief in the talk, but the talk wasn't about that. As for 'not beyond human comprehension', I think you and he probably have a disagreement over the exact meaning of his words. We may be able to 'comprehend' something in terms of its mechanics without being able to really understand how it came to be. An art historian can comprehend the works of El Greco or Klimt, but they wouldn't be able to reproduce them. They would still marvel at them. They will continue to marvel at them until such time as the artistic process is so completely understood that it no longer has the power to inspire. Neither the universe, the birth process, nor art are even close to being understood to that level. As I have argued here before, if you can't make it, duplicate it, reproduce it, or fully comprehend it, and if you need to study it to learn more about it, and furthermore the more you DO study it, the more it reveals, you are not crazy to suppose that it may have come from an intelligence beyond your own. That is what Tsiaras is saying here. Here is an example, from this very site: steve wrote: Science is the only method we have of increasing our understanding of the universe and the universe is manifestly uninterested in our well-being. But to the extent we can increase our understanding of the universe we increase our ability to control and improve our own lives. (end quote) Do you agree with his overall point? Do you also realize that the part about the universe being 'manifestly uninterested in our well-being' is an interjection; a simple value judgment from personal opinion? If steve were asked to give a presentation on science, would you object to the inclusion of that line? If not, then I don't see how you can object to the inclusion of the divinity line in Tsiaras' talk.

12. It's a judgement based on evidence, overwhelming evidence that all there is to the universe is material and that it can be modelled by hypotheses that so far have no need for a non-material component. There is not a single shred of contrary evidence that would indicate that the universe has a non-material aspect, and my position is that the concept of non-material is incoherent and a non-starter to begin with. Now if I had presented patently ridiculous ideas like camouflage or human intelligence being proof for a teleological or designed aspect to the universe, that would be a value judgement. No one would ask me to give a presentation on science, nor would I do so, as I am completely unqualified in this area. But I'm honest enough to admit that and not make up stories about how reality works based on my fear of the dark and death.

13. In quantum mechanics, a particle is a probability wavefunction (Schrödinger equation) that collapses to a particle, i.e. matter, when it is observed. Since the Universe is made of matter particles, there must be an observer that collapses the particles' wavefunctions. Unless one believes that the Moon isn't there when no one is looking at it!

14. The Pépéronic interpretation of quantum mechanics.

15. Ah, Pépé, the little creobot search engine that could. Sounds like a story for children.

16. As always with atheists, no solid argumentation, only ad hominem attacks. That's expected from immature folks who realized they are on the losing end, as always.

17. steve, your reply to me, above Pépé's comment, expresses True Believer adherence to a concept from start to finish. No point in arguing with it.

18. Pépé, no one is trying to argue with you. They're just saying you're an idiot.
No reason to cry ad hominem when your post is too stupid to argue against.

3. Magna opera Domini exquisita in omnes voluntates ejus (The works of the Lord are great, sought out of all them that have pleasure therein). Inscription on the doors of the Cavendish Laboratory, put there by James Clerk Maxwell, who was giving credit where credit is due.

2. I don't think Dawkins will be remembered for his science.

3. "Alchemy is pretty awesome, ain't it?" - Isaac Newton

4. Religion is the link that binds man to God. (Max Planck) And the great mathematician Srinivasa Ramanujan credited his mathematical insights to the Hindu goddess Sri Namagiri Lakshmi.

5. Who cares what scientists believe in private, as long as it doesn't interfere with their work? As far as I know, there are no references to the Bible or the Summa theologiae in Maxwell's articles, or thanks for God's moral or conceptual support in the Acknowledgments section of any of his books.

6. "Who cares what scientists believe in private..." Maxwell cared, since he had the Latin inscription of Psalm 111, verse 2 carved on the doors of the first Cavendish Laboratory. And that, far from interfering with his work, inspired him.

7. Pépé, Sunday, December 09, 2012 12:00:00 PM: "I don't think Dawkins will be remembered for his science." Is Maxwell remembered for his faith or his science? Would his equations be any different had he been Muslim or Buddhist or atheist?

4. Yeah, Pépé, it's the same old Cavendish Laboratory where Crick and Watson carried out their breakthrough research on DNA. Francis Crick: Christianity may be OK between consenting adults in private but should not be taught to young children. James Watson: The luckiest thing that ever happened to me was that my father didn't believe in God. Scientists just do their thing, no matter what they believe in in their spare time, or what is carved over the laboratory door.

5. Science without religion is lame, religion without science is blind. (Albert Einstein) I wonder why probably the greatest scientist of the twentieth century said that? Maybe he saw the lame science done by all those atheist scientists!

1. "I'm great at point-getting" - Pépé. Google "argument from authority". And also google what Einstein thought about Christianity, or any personal-god religions for that matter.

2. "The word god is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honorable, but still primitive legends which are nevertheless pretty childish. No interpretation no matter how subtle can change this." - Albert Einstein. He seems to be a man of many words and thoughts :)

3. Looks like Pépé shot himself in the foot (again).

4. (Albert Einstein) I'm satisfied that Einstein believed in God and recognized God's handiwork. I think that God doesn't care much about the various religions because we are all His children and He loves each and every one of us.

5. Pépé, when Einstein cheated on his wife and abandoned an out-of-wedlock daughter, was he also recognizing your god's handiwork? Did your god love the victims in this little pageant as well? Reminds me of a story about that world-class misogynist douche bag Mother Teresa: One day I met a lady who was dying of cancer in a most terrible condition. And I told her, I say, "You know, this terrible pain is only the kiss of Jesus — a sign that you have come so close to Jesus on the cross that he can kiss you." And she joined her hands together and said, "Mother Teresa, please tell Jesus to stop kissing me".
Really, I couldn't much give a shit about whether a famous scientist was religious. Isaac Newton most definitely believed in a personal god, and many would say he was an even greater scientist than Einstein was. My purpose in this discussion is only to dispel a common religious LIE: as if what famous persons believed on any matter has any bearing on what is true about the world, or what we are evidentially justified in believing.

6. Pépé, this quote-mining is pathetic. Einstein did not believe in a personal god; his "religion" (like that of Heisenberg) was a kind of vague pantheism -- more to do with philosophy than theology. Most of the greatest physicists of the 20th century were atheists: Bohr, Dirac, Schrödinger (even if he flirted with eastern mysticism), Feynman, etc., not to mention younger generations. Fermi was a Catholic at a young age but turned atheist as he grew up; so did Bethe, for example. Of course, there were a few religious ones as well (Hahn, Pauli), but if you think it mattered professionally, prove it.

1. Piotr, I think that you and I are pretty much on the same wavelength about this. And I am guessing you agree with me that it doesn't really matter what Tsiaras believes, in that it hasn't stopped him from developing the marvelous technologies he has that so many have found useful and/or awe-inspiring, right? He is doing the science/design, and if he has something worth showing and talking about, then he has a right to interject an idea or two that others may find questionable while talking about it.

2. Yep, I suppose he has a right to express a personal opinion at his peril. Personally, I prefer science talks by people who know their subject thoroughly.
Nano Express (Open Access)

Trion X+ in vertically coupled type II quantum dots in a threading magnetic field

Sindi Horta-Piñeres (1,2), Gene Elizabeth Escorcia-Salas (1), Ilia D. Mikhailov (3) and José Sierra-Ortega (1,*)

Author affiliations:
1 Group of Investigation in Condensed Matter Theory, Universidad del Magdalena, Santa Marta, AA 731, Colombia
2 Universidad de Sucre, Sincelejo, AA 406, Colombia
3 Universidad Industrial de Santander, Bucaramanga, AA 678, Colombia

Nanoscale Research Letters 2012, 7:532. doi:10.1186/1556-276X-7-532. Published: 26 September 2012

We analyze the energy spectrum of a positively charged exciton confined in a semiconductor heterostructure formed by two vertically coupled, axially symmetrical type II quantum dots located close to each other. The electron in the structure is mainly located inside the dots, while the holes generally move in the exterior region close to the symmetry axis. The solutions of the Schrödinger equation are obtained by a variational separation of variables in the adiabatic limit. Numerical results are shown for the lowest-lying bonding and anti-bonding trion states corresponding to different quantum dot morphologies, dimensions, separations between the dots, thicknesses of the wetting layers, and magnetic field strengths.

Keywords: Quantum dots; Adiabatic approximation; Trion. PACS: 78.67.-n; 71.35.Pq; 73.21.La
What Is Eta In Physics?

In relativity and quantum field theory, η (with two subscripts) represents the metric tensor of Minkowski space (flat spacetime). In statistics, η² ("eta squared") is an effect-size measure: the proportion of variance in a dependent variable that is explained by a factor.

What does the eta symbol mean?
Eta (uppercase Η, lowercase η) is a letter of the Greek alphabet. In very early Greek writing it stood for the consonant sound "h", but in Classical Greek it stood for a long vowel "e". The same letter is also used to represent conformal time in cosmology, efficiency in telecommunications, and elasticity in economics.

What does P stand for in physics?
p = pressure. p = momentum. π ≈ 3.14. Pa = pascal (a unit of pressure).

What does φ mean in physics?
The lowercase letter φ (or often its variant, ϕ) is used to represent, among other things: magnetic flux in physics; wave functions in quantum mechanics, as in the Schrödinger equation and bra-ket notation; and the golden ratio.

What is ETA used for?
In the logistics industry, ETA (estimated time of arrival) indicates when a vehicle, cargo ship, or other mode of transportation will arrive at its final destination. Arrival estimates are used to give customers an approximation of when the vehicle carrying their goods will arrive at their location.

What is the difference between epsilon and eta?
An eta (η), which is pronounced like the "a" in the English word "day" (even though we call it a long e), is distinguished from an epsilon (ε) by transliterating the Greek η as ē and the ε simply as e.

What is Ø in physics?
Ø is used for the empty set, so ΔQ, not ØQ, should be used for a change in charge. Δ denotes a finite amount of something: Δt is some amount of time, Δx some distance in the x direction, ΔQ some amount of charge.

What does U mean in physics?
In classical mechanics, U is often used to represent potential energy. In particular, it is used as a symbol for gravitational potential energy and elastic potential energy. In electrodynamics, U is used to represent electrical potential energy.
Superposition always exists, 2.2.2 | Decoherence and the Collapse, 2.2 | Quantum decoherence 7.2.2

superposition ~ indistinguishability

superposition state ~ logically indistinguishable states (forming one SINGLE quantum state)

logically indistinguishable ~ indistinguishable by definition ~ indistinguishable due to the "the experimental setup is without a detector" part of the definition

By Leibniz's Law (the identity of indiscernibles), logically indistinguishable cases are actually the same one SINGLE case, represented by one SINGLE quantum state.

Classically, there are no such logically indistinguishable cases because classically, all particles are distinguishable. So the probability distribution in the newly invented non-classical state should be completely different from any probability distribution provided by classical physics. Such cases of a new kind are called quantum states. A quantum state's probability distribution can be calculated from its wave function.

"Why that single quantum state is represented by a superposition of eigenstates and why its wave function is governed by the Schrödinger equation" is ANOTHER set of questions, whose correct answers may or may not be found in the Wikipedia article "Theoretical and experimental justification for the Schrödinger equation".

Superpositions always exist. Logically indistinguishable cases are always there. You just trade some logically indistinguishable cases for some other logically indistinguishable cases. The "superpositions" are superpositions in definition, in language, in logic, in calculation, and in mathematics, but not in physical reality, not in physical spacetime.

— Me@2021-01-24 09:29:13 PM

2021.01.24 Sunday (c) All rights reserved by ACHK
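[An editorial sketch appended to the post above, not by its author: the difference between a superposition of indistinguishable cases and a classical mixture of distinguishable ones shows up directly in the probability distribution. In this toy two-slit calculation, the Gaussian slit amplitudes and the wavenumber are arbitrary assumptions; the point is the interference cross-term.]

import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
k, d = 3.0, 4.0   # assumed wavenumber and slit separation

def slit_amplitude(center):
    """Toy amplitude from one slit: Gaussian envelope times a propagation phase."""
    r = np.abs(x - center)
    return np.exp(-r**2 / 8.0) * np.exp(1j * k * r)

psi1, psi2 = slit_amplitude(-d / 2), slit_amplitude(+d / 2)

quantum = np.abs(psi1 + psi2) ** 2                  # indistinguishable paths
classical = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # distinguishable paths
fringes = quantum - classical                        # the interference cross-term
print("largest fringe amplitude:", np.max(np.abs(fringes)))  # clearly nonzero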
Physicists Prove That the Imaginary Part of Quantum Mechanics Really Exists!

An international research team has proven that the imaginary part of quantum mechanics can be observed in action in the real world.

For almost a century, physicists have been intrigued by the fundamental question: why are complex numbers so important in quantum mechanics, that is, numbers containing a component with the imaginary number i? Usually, it was assumed that they are only a mathematical trick to facilitate the description of phenomena, and that only results expressed in real numbers have a physical meaning. However, a Polish-Chinese-Canadian team of researchers has proved that the imaginary part of quantum mechanics can be observed in action in the real world.

We need to significantly reconstruct our naive ideas about the ability of numbers to describe the physical world. Until now, it seemed that only real numbers were related to measurable physical quantities. However, research conducted by the team of Dr. Alexander Streltsov from the Centre for Quantum Optical Technologies (QOT) at the University of Warsaw, with the participation of scientists from the University of Science and Technology of China (USTC) in Hefei and the University of Calgary, found quantum states of entangled photons that cannot be distinguished without resorting to complex numbers. Moreover, the researchers also conducted an experiment confirming the importance of complex numbers for quantum mechanics. Articles describing the theory and measurements have just appeared in the journals Physical Review Letters and Physical Review A.

Photons can be so entangled that within quantum mechanics their states cannot be described without using complex numbers. Credit: QOT/jch

"In physics, complex numbers were considered to be purely mathematical in nature. It is true that although they play a basic role in quantum mechanics equations, they were treated simply as a tool, something to facilitate calculations for physicists. Now, we have theoretically and experimentally proved that there are quantum states that can only be distinguished when the calculations are performed with the indispensable participation of complex numbers," explains Dr. Streltsov.

There is nothing in the physical world that can be directly related to the number i. If there are 2 or 3 apples on a table, this is natural. When we take one apple away, we can speak of a physical deficiency and describe it with the negative integer -1. We can cut the apple into two or three sections, obtaining the physical equivalents of the rational numbers 1/2 or 1/3. If the table is a perfect square, its diagonal will be the (irrational) square root of 2 multiplied by the length of the side. At the same time, with the best will in the world, it is still impossible to put i apples on the table.

The photon source used to produce quantum states requiring description by complex numbers. Credit: USTC

The surprising career of complex numbers in physics is related to the fact that they can be used to describe all sorts of oscillations much more conveniently than with the use of popular trigonometric functions. Calculations are therefore carried out using complex numbers, and then at the end only the real numbers in them are taken into account.
Compared to other physical theories, quantum mechanics is special because it has to describe objects that can behave like particles under some conditions and like waves under others. The basic equation of this theory, taken as a postulate, is the Schrödinger equation. It describes changes in time of a certain function, called the wave function, which is related to the probability distribution of finding a system in a specific state. However, the imaginary number i openly appears next to the wave function in the Schrödinger equation (iħ ∂ψ/∂t = Ĥψ).

"For decades, there has been a debate as to whether one can create coherent and complete quantum mechanics with real numbers alone. So, we decided to find quantum states that could be distinguished from each other only by using complex numbers. The decisive moment was the experiment where we created these states and physically checked whether they were distinguishable or not," says Dr. Streltsov, whose research was funded by the Foundation for Polish Science.

The experiment verifying the role of complex numbers in quantum mechanics can be presented in the form of a game played by Alice and Bob with the participation of a master conducting the game. Using a device with lasers and crystals, the game master binds two photons into one of two quantum states, which absolutely require the use of complex numbers to distinguish between them. Then one photon is sent to Alice and the other to Bob. Each of them measures their photon and then communicates with the other to establish any existing correlations.

"Let's assume Alice and Bob's measurement results can only take on the values of 0 or 1. Alice sees a nonsensical sequence of 0s and 1s, as does Bob. However, if they communicate, they can establish links between the relevant measurements. If the game master sends them a correlated state, when one sees a result of 0, so will the other. If they receive an anti-correlated state, when Alice measures 0, Bob will have 1. By mutual agreement, Alice and Bob could distinguish our states, but only if their quantum nature was fundamentally complex," says Dr. Streltsov.

An approach known as quantum resource theory was used for the theoretical description. The experiment itself, with local discrimination between entangled two-photon states, was carried out in the laboratory at Hefei using linear optics techniques. The quantum states prepared by the researchers turned out to be distinguishable, which proves that complex numbers are an integral, indelible part of quantum mechanics.

The achievement of the Polish-Chinese-Canadian team of researchers is of fundamental importance, and it is so profound that it may also translate into new quantum technologies. In particular, research into the role of complex numbers in quantum mechanics can help to better understand the sources of the efficiency of quantum computers, qualitatively new computing machines capable of solving some problems at speeds unattainable by classical computers.

"Operational Resource Theory of Imaginarity" by Kang-Da Wu, Tulja Varun Kondra, Swapan Rana, Carlo Maria Scandolo, Guo-Yong Xiang, Chuan-Feng Li, Guang-Can Guo and Alexander Streltsov, 1 March 2021, Physical Review Letters. DOI: 10.1103/PhysRevLett.126.090401

"Resource theory of imaginarity: Quantification and state conversion" by Kang-Da Wu, Tulja Varun Kondra, Swapan Rana, Carlo Maria Scandolo, Guo-Yong Xiang, Chuan-Feng Li, Guang-Can Guo and Alexander Streltsov, 1 March 2021, Physical Review A. DOI: 10.1103/PhysRevA.103.032401

The Centre for Quantum Optical Technologies at the University of Warsaw (UW) is a unit of the International Research Agendas program implemented by the Foundation for Polish Science from the funds of the Intelligent Development Operational Programme. The seat of the unit is the Centre of New Technologies at the University of Warsaw. The unit conducts research on the use of quantum phenomena, such as quantum superposition or entanglement, in optical technologies. These phenomena have potential applications in communications, where they can ensure the security of data transmission; in imaging, where they help to improve resolution; and in metrology, to increase the accuracy of measurements. The Centre is actively looking for opportunities to cooperate with external entities in order to use the research results in practice.

28 Comments on "Physicists Prove That the Imaginary Part of Quantum Mechanics Really Exists!"

1. You can put i apples on a table as follows. If we say an apple is represented by the complex number a + bi, that is equivalent to an apple with amplitude a and frequency b. We think of an apple as having a constant value, but if, say, you ate it and planted the seeds, and waited until another apple grew, you could say it has a frequency, say of five years. So with an apple of size a and frequency b = 5 years, it makes sense to represent it by a complex number.

2. This is probably because i itself has a repetitive (therefore wavelike) exponential sequence: i⁰ = 1, i¹ = i, i² = -1, i³ = -i, i⁴ = 1, etc.

3. Ignatz Laripu | April 28, 2021 at 7:48 am | Reply
There is absolutely a physical meaning for i. It is a counterclockwise rotation by 90°. Start at (1,0) on the x-y plane. Rotate by 90° twice. You end up at (-1,0). Two rotations are a "multiplication" of imaginary numbers.

4. Imaginary numbers are related to a "measurable, physical quantity." Ask any engineer about real and reactive power in AC circuits.

5. Phil Hasseljian | April 28, 2021 at 11:34 am | Reply
What if we "i"magined there were 3 apples on the table sitting next to 2 real apples: 2 + 3i.

6. The issue most people have is they confuse mathematical terms like real and imaginary with their general or common meaning. In mathematics, real and imaginary (or complex) numbers actually exist and are used to accurately describe actual physical systems. In the mathematical sense, imaginary or complex means the square root of -1 is needed to describe a system.

7. All complex numbers and functions can be equivalently represented by 2×2 real-number matrices. So there is nothing necessarily more "real" about imaginary numbers than there has always been about vectors and tensors.
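Comment 7's claim is easy to check numerically. The sketch below (my addition, not part of the thread) maps a + bi to the 2×2 real matrix [[a, -b], [b, a]] and verifies that the matrix product reproduces the complex product.

// Complex numbers as 2x2 real matrices: a+bi  <->  [[a, -b], [b, a]].
// The matrix product of two such matrices matches the complex product.
#include <complex>
#include <cstdio>

struct M2 { double a, b, c, d; };                     // [[a, b], [c, d]]

M2 toMatrix(std::complex<double> z) {
    return { z.real(), -z.imag(), z.imag(), z.real() };
}

M2 mul(const M2& x, const M2& y) {
    return { x.a * y.a + x.b * y.c, x.a * y.b + x.b * y.d,
             x.c * y.a + x.d * y.c, x.c * y.b + x.d * y.d };
}

int main() {
    std::complex<double> z1(2.0, 3.0), z2(-1.0, 4.0);
    std::complex<double> zp = z1 * z2;                 // complex product: -14 + 5i
    M2 mp = mul(toMatrix(z1), toMatrix(z2));           // matrix product
    std::printf("complex: %+.1f %+.1fi\n", zp.real(), zp.imag());
    std::printf("matrix : %+.1f %+.1fi (read off entries a and c)\n", mp.a, mp.c);
}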
8. Timothy Havel | April 28, 2021 at 12:52 pm | Reply
Sorry to be a party pooper, but there's a pretty good size group of physicists out there (and not a few computer scientists and engineers as well) who've known for a very long time that imaginary numbers are real — and I mean that literally. The trick lies in an extension of standard 3D vector algebra to higher dimensions and even non-Euclidean spaces, known to mathematicians as Clifford algebras, although their aficionados prefer to call them geometric algebras (as Clifford himself did). These algebras are over the real numbers (i.e. they contain only real scalars) but typically contain entities that square to -1 and commute with vectors, just like complex scalars would. The difference is that these geometric imaginary units can be understood as oriented spatial magnitudes, in much the same way that vectors are understood as oriented linear magnitudes. Such geometric interpretations can go a long way towards demystifying quantum mechanics (although they do not, of course, solve the measurement problem). For just one of the many introductions to geometric algebra out there, see

9. While the described experiments are informative, the talk of imaginary numbers being "real-world", and of this being a "first", is the usual PR talk to gain more publicity and hence grants. I saw pop-sci talks about "real" imaginary numbers some 40 years ago, only in the context of electromagnetics.

10. Since the imaginary unit i itself has a repetitive pattern in nature (therefore wavelike) in its exponential sequence: i⁰ = 1, i¹ = i, i² = -1, i³ = -i, i⁴ = 1, etc., it predicts the past, present and future of probable events at the quantum energy states of events or things. Q Sheikh, Jamshedpur, India

11. Of course it all exists, but some of it doesn't matter. 😉

12. Laszlo Meszaros | April 28, 2021 at 6:18 pm | Reply
Harry Potter science in action… (Not understanding what quantum mechanics is all about gives you the freedom to do pseudo-science.)

13. Chad English | April 28, 2021 at 7:24 pm | Reply
Conceptually, I think the problem people have is the use of the word "imaginary", which stems, I think, from the misunderstanding of "negative" numbers. For example, a "deficiency" of 1 is an interpretation of what -1 means. If you have -1 dollars, it means you owe +1 dollars. If a step forward is +1 and a step backward is -1, then -1 is just +1 step in the opposite direction. Or on a number line, to the right is + and to the left is -. Since adding positive leftward numbers is the same as subtracting positive rightward numbers, we use the same symbol for subtraction and "negatives", which are really positives in an opposite metric. We could call rightward counting by 1 as '1R' and leftward counting as '1L' instead of + and -. Where it goes astray is multiplication and the commutative ordering. 5 groups of 3 items = 3 groups of 5 items = 15 items, or in our new naming, 5R x 3R = 3R x 5R = 15R. By analogy, meaning by our desired definition for consistency, 5 groups of -3 items = -3 groups of 5 items = -15 items, or 5R x 3L = 3L x 5R = 15L. In other words, multiplying by a "negative" or "leftward" number acts to mirror across the origin, which is a new property that "positive" or "rightward" numbers don't have. They stay on the rightward side. And hence starting with a leftward (negative) number and multiplying it by a leftward (negative) number mirrors it to be a rightward (positive) number, (-5)x(-5)=25 (or 5L x 5L = 25R). "Negative multiplying" represents a mirror function. Similarly, "negative exponents" define a reciprocal function: x^(-y) = 1/(x^y), or xR^yL = 1/(xR^yR). This new mirroring function of multiplying negatives also adds to the solutions of the square root function: sqrt(25R) = 5R or 5L (5 or -5). But, because of this mirroring function of multiplying leftward numbers (negatives), there are no two rightward or leftward numbers that multiply by themselves to produce a leftward number. So, you can't find a square root of a leftward number within rightward or leftward (positive or negative) numbers. No problem. Just as we did with defining leftward (negative) numbers and their interpretation (opposite direction, owing, deficiency) and their math rules, we can define another new set of numbers called Perpendicular numbers, where 1P x 1P = 1L. Et voilà, we have "found" sqrt(1L) = sqrt(-1) = 1P. The Perpendicular number system has its own self-consistent math rules, the same as rightward and leftward numbers. It has positive Perpendicular numbers and negative Perpendicular numbers, including how to add, subtract, multiply, divide, take exponents and roots, and so forth. They all work consistently in a closed math system. We now have 3 sets of numbers: R, L, and P. You can jump between them via math functions. You can go from R to L by subtracting larger R numbers (3R - 5R = 2L), by adding bigger L numbers (3R + 5L = 2L), or by mirroring through multiplying by an L number (3R x 5L = 15L). You can go from L to R numbers by reversing these functions. You can go from L to P via the sqrt function, 1P = sqrt(1L), aka 1i = sqrt(-1). You can only go from R to P either via an intermediary L, 1P = sqrt(4R - 5R) = sqrt(1L), or by multiplying by a P number, 4R x 2P = 8P (4 x 2i = 8i). "Imaginary" numbers are not imaginary any more than negative numbers are imaginary. You can't have negative things, only positive things in an opposite or mirrored direction, which is how we use negatives. They create a closed and consistent algebra. We've just gotten used to translating what "negative" means in context and accept that negative numbers are therefore "real". Imaginary numbers are similarly "real" in terms of translating them into how they are applied. Going back to normal notation (1P = i): i² = -1; i³ = -i; i⁴ = 1; i⁵ = i; and so on. Multiplying by i rotates a number line by 90 degrees. Complex numbers then represent things that have two independent degrees of freedom. If you go further, quaternions define 4 degrees of freedom, a+bi+cj+dk, where i² = j² = k² = -1, and ixj=k, jxk=i, kxi=j, and reversing the order changes the sign, ergo losing the commutative property. So quaternions could represent time plus 3 perpendicular spatial dimensions, for example, scaled appropriately. Octonions do the same with 8 dimensions. And I'm pretty sure that is it for closed algebra systems: reals (positive, negative), complex (real, imaginary), quaternions, and octonions, or 1, 2, 4, 8. No other systems work. They all have meaning and can represent something, so long as the mathematical functions properly represent the things and we can interpret what they mean. E.g., if you walk the perimeter of a rectangle, 5i x 3j = 15k means you define the plane of the rectangle with its normal in the k-direction, perpendicular to i and j, using the "right-hand" rule. Reversing direction, 3j x 5i = -15k = 15(-k), a rectangle with its normal in the opposite direction. Hence quaternions can be used to represent both the size (15) and the directionality (k) of a surface. Hence they can be "real".

14. Timothy Havel's reply concerning Clifford (geometric) algebras is correct, in my opinion. The actual paper, I believe after reading it, is more concerned with proving the authors' "resource theory of imaginarity". All of this is simply pointing to directed magnitudes in higher dimensions… nothing unreal exists.

15. Alex Stewart | April 28, 2021 at 9:43 pm | Reply
Since they aren't even sure of the standard model anymore, aren't these physicists trying to describe, using complex numbers, "things" they are not even sure of in the first place? Looking for funds, perhaps?
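Picking up the rotation picture from comments 3 and 13, here is a four-line check (my addition, not part of the thread) that repeated multiplication by i walks the point (1,0) counterclockwise around the plane, with two turns landing on (-1,0) — the geometric content of i² = -1.

// i as a 90-degree rotation: repeated multiplication by i walks (1,0) around the plane.
#include <complex>
#include <cstdio>

int main() {
    std::complex<double> i(0.0, 1.0);
    std::complex<double> z(1.0, 0.0);                 // start at (1, 0)
    for (int k = 1; k <= 4; ++k) {
        z *= i;                                       // one 90-degree counterclockwise turn
        std::printf("after %d turn(s): (%+.0f, %+.0f)\n", k, z.real(), z.imag());
    }
    // Two turns give (-1, 0); four turns return to (1, 0).
}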
16. Yeah, I should note that as an electrical engineer, complex numbers literally describe our entire electric grid. Ever hear of 3-phase power? Each line has the same amplitude (voltage) and frequency (50/60 Hz), but they are out of phase with each other in complex-number terms. Reactive power and power factors are very real. This is actually quite common knowledge among electrical and mechanical engineers.

17. Of course quantum mechanics is real. Our consciousness resides nanoseconds into a future that is governed by an infinite number of possibilities. From moment to moment we live on an infinitesimally small string between chaos and order. It is truly a beautiful thing to wander a world where imagination is endless and we realize numbers don't matter without words. Now, if you number lovers admit, "A negative times a negative does not equal a positive; I don't care if you turn left and look backwards. It's no different than saying, 'Two wrongs don't make a right'", I'll respect your philosophy.

18. Sankaravelayudhan Nandakumar | April 28, 2021 at 10:10 pm | Reply
The very basics of complex numbers deal with the zeta function that oscillates between real and imaginary combinations of horizontal and vertical spin directions, as the sum reflects zero half, and also between +1 and -1 rotating vectors that converge and diverge along an asymmetric regularity. This is leading to a converging string theory. The number of times any infrared wave oscillates in a specific superconductive graphene phase is itself a typical example, in a 1.1-degree conical spin that seems to give specific superconductive outputs. It has to deal with how pressure surrounding a chemical bond is integrated and differentiated in between, enhancing a superconductive effect. This really means an imaginary part is really feeding a real part, by experimenting with superconductivity as electron flow between parallel plates contributing towards asymmetric graphics.

19. Michael Gonzales | April 29, 2021 at 1:16 am | Reply
I'm stuck on this: 11÷10−1×11÷10=0. But if you multiply by 0, you get 1.1; but if you add before the ×0, you get what you add. That still doesn't explain where the .1 disappears to. It would be imaginary. I'm not a big brain. I just had this come to me as a big flaw in the number system itself. To me, if this cannot be solved, or it's some black hole in the matrix, 0 becomes the source of everything and nothing. It also means nothing is actually real. And with the quantum mechanics of things, nothing is 0 and 0 is something. I'm sure the order of operations has something to do with it, but if you don't multiply by 0 and multiply by any other #, the .1 is involved in the final answer. I found this while I was messing around with time. 365.25/364.25−1 second is 0.002745367. This is missing time. Because 365.25/364.25−1×365.25/364.25=0. And that's where the same problem happens. Where did the 0.002745367 go? It's still there if you multiply, but it disappears if you add. And it doesn't come back either. When you take 1001/1000, basically just grabbing the binary by the balls as it dangles from the ASCII, all those 1s and 0s begin to disappear forever. It's like the matrix mind, and anyone linking their brain to it begins to lose their minds. Was their mind even real to begin with, or was their entire ego state of being just a dream? Seems as though even the quantum state can be erased too. Making what is what is, and not just a bunch of whack jobs with giant heads in the sky thinking they can solve reality.
I'm not sure who said… everything is a theory. Just because the same thing always happens, it doesn't mean it will always happen. At some point it can change. Everything is changing, alright. The line has been drawn… which side you're on all depends on how well you understand the unknown unknown. That is you. Numbers didn't create you. You created numbers. Before nothing there was you, right here, right now. So all the effort to construct an idea of our quantum physics is a moot point. I can smash my hand, and the bone fragments, blood and skin can be called something else. It still doesn't make it into a hand. No matter what I do to piece it together, the body is what knows how to fix it. The same goes for everything. Smashing atoms together doesn't create. It's not necessary unless you're trying to kill yourselves. It's probably why it's so much fun to dig in the sand. It's easy to destroy. It's hard to create. I say all this because I am aware of what these crazy scientists working to destroy humanity are up to. While the majority of the world is lost in a trance, believing the same ideas that lead them away from themselves, there are a very few of us in flesh who have to stand up for the many. With the backing of our ancestors, I like to make a fool of myself at times. But… the point is… maybe this simple 11/10−1×11/10 explains how nothing can exist if nothing cannot exist. If nothing cannot exist, nothing can exist. If you got dizzy, chills, uneasy, or numb… you're welcome. If you didn't… you will…

• JASON Dean BENDER | April 29, 2021 at 10:57 am | Reply
Dude, you are just doing math wrong. 11/10−1 = 0.1. When you say 11/10−1×11/10, it is (11/10)−(1×11/10), which is (11/10)−(11/10), which is 1.1−1.1, which is 0. That is not the same thing as (11/10−1)×(11/10), which is 0.1×1.1, which is 0.11.

20. A solution to P vs NP resolves geometry and puts trigonometry in the dog house. i can be expressed geometrically, and still there is an even greater dilemma when you solve the initial stage of quantum/relativistic reconciliation. The moral of the story? We are already WAY past the level of technology we need to fix Earth. What are they still looking for? The ability to cheat death? The ability to wage war against god directly? Time to beam back down to Earth before these "physicists" cause a massive catastrophe that ends all life on Earth.

21. In electromagnetic physics, "i" is used to represent phase shifts. This makes the calculations appear simpler than when a sum of sine and cosine terms is used. So "i" is just a mathematical device. It has nothing to do with whether the result is "real."

22. Interesting too, because in optical fiber the skin-depth penetration into the cladding of light propagating in the core is also modeled with a mathematically "imaginary" number. However, the energy in that little bit is enough to exploit its real effects. I used it to build and patent a fiber-optic liquid level sensor, and demonstrated it to GM as a technology demonstration for a gas level sensor.

23. i is no more "real" than any other mathematical concept. It's just a human construct used as a tool to facilitate calculations. All you can say is that there are quantum states that are indistinguishable in our mathematical system without using complex numbers. That's got nothing to do with reality.

24. Etherage Ingram | April 30, 2021 at 11:03 am | Reply
And what is the mathematical probability of that being true? Why, 50-50 of course. Either it's true or not.
Could we change that statistic with a half truth gleaned utilizing imaginary values? Sure. You think science will answer ALL the questions in the Universe? We need the mystery of the unknown to retain our awe. Our fascination and awe drive our imaginations.

25. BibhutibhusanPatel | June 6, 2021 at 2:40 am | Reply
A complex number consists of a real number with an imaginary number added to it, the latter containing the +ve square root of −1 as a factor. Thus complex numbers are involved in many calculations of physics, like in radiant energy and so on. So QC can come in for calculations involving complex numbers in some problems related to the stars. Often such calculations need to compute problems containing e to the power ix.

26. BibhutibhusanPatel | June 6, 2021 at 3:14 am | Reply
Often problems in star radiation involving terms like e to the power ix can be solved by QC. These are examples of complex numbers' involvement in physics found in practice.

27. A complex wave function psi multiplied by its complex conjugate psi-star IS REAL. That's what is used in quantum mechanics.
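As a postscript tying back to the article's Alice-and-Bob game: the correlation bookkeeping it describes is easy to simulate. The toy sketch below (my addition) models only the correlated versus anti-correlated statistics with classical coin flips; the article's actual result, that the two prepared states cannot be told apart without complex numbers, requires the real entangled photon states and is not captured here.

// Toy model of the Alice/Bob game: sample a perfectly correlated or
// anti-correlated pair source and estimate the correlation from the records.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);
    for (bool anticorrelated : {false, true}) {
        int agree = 0;
        const int trials = 100000;
        for (int t = 0; t < trials; ++t) {
            int alice = coin(rng);                          // Alice's 0/1 outcome
            int bob = anticorrelated ? 1 - alice : alice;   // Bob's outcome fixed by the source
            agree += (alice == bob);
        }
        std::printf("%s source: agreement rate %.3f\n",
                    anticorrelated ? "anti-correlated" : "correlated",
                    static_cast<double>(agree) / trials);
    }
}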
The high-frequency Floquet theory (HFFT) of laser-atom interactions solves the space-translated version of the Schrödinger equation by an iterative procedure, leading to corrections of increasing order in the inverse photon energy. The lowest-order approximation (the high-frequency limit) has often been evaluated before, but its accuracy at finite frequencies has not been established. To explore this issue we have computed the corrections yielded by the first-order iteration to the energy levels and ionization rates of a one-dimensional atomic model with an attractive Gaussian potential. We have then compared, at frequencies above the field-free ionization potential |W0|, the HFFT results with those of a full Floquet calculation. We show that the agreement is substantially improved by the inclusion of the HFFT corrective terms. The agreement is good at all intensities for photon energies larger than several times |W0|. Even when the photon energy is only marginally larger than |W0|, the discrepancies vanish at sufficiently high intensities.
The Laboratory for Scientific Computing at Physics at Work 2018

Laboratory for Scientific Computing

What will students see at your exhibit?
Scientific computing can be used to study anything from the behaviour of atoms in a crystal, to the motion of hurricanes, to the mechanics of black holes. When scientists study the natural world they do experiments and make observations. From these observations they come up with mathematical equations, or models, which describe physical phenomena. These models can be anything from classical models like Newton's laws of motion to quantum mechanical models like the Schrödinger equation. The job of scientific computing is to solve these equations on a computer and look at how the models behave. This involves a mixture of physics, chemistry, mathematics, and computer science. Students attending our exhibit will therefore be introduced to the predictive power of computer simulations. Students will see examples of how, where, and why computer simulation plays an important role in our everyday lives, from prediction of the weather to fluid flow over aircraft wings. Students can also participate in an interactive modelling session.

What physics is used?
Fluid dynamics, solid mechanics, heat transfer.

Why is it useful?
High-performance computing and computer modelling are now commonly employed in almost all scientific disciplines (and many non-scientific ones too). It is extremely valuable, as it allows scientists and engineers to probe the behaviour of their systems and devices in ways that cannot be explored experimentally, and also much more quickly and at comparatively low cost.

[Figure: Predicted contours of density during a Taylor impact test.]
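For a flavor of what "solving these equations on a computer" means in the simplest case, here is a minimal sketch (my addition, not part of the exhibit) of an explicit finite-difference solver for the one-dimensional heat equation u_t = α u_xx, the simplest of the heat-transfer models mentioned above. The grid size, time step and initial condition are arbitrary illustrative choices.

// Explicit finite-difference solution of the 1D heat equation u_t = alpha * u_xx
// on [0,1] with fixed-temperature ends; stable because alpha*dt/dx^2 <= 1/2.
#include <cstdio>
#include <vector>

int main() {
    const int n = 21;                                 // grid points
    const double alpha = 1.0, dx = 1.0 / (n - 1);
    const double dt = 0.4 * dx * dx / alpha;          // satisfies the stability bound
    std::vector<double> u(n, 0.0), next(n, 0.0);
    u[n / 2] = 1.0;                                   // initial hot spot in the middle
    for (int step = 0; step < 200; ++step) {
        for (int j = 1; j < n - 1; ++j)               // update interior points only
            next[j] = u[j] + alpha * dt / (dx * dx) * (u[j - 1] - 2.0 * u[j] + u[j + 1]);
        u.swap(next);                                 // endpoints stay at 0 (fixed ends)
    }
    for (int j = 0; j < n; ++j) std::printf("%.3f ", u[j]);  // the diffused profile
    std::printf("\n");
}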
The Victor2.0 library (Virtual Construction Toolkit for Proteins) is an open-source project dedicated to providing a C++ implementation of tools for analyzing and manipulating protein structures. Victor is composed of four main modules:

• Biopool - BIOPolymer Object Oriented Library. Generates the protein object and provides useful methods to manipulate the structure.
• Align - ALIGNment generation and analysis.
• Energy - A library to calculate statistical potentials from protein structures.
• Lobo - LOop Build-up and Optimization. Ab initio prediction of missing loop conformations in protein models.

The Biopool class implementation follows the composite design pattern; for a complete description of the class hierarchy we recommend the Doxygen documentation. Without going into implementation details, a Protein object is just a container for vectors representing chains. Each vector has two elements: the Spacer and the LigandSet. The Spacer is the container for AminoAcid objects, whereas the LigandSet is a container for all other molecules and ions, including DNA/RNA chains. Ultimately all molecules, both in the Spacer and in the LigandSet, are collections of Atom objects. The main feature of Biopool is that each AminoAcid object in the Spacer is connected to its neighbours by means of one rotational vector plus one translational vector. This implementation makes modification of the protein structure easy, and many functions were implemented to modify, perturb and transform the relative residue positions efficiently through the rotation and translation vectors. For more detail on how to use it, see the Features section.

The package comes with several options. The necessary data files (e.g. substitution matrices) are provided. The most important feature of the package is the modular object-oriented design, which should allow a moderately experienced C++ programmer to rapidly implement and test new features for sequence alignment. Inside this package you can use different weighting schemes, scoring functions, ways to penalize gaps, and typologies of structural information. The Align library was designed to be modular and easy to expand. There are four basic components which are needed to use the alignment methods:

• AlignmentData - Stores information on sequence (SequenceData) and, when needed, secondary structure (SecSequenceData).
• ScoringScheme - Stores information on how a single position shall be scored in the alignment (it requires both AlignmentData and Blosum objects to be initialized). Possible specializations of this class are:
  • ScoringS2S - sequence-to-sequence
  • ScoringP2S - profile-to-sequence
  • ScoringP2P - profile-to-profile
• Align - The alignment algorithm. It requires both AlignmentData and ScoringScheme objects, and can be specialized in:
  • SWAlign - local (Smith-Waterman)
  • NWAlign - global (Needleman-Wunsch; a generic sketch of the algorithm follows below)
  • FSAlign - glocal/overlap (Free-Shift)
• Blosum - The substitution matrix.

If P2S or P2P scoring is used, the class Profile stores the necessary information to generate the profile from a multiple sequence alignment.
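To make the component names concrete, the following is a generic, self-contained sketch of the global dynamic-programming alignment that NWAlign implements. It uses a flat match/mismatch score and a linear gap penalty; Victor's real classes would supply a Blosum matrix through a ScoringScheme instead, so none of this is the library's actual API.

// Generic Needleman-Wunsch global alignment scoring (the algorithm behind NWAlign).
// Flat scores: match +1, mismatch -1, gap -2 (illustrative choices only).
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    std::string a = "GATTACA", b = "GCATGCU";
    const int match = 1, mismatch = -1, gap = -2;
    const size_t n = a.size(), m = b.size();
    std::vector<std::vector<int>> s(n + 1, std::vector<int>(m + 1, 0));
    for (size_t i = 0; i <= n; ++i) s[i][0] = static_cast<int>(i) * gap;  // leading gaps in b
    for (size_t j = 0; j <= m; ++j) s[0][j] = static_cast<int>(j) * gap;  // leading gaps in a
    for (size_t i = 1; i <= n; ++i)
        for (size_t j = 1; j <= m; ++j) {
            int diag = s[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? match : mismatch);
            s[i][j] = std::max({diag, s[i - 1][j] + gap, s[i][j - 1] + gap});
        }
    std::printf("global alignment score: %d\n", s[n][m]);
}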
Two advanced options, which may be useful in certain circumstances, are supported by the software:

1. ReverseScoring - Allows the estimation of the statistical significance of the raw alignment score, in the form of a Z-score, by testing it against an ensemble of alignments based on the reversed sequence.
2. Suboptimal alignments - Rather than generating a single solution, the user may decide on a number of different, alternative, suboptimal alignments to be generated.

[Class diagram: Align classes.png]

Energy functions are used in a variety of roles in protein modelling. An energy function precise enough to always discriminate the native protein structure from all possible decoys would not only simplify the protein structure prediction problem considerably; it would also increase our understanding of the protein folding process itself. If feasible, one would like to use quantum mechanical models, the most detailed representation, to calculate the energy of a protein. This can theoretically be done by solving the Schrödinger equation. The equation can be solved exactly for the hydrogen atom, but is no longer trivial for three or more particles. In recent years it has become possible to approximately solve the Schrödinger equation for systems of up to a hundred atoms with the Hartree-Fock or self-consistent field approximations. Their main idea is that the many-body interactions are reduced to several two-body interactions.

Energy functions are important to all aspects of protein structure prediction, as they give a measure of confidence for optimization. An ideal energy function would also explain the process of protein folding. The most detailed way to calculate energies is with quantum mechanical methods. These are, to date, still overly time consuming and impractical. Two alternative classes of functions have been developed: force fields and knowledge-based potentials. Force fields (e.g. AMBER) are empirical models approximating the energy of a protein with bonded and non-bonded interactions, attempting to describe all contributions to the total energy. They tend to be very detailed and are prone to yield many erroneous local minima. An alternative are knowledge-based potentials, where the "energy" is derived from the probability of a structure being similar to interaction patterns found in the database of known structures. This approach is very popular for fold recognition, as it produces a smoother "global" energy surface, allowing the detection of a general trend. Abstraction levels for knowledge-based potentials vary greatly, and several functional forms have been proposed.

The energy functions provided in the package are meant to be usable in optimization procedures. Their main feature is their applicability in the context of the protein classes implemented in the package: it should be possible to invoke the energy calculation with any structure from all programs. At the same time, the parameters of the energy models had to be stored externally to allow their rapid modification. With these considerations in mind, the package Energy was designed to collect the classes and programs dealing with energy calculation. The main design decision was to use the "strategy" design pattern from Gamma et al. The abstract class Potential was defined to provide a common interface for energy calculation. It contains the necessary methods to load the energy parameters during initialization of an object. Computing the energy value for objects of the Atom and Spacer classes, as well as a combination of both, is supported.
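The "strategy" design described above might look roughly like the following sketch. The names Potential and Spacer are taken from the text; every method name and the toy torsion term are invented for illustration and should not be read as Victor's real interface.

// Sketch of the "strategy" pattern for energy calculation: one abstract
// interface, interchangeable concrete potentials, externally loaded parameters.
#include <cstdio>
#include <string>
#include <vector>

struct Spacer { std::vector<double> phi, psi; };      // minimal stand-in for the real class

class Potential {                                     // the common interface (strategy)
public:
    virtual ~Potential() = default;
    virtual void loadParameters(const std::string& file) = 0;  // hypothetical method name
    virtual double energy(const Spacer& s) const = 0;          // hypothetical method name
};

class ToyTorsionPotential : public Potential {        // one interchangeable strategy
    double weight = 0.5;
public:
    void loadParameters(const std::string& file) override {
        std::printf("(would parse parameters from %s here)\n", file.c_str());
    }
    double energy(const Spacer& s) const override {
        double e = 0.0;                               // toy score over torsion angles
        for (std::size_t i = 0; i < s.phi.size(); ++i)
            e += weight * (s.phi[i] * s.phi[i] + s.psi[i] * s.psi[i]);
        return e;
    }
};

int main() {
    ToyTorsionPotential torsion;
    torsion.loadParameters("torsion.par");            // hypothetical parameter file
    const Potential& pot = torsion;                   // callers see only the interface
    Spacer s{{-1.0, -0.9}, {2.4, 2.3}};
    std::printf("energy = %.3f\n", pot.energy(s));
}

The point of the pattern is the last few lines: calling code holds only a Potential reference, so concrete energy models can be swapped without touching the callers.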
For more detail on how to use Energy, see the Features section.

Current database methods, using solely experimentally determined loop fragments, do not cover all possible loop conformations, especially for longer fragments. On the other hand, it is not feasible to use a combinatorial search of all possible torsion angle combinations. For an algorithm to be efficient, a compromise has to be found. One improvement in ab initio loop modelling is the use of look-up tables (LUTs) to avoid the repetitive calculation of loop fragments. LUTs can be generated once and stored, only requiring loading during loop modelling. Using a set of LUTs reduces the computational time significantly.

The next problem is how best to explore the conformational space. Especially for longer loops, it is useful to generate a set of different candidate loops and to exclude improbable ones by ranking. The method should therefore be able to select different loops by global exploration of the conformational space, independently of starting conditions. Methods building the loop stepwise from one anchor residue to the other bias the solutions depending on choices made for the conformation of the first few residues. Rather, a global approach to the optimization is required. This criterion is fulfilled by the divide & conquer algorithm, which is recursively described by the following steps (a code sketch of the recursion appears at the end of this section):

1. If start = end, compute the result;
2. else use the algorithm for:
   (a) start to end/2
   (b) end/2 to end;
3. combine the partial solutions into the full result.

Applied to loop modelling, the basic idea of a divide & conquer approach is to divide the loop into two segments of half the original length, choosing a good central position. The segments can be recursively divided and transformed until the problem is small enough to be solved analytically (conquered). The positions of main-chain atoms for segments of a single amino acid can be calculated analytically, using the vector representation. Longer loop segments can be stored in LUTs and their coordinates extracted by geometrically transforming the coordinates for single amino acids back into the context of the initial problem. To this end we need to define an unambiguous way to represent the conformation of any given residue along the chain, and a set of operations to concatenate and decompose loop segments. For more detail on how to use Lobo, see the Features section.
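Here is the promised bare-bones sketch of the recursion (my illustration): the Segment type is a placeholder string rather than real main-chain geometry, "combine" is mere concatenation, and indexing segments by length alone only works because all toy segments are identical, but the divide & conquer structure and the LUT reuse follow the steps above.

// Divide & conquer loop build-up with a look-up table (LUT) of solved segments.
// Segment and the combine step are placeholders for real fragment geometry.
#include <cstdio>
#include <map>
#include <string>

using Segment = std::string;                          // placeholder for fragment coordinates
std::map<int, Segment> lut;                           // LUT: precomputed segments by length

Segment build(int start, int end) {
    int len = end - start;
    if (len == 1) return "aa";                        // single residue: solved analytically
    auto hit = lut.find(len);
    if (hit != lut.end()) return hit->second;         // reuse a stored segment
    int mid = start + len / 2;                        // choose a central split position
    Segment left = build(start, mid);                 // conquer each half recursively...
    Segment right = build(mid, end);
    Segment whole = "(" + left + "+" + right + ")";   // ...then combine the partial results
    lut[len] = whole;                                 // store for later reuse
    return whole;
}

int main() {
    std::printf("loop of 8 residues: %s\n", build(0, 8).c_str());
}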
The Duality Duets
~ A Novel About The Joys of Exploration And Discovery ~

This is a work of fiction inspired by real experiences of my life. It includes passages of real events, people, places, etc. as commentary, but the story itself is fabricated from my imagination, in which all events, places and people–living, dead, or anywhere in between–are entirely fictional. The real stuff is clearly indicated as such at the beginning of each chapter, followed by nothing but fiction until the beginning of the next chapter. Why? I'm not sure. It just seemed the best way to get this story written.

NOTE: Equations and other illustrated formulae or technical graphics presented in this novel are currently in use in the real world. In this work of fiction, they serve only to introduce and reinforce chapter imagery related to science, technology, engineering, arts, and mathematics. For those interested, search them up on the internet using the associated descriptive text provided beside each equation, formula, or technical graphic in the story. It's fascinating stuff even if the math is not understood. And don't blame yourself while reading this novel for not knowing every detail of the math. I certainly don't. Also, they just look so cool!

~ The True Stuff ~

When I really began getting into playing music, I enrolled in Junior High Orchestra playing cello and loved it. Symphonic music sent chills racing up and down my spine so intense they almost made me pass out. I was also starting to love playing guitar a lot. So much so that I frequently fell into a trance while playing. One day I was sitting on the front porch of our house playing guitar for a next door neighbor and fell into a deep trance, only to be pulled out of it when he started gently shaking me by the shoulder saying "Hey! Hey, man! You're drooling on your guitar!", which I was. But he was cool about it, only commenting on how into it I was instead of ridiculing me for being a slobbering idiot.

While living on a pig farm near the southern tip of Lake Michigan, I signed up for Junior/Senior Band after meeting the band instructor during summer orientation. He appeared to be a very staid fellow at first sight, but after talking with him a few minutes it became obvious he was anything but. Besides the usual concert and marching band stuff we did, he organized a few of us into a dixieland band with me strumming banjo, and later that winter three of us teamed up to form a bluegrass band. Near the end of the school year, he loaded us all up into a school bus and we drove up to Northern Illinois University in DeKalb to see Maynard Ferguson perform on his quarter-tone trumpet with a pianist playing a quarter-tone piano. That music was spacey enough, but then Maynard switched on a new device called a digital delay processor and started playing along with himself. Blew my freaking mind, and I began dreaming of doing something similar playing guitar. Having played cello, I imagined a fretless guitar played through a delay processor would be just the instrument to achieve that goal.

I still drop into a trance when playing music–especially when composing a new piece, achieving an intense feeling of being transported far away from all of the bullshit society seems hellbent on stirring up and dealing out for no good reason. Recent casual study of quantum wave theory set me to speculating that these trances may be more than a mere dream state and I'm somehow tapping into a powerful means of telepathation. Yep, that's a made-up word.
~The Made-Up Stuff ~

Momentum of a photon. Energy of a photon. Momentum of a photon in terms of relativistic mass.

Kuel always kept these three simple equations scribbled on sticky notes attached to the top edge of his quantcomp monitor. On the bottom edge he kept another.

Heisenberg's Uncertainty Principle.

Key concepts essential to understanding his invention: the Quantum Contrapuntal Interspatial Synthesizer (QISS for short, but in no way a contemporary reference to that awful rock band of the 20th century, KISS).

QISS. A revolutionary musical instrument. QISS was now Kuel's focus in life. Also a starship of sorts. One without hulls, engines, life support, armor, shields, weaponry, galley, head or captain's chair. Instead of all of that useless crap, it leveraged wave-particle duality to find and open paths Kuel could follow in an instant to any place and/or time he wished to go. A high-energy physics instrument for instantaneously traveling vast distances and timespans within any universe and for traversing across multiverse brane boundaries. Something no one else had been able to achieve to date, even in the most powerful qwarpships. With QISS, Kuel was the ultimate traveler of space and time.

QISS was his secret too. No one else knew of its existence. Well, almost no one else. Staring at the sticky notes on the monitor, he said a silent goodbye to them, to his trusty quantum computer, to his home and to everything and everyone he had ever known. He was about to become a hermit of the strangest sort–always keeping company and secrets of QISS with no one except himselves.

After building QISS, he had understood theoretically what to expect, but theory and practice are frequently two different beasts, especially in applications of bleeding-edge high-energy physics. Hoping for nothing and believing in less as he fired it up and began playing the instrument for the first time, he found–as expected–that he was able to sense dualities in musical patterns produced by QISS correlating to dualities of waves and particles, and that he was actually able to manipulate those physical dualities through nuanced dualities of musical expression. As waves and particles became one at the crest of musically orgasmic waves, he was instantly quantum teleported to a parallel universe where another musician/physicist was composing for the first time on a QISS of his own invention. It was him₂. Not a doppelganger, but actually him in the other universe to which he had just successfully Qported.

Upon meeting, no time or energy was wasted on verbal introductions as they immediately set about collaborating in their music. Conversing exclusively in tonalities and resonance durations woven instinctively through actions applied to their instruments, they soon discovered compositional elements which allowed them to also cross brane world boundaries into parallel universes together. Upon conclusion of that first collaboration, they uttered a single exclamatory word of mutual satisfaction.

Now they meet themselves in all of the universes they visit to collaborate, join forces and continue on their journey exploring the multiverse through quantumusical compositions which they call The Duality Duets. Because, after all, they are but one and the other. But because they are both one and many too, all the same and yet singularly varied, their duets become much more than that. Through telepathation, they become–in deed (and indeed)–quantusymphonic composers of the cosmos.
~ The True Stuff ~

I took partial differential equations in college and somehow passed the course, although I cannot claim to have achieved any appreciable level of maturity in mathematics. Certainly not enough maturity to delve into mathematical formulations of quantum mechanics. So as I try to at least get a layman's grasp on the subject of quantum wave theory, I look at things like Schrödinger's equation, which describes changes over time in physical systems in which quantum effects like wave-particle duality are significant, and try to imagine visually what they mean through Gedankenexperimente (thought experiments) the way Albert Einstein liked to do. When playing a musical instrument I do something similar, rarely ever visualizing musical scores representing the music I'm playing in notes sprinkled over staves governed by clefs and signatures of key and time, instead visualizing the sounds of a musical piece in four-dimensional flowing, interacting waves of multiple colors. Sometimes mere visualizations aren't enough, though, and other senses come into play, including the sense of motion. Thinking of Euclidean n-space and n-vectors, the notion of the n-dimensionality of music crosses my mind. Four dimensions are pretty easy for my mind to handle in most thought experiments, but when more dimensions are considered, my mind balks and falters, wanting to take a break to kick back with a bowl of sugary cereal to watch a good Saturday Morning TV cartoon. Entanglement...

~The Made-Up Stuff ~

Schrödinger equation. Feynman derivation of the wave equation in three dimensions. Dirac equation - relativistic wave.

Wave-particle duality... spatial and temporal frequency transformations... relativistic subatomic particle movement... a r r g h E N O U G H !

Kuel hated it when he dreamed in equations and solutions, always waking in a cold sweat. Much more preferable to don his augrealrig and mesh with appropriate community knowledge clouds for such exploratory thinking expeditions. Dream learning was iffy, at best. Learning expeditions enmeshed with CKCs through the ARRig he had received at age two–as every child did on their second birthday–never were. Enmeshment learning was a lot more fun too.

Swinging vertical in his suspensorbed, he levitated for a moment to yawn, stretch and scratch. He did not totally discount the damned dream, having learned to trust his subconscious mind's unstructured wanderings in the realm of mathematics as much as his waking mind's methodical wonderings in it. He had solved some hairy damned math problems while dreaming of them entirely intuitively, especially some particularly vexing Fourier transformations of n-dimensional quantum wave forms. There were only a baker's dozen of CKCs in the mesh (that he was aware of and had access to) which could assist him with such transformations via quanputational modeling, and they could not hold a candle to the speed and efficiency of a deeply intuitive dream triggered in the constantly running supercomputer situated between his ears.

A quick visit to the head and a long, hot shower set him to reflecting on his first days with his ARRig. His parents had presented it to him with marked nonchalance along with other birthday gifts. A few of his neighborhood friends had already received theirs and had told him about them.
After the cruelty of compulsory education and all of its horrors–not the least of which included teacher bias–had been outlawed, no child needed prodding to don their ARRig and begin the astounding journey of discovery only self-directed learning expeditions into the world's CKCs could spark. Thanking his parents for the other birthday gifts, Kuel recalled ignoring them and eagerly donning his ARRig to begin his own learning journey. Within a matter of weeks, music had become central to his learning experiences and he began requesting access to various musical instruments. CKC ArtifIntAdmin had checked his learning stats and had approved acquisition of the instruments, which were delivered within a matter of a few days. Kuel had progressed rapidly, learning not only how to play the instruments but the basics of music theory as well as practice and performance technique, and had then started exploring the mathematical foundations of music. By age four, he had settled into his professional niche entirely of his own, self-directed learning accord–a budding PhysiMusician determined to positively change the world with his music.

Drying off with an old-fashioned terrycloth towel and starting the autoshave, he chuckled at his childish thinking patterns. "Change the world, lol." He hoped what he and himselves were now doing across universes would bring about positive change, because there apparently wasn't anyone or anything standing in their way as they composed the music of the cosmos.

~ The True Stuff ~

In the late 1990s I began tinkering with multitrack recording using a Roland VS-880 Multitrack Digital Recording Workstation and CD Burner, producing thirty original compositions and songs with it before the turn of the century. It wasn't long before I decided to delve into the world of audio spectral analysis and manipulation of my recordings using a software application called Sound Forge. Especially useful was its Fast Fourier Transform (FFT) feature, which I used to analyze and fix the inevitable glitches introduced by the crappy Wintel personal computing hardware and OS I was using at the time. Prior to getting my hands on the VS-880 and Sound Forge, I could only dream of building a home recording studio because the equipment costs were so enormous. With this set of digital audio recording and analysis tools I spent every minute I could on nights and weekends recording and mastering original musical works I had been composing for years prior to the emergence of such affordable home studio toolkits. And as this happy hobby time progressed through the end of the 20th century into the 21st, a lot of the mathematics I had struggled to learn in the cold isolation of the infuriating textbook-to-test method of study employed in college coursework began to reveal itself clearly and meaningfully in practical applications which were of specific interest to me. This was when it finally dawned on me that project-oriented learning was the best way for me to gain and expand knowledge in any subject matter.

~The Made-Up Stuff ~

Fourier Transform in terms of position space. Fourier Transform in terms of momentum space.

Waiting for the QISS to power up, initialize and run through its built-in tests, Kuel composed a farewell letter to his surviving family members and sent it off via delaymail to arrive in their inboxes in a couple of days.
Eventually, he would travel back in time to tell those already dead about the journey upon which he was about to embark, especially his maternal grandparents, who had played a key role fostering the development of his keen interest in music by replacing his inexpensive CKC-provided starter model musical instruments with professional quality models costing a great deal more. They did this five years before he would begin earning his own income at age ten–a gift of generosity he had only been able to thank them for by performing professionally with the instruments and always making sure they had front row seats at every recital and concert they were able to attend before they were killed in the NeoCon attacks.

The QISS beeped softly after power-up and its built-in tests had completed. Kuel inserted its binaural mics and let them attune and adjust to his especially complex ear canal topology. He wondered sometimes whether, had he been born with more normal ear canals, he would ever have taken the course of study and profession he had chosen to pursue, or ever invented the QISS. Doctors examining them had admitted they were particularly torturous but scoffed at the idea they enhanced his musical abilities; but then, doctors were consummate scoffers in regard to patients' intuitive notions of their physiology–a very limiting, arrogant form of bigotry in his opinion.

Picking up the QISS and plucking at its string clusters, he allowed it a few more seconds to automatically adjust gains as he prepared for "takeoff" by silently thanking everyone and everything which had enabled him to reach this key moment in time and space. Then he played in a style of improvisation well suited to inducing telepathation, through ultra-fast Fourier transformations of aural patterns he performed into relativistic position and momentum space quantum wave forms. The effect was mesmerizing, and as wave/particle dualities emerged he could feel them through the feedback mechanism of the QISS as a deep thrumming which resonated throughout his body, very much like a heavy-lift quantpter's sonic effects on huge volumes of air felt in flesh and bone as it approached at low speed and altitude.

A quick tweak of QISS's quantum compass calibration to adjust to the planet's constantly shifting magnetic field, a subliminal flash of light and burp of atmospheric disturbance, and he was there, looking at himself holding a QISS looking back at him. Kuel₂ smiled and nodded at him, and without further delay they began composing their first Duality Duet, aiming to travel backward in time to visit with his grandparents and grandparents₂. For this trip through time, the dueling QISSs required significantly larger amounts of power and drew it directly from dark energy sources detected and drawn upon as the newly composed duet's form solidified. No one else could hear the duet as Kuel and Kuel₂ composed and performed it on the fly, but their QISSs recorded every musical element of it for future reference and reuse. The two Kuels wondered in tandem if someday the recordings might end up in a CKC of their own design and construction for future generations of PhysiMusicians to appreciate and learn from. They grinned at each other, knowing that goal would someday become a project they would enjoy working on together. Their QISSs pushed spent energy as dark matter out exhaust ports to maintain the dark energy/matter balance in their respective regions of the universe pair they were now traversing through time in.
It would not do to allow any imbalance to develop from their quantuphysical activities. The outcome of such carelessness would bring an untimely end to themselves and to everything else in existence. Their mission was the positive creation of possibilities for others, not total destruction in an abrupt, unannounced collapse of time and space.

For the two Kuels, the trip was instantaneous, and they each appeared before their grandparents with a soft buffeting of air expanding outward about their point of arrival that lightly disturbed the hair of their beloved lost relatives. Speaking as one to grandparents and grandparents₂ in each of their respective universes, they told them of what and how they had accomplished their goal in life, returning through time and space to tell them how much they appreciated their generosity and devotion to the life and happiness of their grandson and grandson₂.

"It would not have happened without your influences," Kuel told them. "Not just the pro-level instruments, but your patient, deliberate knowledge sharing too."

His grandmother nodded. "So you were listening. Paying attention," she complimented with a warm smile.

A puzzled expression crossed Kuel's face.

"Sometimes you seemed so frantic to get back to the music, we wondered," his grandfather added. "We weren't sure much of the math and physics were being absorbed."

Smiling and bowing his head in shamed respect, he admitted it was difficult. "Gaining maturity in mathematics wasn't easy. Not like the music was. More natural talent there than in math and sciences, I suppose. But your lessons in the rewards of perseverance helped. As soon as thinking mathematically became more intuitive, the physics were easier to grasp, though none of it is ever a cinch."

They laughed with their grandson over this honest declaration. "It never is," his grandfather said. "But damn, the achievement effect is fun to experience, eh?"

Since his grandparents were well versed in quantum mechanics and wave theory themselves, they were not so very shocked by the appearance of an adult Kuel and Kuel₂ telling them these things when they had just seen their preteen Kuel and Kuel₂ performing with the Quark Interplanetary Symphony Orchestra the evening before. They were very surprised at how he had leveraged his knowledge and skills, though, and expressed their extreme pleasure with his astounding accomplishment, first hugging and kissing him with deep delight, then examining his QISS with deep curiosity.

"It's so light," his grandmother observed. "How did you accomplish that?"

"It carries material resources to manufacture, on the fly, mechanisms and fixed parts only needed occasionally, then recycles the materials. Not a lot of heavier moving parts otherwise, most of those needed only to provide haptic feedback."

"Ah, then no magic involved," she said.

"None whatsoever," he replied. "I may be an entertainer in the realm of music, but I'm horrible at method and effect in the art of distraction and sleight-of-hand conjuring, not to mention severely lacking in persona without an instrument in my hands."

They decided to prepare a meal and eat while discussing the details of it all together at their leisure rather than try to absorb it in an intense, impersonal cram session. It wasn't important, after all, for them to retain any of the information, even if it were possible. After dining and talking, he informed them they would not remember anything about their reunion or anything they had talked about upon the moment he departed.
Only impressions of emotion might remain, depending on how deeply felt during their time together. They all understood the temporal implications of such memories and accepted them without protest, simply enjoying the time together while it lasted.

"Any idea when you'll build and mesh the first CKC?" his grandmother asked.

Shaking his head, Kuel admitted he didn't know. "I've never built one before, only used them, though by the thousands. So I have to learn the ins and outs of that. It will depend on how rapidly the trudatsets grow, and I certainly don't want to allow any bias to creep in. That would make them useless."

"Why are you keeping this secret?"

"Because I can't keep it secret forever. Eventually the effects of playing the QISS around the multiverse will be noticed, tracked down and investigated. If not, someone else will build one of their own invention. By that time, I hope to at least have the first CKC up and connected to the mesh, and I can reveal it as it should be, to everyone equally through freemesh access. Besides, as the inventor, I deserve to enjoy first-field-run fun with it."

When it was time for Kuel and Kuel₂ to leave his grandparents and grandparents₂, they thanked him for returning to tell them all about it, happy that their influence in his life had precipitated such an amazingly positive outcome for him. Then the two Kuels took up their QISSs and primed them for departure, performing a brief, purely musical prelude for his grandparents before shifting into quantum duality manipulation strains for travel, concluding the movement with a lyric:

Your perceptions of promise
And faith in talent
Repaid now only with love

With a small atmospheric implosion and a minuscule exchange of dark matter for dark energy consumed upon exit, their talented grandson and grandson₂ were gone from their respective time/space positions in their respective universes, without a shred of memory of their having ever been present remaining. An ineffable sense of warm satisfaction settled in and persisted, though, and they smiled at each other for the gift of it.

~ The True Stuff ~

It is always gratifying when someone does something so many have been saying for so long is impossible. This happened fairly recently (as reported last summer) when 18-year-old Ewin Tang proved classical computing techniques could solve the "recommendation problem" almost as rapidly as a quantum machine learning algorithm proposed by Kerenidis and Prakash could. The promise of quantum computing is intriguing, if not yet a reality, and I hope to see it come into mainstream usage before breathing my final breath. In the meantime, work by upstarts like Mr. Tang is a refreshing breath of fresh air.

~The Made-Up Stuff ~

Coltrane's Circle of Tones

It always stuck in Kuel's craw that he was expected to blithely accept the notion that instantaneous action at a distance is impossible. It wasn't that he disrespected theorists' stand that simple correlation was not to be construed as "spooky actions at a distance"; what he could not–would not–accept was an insurmountable barrier preventing superluminal communication and travel. What fun are universes without the ability to move about within and between them at speeds conducive to practical exploration? EM spectrum analysis of ancient waves and particles through observatories hanging in space just wasn't going to cut it for him.
So like that young upstart of long ago who bucked established thought that only quantum machine learning algorithms could provide ample computing speeds for solving big-data problems, and who then proceeded to prove that classical computing techniques were capable of attaining comparable speeds, Kuel set his mind to the task of finding a way to attain faster-than-light speeds for exploring the multiverse. Having discovered the means to do so (and not by mere accident, but through concerted, brain-busting research and meticulous experimentation), Kuel₁₋ₓ now set out into the multiverse upon a simple, straightforward, 3-fold mission for the remainder of his lives:

1. Exploration of the multiverse
2. Composition of original musical works
3. Construction of Community Knowledge Clouds

He had no ambitions to amass monetary riches or possessions. He did not wish to gain political leverage of any kind. He didn't want to teach anything to anyone, form any sort of religion, or start any kind of cultural movement. He was not compelled to propagate his genetic material through family building (though he and himselves had already securely banked their genomes in case they changed their minds about that some day). He felt no drive to control and manipulate anyone or anything except elements of music. He didn't even really care if anyone ever meshed with the CKCs he and himselves would eventually create and load up with their compositions and massive sets of discovdata across the multiverse. He would leave it entirely up to learners seeking knowledge from the unbiased trudatsets contained within them to decide if they were worthy of perusal and absorption. Pushing his thoughtbiases upon anyone was as revolting to himselves as committing assault, robbery, kidnapping, rape, or murder. His only burning self-centered desires from this point forward were to experience the surprises he suspected were in store for him in his explorations of the multiverse and to create musical compositions that might induce the spine-tingling responses within himselves he always enjoyed so much. These were his frank responses when his grandparents had pointedly asked him what his intentions were, wielding his powerful invention. They expressed no relief upon hearing his answers–knowing him as well as they did. Only joy and satisfaction had coursed through them to know that he had found a lifeniche which might bring about both positive progress and deep meaning for himselves without causing harm. So after leaving his grandparents, Kuel used the QISS's quantprobes to locate a good place to rest for a while, Qported to a cool, quiet, remote mountain valley on a strange terrestrial body harboring no poisonous atmosphere or deadly flora or fauna which might bring injury or death, and began composing a new solo piece while Kuel₂₋ₓ did the same in their respective universes. They played separately in the multiverse, their work beginning pensively while they tinkered a bit to decide upon mood and timbre for their pieces. Subconsciously referencing the Circle of Fifths, influences of Coltrane drifted into the session and a vision of his own Circle of Tones took hold. Then the parts came together, soared, and expanded into quantusymphonic majesty as they improvised, revised, and crystallized their compositions.
Satisfied with the outcome of the session, they found fruit trees standing beside clear, cold streams and enjoyed strange new flavors and long, quenching drinks before tending to bodily functions and settling down to sleep beneath the stars as their QISSs deployed quantshields–adjusting temperature and humidity to optimal levels within–and kept close watch over them, softly playing back their new compositions, primed and ready to autoQport them to safety in a nanosecond, if necessary. Bathed in the sonic blanket of their new musical creations, Kuel₁₋ₓ pondered the next exploratory expedition and soon dozed. Then they began to dream. ~ The True Stuff ~ At age twelve, my sleep time requirements plummeted from more than eight hours per night to six at most, and more frequently to just four hours, and that became the new norm. Why it happened is a mystery, possibly connected to puberty. It was disturbing at first because I had no idea what to do during the extra waking hours while the rest of the family was fast asleep. Reading helped but wasn't enough. Fortunately, I had my own bedroom by then where I could work on quiet hobbies until sunrise, when it was time to prepare for school on weekdays or dash outdoors to play on weekends and holidays. Practicing cello or guitar during the wee hours was out of the question. Homework was never so taxing it couldn't be completed long before dinner time (usually finished during the bus ride home). Watching the only TV we owned, in the living room, at such ungodly hours was not permitted. So a four-inch reflector telescope pointed out a north-facing window, an electronics project kit, a chemistry set, a junior weather forecaster station, writing for fun, model building, drawing, preparing slides and examining specimens of plants, animals, and minerals collected during daytime outdoor explorations through a microscope, and extended reflection on previous experiences in life nicely filled the spare nighttime hours. Sometimes our dog Tanji would come upstairs to keep quiet company with me, but no one else in the family ever did–all sleeping soundly throughout the nights, which was just fine for my purposes. Enjoying freedom from social interaction expectations and tensions (good, bad, and everywhere in between) during these hours may have been the beginning of a tendency to desire and seek solitude. My mother tells me I take after a branch of her father's side of the family, comfortable with the company of none but themselves. This may be true. Now, more than half a year into retirement, free of duties working for income and with the majority of thought cycles no longer focused on complex problem solving for others, twenty waking hours per day are spent entirely on personal projects. A recent acquaintance has encouraged workshopping to develop this enjoyable writing hobby, something I may try via online workshop channels, being reluctant to expose myself in person to the various strains of influenza running rampant at this time of year (having already contracted and survived flu four times over the last six years), not to mention to ever-increasing numbers of other contagions, from E. coli, hepatitis (A, B, C), and rotavirus to tuberculosis, all easily picked up while mingling with people downslope. And as global warming encourages the proliferation of contagious diseases, I'm prone to limit exposure even to those vectored from local higher-altitude sources like hantavirus, plague, and pneumonia. It's not that I'm afraid of death, just not eager to experience it anytime soon.
A long, prosperous life has always been an attractive prospect. And it's not like physically living apart from others is pure solitude in this still-budding but rocking-and-rolling information age. A nice, fast, two-way connection to the internet keeps me well connected to family and friends for rich communication and for passively sharing meager creative works, as well as providing around-the-clock access to a plethora of information sources from which I can select the best of the best, safe and sound here where quiet hours are so plentiful apart from the churning masses. The recent acquaintance freely expended a lot of valuable time and effort to share vast knowledge and solid advice, though. A remarkably generous thing to do considering his busy lifestyle, still fully engaged in close proximity with others. So I won't ignore it. ~ The Made-Up Stuff ~ A time evolution formulation of the cognitive process. Kuel slept briefly, soundly, and dreamt deeply, stirring only once to relieve point pressure stresses on hip and shoulder without surfacing to full consciousness. Binocular rivalries spurred his dreams. Alternating pairs of distinctly separate visual imagery, projected virtually by the subconscious mind onto crisscrossed retinal perception centers of his brain, competed for dominance, efficiently kindling conceptual thoughts into indeterminately retrievable patterns for future use during conscious-mind thought cycles. Upon waking, he sat up and looked at the creatures gathered around him, crowded up close, within centimeters of the shield perimeter. He knew they weren't dangerous. The QISS would have alerted him to their approach long before they were so close if it had perceived any threat. "Hello," he said. The QISS's AI translated the word into a quantumusical phrase of friendly greeting. This elicited an excited response from the creatures. Many stood, hopped about, and chattered in bright, musical tones and percussive vibrations. "They are excited a stranger so full of new music has come to their world," the QISS translated without delay. "How long have you been here?" he asked. The QISS translated with a lightly modulated sonic sweep and a short, pleasant clang. The largest of the creatures, a headless, hulking, hexapodish creature still squatting on rear haunches, with two rows of blinking eyes on each of its front shoulders, answered with low, resonant whistles and moans through an orifice, or orifices, Kuel could not see. The QISS translated: "Not long. Heard you playing. Came to listen. Please play more." So Kuel picked up the QISS and did so for his first non-human, highly sentient audience. That first performance brought rewards both for himselves and for the creatures of the world he had stopped upon for sustenance and rest. And as he played pieces refined and well practiced for the small group of inhabitants of the world, they listened intently, as if there were answers in the music for them. Answers they had been seeking within themselves and their own musical languages but had never been able to find. The first performance also brought about increasingly clear and agile communication between Kuel and his audience as they intoned appreciation between performance pieces. The QISS AI absorbed and processed the intonements to build upon its knowledge of their languages and taught them to Kuel through tonal feedback guidance.
Soon, the AI was able to relinquish all communication actions to Kuel as he became adept at understanding the musical languages of the creatures and played his own musical responses conversationally. "Where do you come from?" they asked. Analyzing quantunavigational waypoints, the QISS conveyed to Kuel the general region of the night sky to point to. "From a planet near the edge of our galaxy, just near that cluster of three stars beside empty space...there," he told them. "We call our home planet Earth. What do you call yours?" Their answer was delivered entirely musically, and the name they spoke was a beautiful contrapuntal weave of encompassing multimeaning. The QISS recorded it all, and as their conversations progressed over the months he stayed there, Kuel began to realize this would be the first place he would construct a CKC for eventual connection to the mesh, as their questions were many more than he could answer in a lifetime. He also realized he would construct only the shell of the CKC, populated only with basic trudatsets related to Earth and his relationship to his home world as evidence of his brief presence here, leaving it to the creatures of this world to populate it with trudatsets of their own. So he set about showing them how they could do this through his musical performances, which the creatures seemed always to enjoy, while spending his time apart from them designing the CKC to be built and activated for trudatpopulation by the creatures after he departed. It required more time and effort than expected, but Kuel persevered, with the help of the QISS AI at first, then with the help of the creatures to prepare interfaces for them to feed their trudatsets into it. "This CKC is now yours," he told them when it was ready for them to fill. "Feed it all you know and all you wish to know. It will eventually be connected to a multiverse mesh to allow sharing it with others across the cosmos and to allow you to mesh with a multitude of other CKCs already in existence and more still to be constructed." Thanking him for the gift, they lavished Kuel with praise first, then attempted to give him gifts in appreciation. He kindly refused the gifts: "I cannot carry them with me when I leave." Hearing this, a dissonant cacophony rose up from them in protest. "You cannot leave us! We need you to help us! We need your music to mix with ours to survive!" More shocked by their reaction than flattered, Kuel tried to convince them they did not. "Your own music is all you need. Mine is stored in the CKC for you to reference when desired, but now your own music is all that you need to proceed on vectors of positive progress, just as you were doing before I came here!" But the creatures were not convinced and moved aggressively to restrain him. To capture and cage him. The QISS AI signaled its intent to Kuel in a short, sharp tone and emergency-Qported them to the safety of an uninhabited world it had located and cataloged for just such an eventuality. Kuel sat down hard on the ground and shuddered, trying to understand what had just happened. He had given his music and the gift of a CKC freely. He had not created any mystery for the creatures to solve in order to use it. Why had they so fervently insisted they were incapable of continuing without him? He had not been amongst them so long that their eons of history were warped or disrupted by the effect of his presence.
He knew his music was pretty good but nothing in comparison to theirs, music produced through their own physiology rather than through an invented contraption. A few other Kuels joined him within the confines of the shield his QISS maintained about him to discuss the unexpected turn of events. "We must have given too much of ourselves to them," Kuel₉ suggested. The others agreed this was possible. "Too much inspiration can be as limiting as it may be liberating. They seem to have lost confidence in themselves, in their own abilities," Kuel₁₅ offered. "Hopefully only temporarily." "But we left them CKCs₁₋ₓ across the multiverse, and the knowledge to use them," Kuel₃₇ replied. "They have not been irrecoverably damaged, and as soon as they realize this they will begin filling the CKCs with their own trudatsets. And when ready, they will allow them to connect to the mesh." "Possibly," Kuel₁ said. "Or they may ignore them altogether now that we're no longer on their worlds for command performances." All Kuels nodded agreement to this, understanding there could be no other way for it, knowing time is precious and that none of them were immortal or invincible. "Their choice," they said in unison. "As it should be for all sentient beings we encounter." ~ The True Stuff ~ An interest in the construction of stringed musical instruments developed at age fourteen thanks to an excellent West Texas wood shop class instructor, Mr. Butcher. His knowledge of woods and their grain properties, and the extended discussions we had about Stradivarius violins, sparked an interest in materials and their harmonic energy properties at microscopic and atomic levels. He also helped me learn to be flexible in my thinking and planning when an overly ambitious project for the year had to be scaled back to something doable within the limits of class time and my elementary woodworking skills. Instead of building an acoustic mandolin, which entailed many more hours of work shaping the front and back of the instrument to thicknesses and 3-D shapes conducive to good acoustic resonance, I ended up building a modified Rickenbacker-style electric guitar out of beautiful slabs of black walnut. Still a big project, it was one I was able to complete in time, finishing it out with a hand-rubbed gunstock oil that brought out the complexity of the walnut grain. This project selection switch happened after spending days producing detailed mechanical drawings for the mandolin, then having to redo them all for the guitar design. I would have panicked if Mr. Butcher hadn't shown me how to shift gears quickly and efficiently at the drawing table by stepping back to assess what I had already done and how I might utilize as much of it as possible for the modified project. Applying minor scaling modifications to the existing mandolin drawings, transforming them into a design for a guitar shaped just like the symmetrical mandolin I had come up with, required only one day of work. Always too optimistic in the planning phases of my projects, I have found that lesson has served me well in absolutely every design project I've worked on since, from musical compositions to small and large-scale metal fabrications as well as software engineering. ~ The Made-Up Stuff ~ Hamiltonian for a quantum mechanical model of a lattice that allows phonons to arise:

\[ \mathcal{H} = \sum_i \frac{p_i^2}{2m} + \frac{1}{2} m\omega^2 \sum_{\{ij\}} (x_i - x_j)^2 \]

Hamiltonian in wavevector space:

\[ \mathcal{H} = \sum_k \hbar\omega_k \left( a_k^\dagger a_k + \tfrac{1}{2} \right) \]

Hamiltonian in three dimensions:

\[ \mathcal{H} = \sum_{\mathbf{k},s} \hbar\omega_{\mathbf{k},s} \left( a_{\mathbf{k},s}^\dagger a_{\mathbf{k},s} + \tfrac{1}{2} \right) \]

System for propagation of quasineutral ion-acoustic waves. Ion continuity equation:

\[ \frac{\partial n_i}{\partial t} + \nabla \cdot (n_i \mathbf{u}_i) = 0 \]

Ion momentum conservation equation:

\[ m_i n_i \left( \frac{\partial}{\partial t} + \mathbf{u}_i \cdot \nabla \right) \mathbf{u}_i = -\nabla p_i - n_i Z e \nabla\phi \]
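An aside for the curious reader: diagonalizing the nearest-neighbor lattice Hamiltonian above yields the standard one-dimensional phonon dispersion relation \( \omega(k) = \sqrt{4K/m}\,\lvert\sin(ka/2)\rvert \). Here is a minimal numerical sketch of that textbook result; the coupling constant, mass, and lattice spacing are purely illustrative values, not anything from the story:

```python
import numpy as np

# 1-D monatomic chain with nearest-neighbor coupling K, mass m, spacing a.
# Textbook dispersion: omega(k) = sqrt(4K/m) * |sin(k*a/2)|
K, m, a = 1.0, 1.0, 1.0  # illustrative constants

k = np.linspace(-np.pi / a, np.pi / a, 201)          # first Brillouin zone
omega = np.sqrt(4.0 * K / m) * np.abs(np.sin(k * a / 2.0))

# Long-wavelength (acoustic) limit: omega ~ c|k| with sound speed c = a*sqrt(K/m)
c = a * np.sqrt(K / m)
print(f"max phonon frequency: {omega.max():.3f}, sound speed: {c:.3f}")
```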
Kuel lingered on the uninhabited planet for a few days, making specific modifications to phonon excitation distribution patterns in the resonant quantum ion-acoustic lattice properties of the QISS. After the rattling experience with the creatures and their lack of confidence in their ability to carry on independently after his departure, he wanted to engineer the capability to convey and instill a mounting, persisting sense of self-confidence within the audiences he communicated with through his music, rather than merely inspire them. Not quantusonic brainwashing, just reinforcement of innate abilities through the power of quasiparticle manipulations via propagation of quasineutral ion-acoustic waves. It wasn't an easy modification to make. By the time he finished and QAed the mods to satisfaction, his food and water supplies were completely exhausted, and so was he. The world he had been emergency-Qported to by the QISS AI did not have any natural sources of food and water he could safely utilize, so he returned to his home on Earth to rest and recharge and to work on adding a matter synthesis module for conversion of energy into essential nutrients, clothing, and basic personal hygiene items he could use while traveling. The QISS chimed upon completion of matter synthesis mods integration and verification. Then it chimed again, indicating some of himselves wanted to meet to discuss the next excursion. After a quick shower, Kuel Qported to the designated meeting location on the far side of Milky Way₅₅, on a water world with a single island no bigger than a contemporary small farming community of one hundred thousand acres. There were four thousand seventy-three Kuels already there setting up camp when Kuel₁ blupped in on the pale blue-sanded beach with QISS shield already raised. One of himselves parted from the throng, approached, and motioned for him to walk along with him, their shields merging into one. "Well, that first outing was a trip, eh?" "More than expected, not exactly a pleasant surprise, but an interesting one." "This backbrained notion to explore a water world teeming with pre-sentience is interesting too. It will be a nice break from performing for demanding audiences." Looking inland, he saw blue-green trees standing thick atop lush, vine-ridden turquoise undergrowth. Four snow-capped mountain peaks served as a backdrop. A white sun about to set was met by a blue one rising. Turning and walking backwards, he watched an enormous golden sickle moon consume a broad arc of sky. The large multi-colored disc of a not-so-distant planet glowed orange around the edges. "Is it a gas giant?" "Yes. About ten times the size of Jupiter. A large ice moon on the far side is about one tenth its diameter. A spectacular sight when it swings around into view. Its orbit keeps it just ahead of the planet's terminator by about three diameters the entire time it's visible. Tidal forces keep geysers spewing constantly, leaving a bright, long, curved tail. We're calling it Racer." "Whoa! I'm looking forward to seeing that." And this was what Kuel loved about working with himselves. Even though they were one in many, blazing teleportation trails for all, they weren't telepathic. None knew anything of what the others had observed in their explorations of the multiverse. These explorshares were the essence of their mutual joy. Surprises around every corner. He wondered if their mission would ever become boring, seriously doubting it was possible for that to happen. At least not in their lifetimes.
Turning back around to resume walking in the direction he was facing, the Kuels talked on about the life forms spotted so far, on the island and beyond, and the means of exploring the world. "We're Qporting in a larger matter synthesis device the size of a small factory to build subs and flitters–mostly single-seaters with QISS pairing for AI guidance and piloting. Each has dual backup quantcomps with lesser AI for safety too. All will travel fully shielded at all times to prevent cross contamination." "Many new phase-flips observed needing custom quantum error-correcting codes here?" "None that are observable with our gear. Existing shields and correctors seem to be more than adequate, but one never knows when a bit-flip will pop up, eh?" "Nope. They do make for a surprising performance, though. And fun for musicians with chops enough to adjust and keep playing on smoothly when one is encountered, without being rattled." For this project, all QISS musicians knew they would not be performing their music for any actively listening "audience". Instead, their music would serve as sensor signals, and their performances as sensory propagation and reception operations. All of the performances would be recorded on each QISS for ultimate upstorage in this world's CKC, for future reference against discovdat gathered and stored, providing essential sensory calibration for fatdatflights by anyone wishing to browse the CKC content via ARRig interface. Without those calibrations, bigdatpilots would just be flying blind through content. Fun to experience, but ultimately meaningless. It would require extensive application of Kalman filters, divided into two distinct parts: 1) a prediction part for the time periods between sensor outputs and 2) an update part for incorporating measurements. Time to do some applied control theory math. Kalman filtering, or linear quadratic estimation (LQE for short), was the ticket for adapting QISSs for Qsensorspray work, by utilizing a dynamic series of measurements observed over time. The sensed signals would inevitably contain statistical noise and other inaccuracies introduced by the environment. To produce solid estimates of unknown variables, more accurate than any based on single measurements alone, estimation of a joint probability distribution over the variables for each Qsensorspray timeframe was required. Kuel was glad he had included a coffee synthesis module in the QISS upgrades. This was going to be a chore. ~ The True Stuff ~ Filtering out noise in audio recordings is a tricky operation. Not as difficult in these days of digital recording as it used to be when magnetic tape made one wonder "is it live or is it"...well, never mind about that old bit of hammered-in commercialism. The point is, noise is everywhere and demands careful filtering to obtain a quality end product. ~ The Made-Up Stuff ~ Kalman filter predict phase. Predicted state estimate:

\[ \hat{x}_{k\mid k-1} = F_k \hat{x}_{k-1\mid k-1} + B_k u_k \]

Predicted error covariance:

\[ P_{k\mid k-1} = F_k P_{k-1\mid k-1} F_k^{\mathsf{T}} + Q_k \]

Kalman filter update phase. Innovation (measurement pre-fit residual):

\[ \tilde{y}_k = z_k - H_k \hat{x}_{k\mid k-1} \]

Innovation (pre-fit residual) covariance:

\[ S_k = H_k P_{k\mid k-1} H_k^{\mathsf{T}} + R_k \]

Optimal Kalman gain:

\[ K_k = P_{k\mid k-1} H_k^{\mathsf{T}} S_k^{-1} \]

Updated state estimate:

\[ \hat{x}_{k\mid k} = \hat{x}_{k\mid k-1} + K_k \tilde{y}_k \]

Updated estimate covariance:

\[ P_{k\mid k} = (I - K_k H_k) P_{k\mid k-1} \]

Measurement post-fit residual:

\[ \tilde{y}_{k\mid k} = z_k - H_k \hat{x}_{k\mid k} \]

(An executable sketch of this predict/update cycle appears at the end of this section.) Since first reading about control theory on the vast MITCKC at age ten, Kuel had been fascinated by that subfield of mathematics. After all, a good musical performance is as meticulously engineered a process as any continuously operating dynamical system can be.
Spending thousands of kidhours flying through the fatdatsets and working out hundreds of self-directed projlabs, Kuel developed a keen, almost intuitive sense for developing optimal models for controlling systems virtually free of delay and overshoot. MITCKCmentors noticed his control-model-building efforts early in his studies of the subject and laid out a few breadcrumbs for him to follow into areas he might find interesting, and he did–progressively and consistently producing complex, useful control systems of matchless stability. Thus, Kuel unwittingly established a second professional niche in the lucrative field of Control Systems Engineering (CSE), from which a second income stream began feeding his digicurr account with torrents of Qbitcoin. He barely noticed it, though, always consumed by the learning experience journey rather than any monetary rewards. By the time he decided to apply his control theory skills to building the first QISS prototype, he finally took a look at his account balance and saw that he had enough to fund the project independently, allowing him to keep it totally secret. Kuel₁ surveyed the buzz of activity of himselves along the beachfront they had established. Their mission team had swelled to fifteen thousand Kuels, all wrapping up final preparations to begin Qsensorsweeps of this sentient-free world. A small group of eight from the control engineering team approached, and they began discussing a sticky issue they were facing with Qsensorspray calibrations to compensate for planetary magnetics.
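As promised above, here is a minimal executable sketch of the Kalman predict/update cycle: a one-dimensional constant-velocity tracker in Python. The model matrices and noise values are illustrative assumptions only, not anything drawn from the story's QISS:

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter (state = [position, velocity]).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state-transition model
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process-noise covariance (assumed)
R = np.array([[0.5]])                   # measurement-noise covariance (assumed)

x = np.array([[0.0], [1.0]])            # initial state estimate
P = np.eye(2)                           # initial estimate covariance

rng = np.random.default_rng(0)
for step in range(1, 11):
    # Predict phase
    x = F @ x                           # predicted state estimate
    P = F @ P @ F.T + Q                 # predicted error covariance

    # Simulated noisy position measurement of a unit-velocity target
    z = np.array([[step * dt + rng.normal(0.0, np.sqrt(R[0, 0]))]])

    # Update phase
    y = z - H @ x                       # innovation (pre-fit residual)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # optimal Kalman gain
    x = x + K @ y                       # updated state estimate
    P = (np.eye(2) - K @ H) @ P         # updated estimate covariance

print("final position/velocity estimate:", x.ravel())
```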
Quantum Approaches to Consciousness First published Tue Nov 30, 2004; substantive revision Thu Apr 16, 2020 It is widely accepted that consciousness or, more generally, mental activity is in some way correlated to the behavior of the material brain. Since quantum theory is the most fundamental theory of matter that is currently available, it is a legitimate question to ask whether quantum theory can help us to understand consciousness. Several approaches answering this question affirmatively, proposed in recent decades, will be surveyed. There are three basic types of corresponding approaches: (1) consciousness is a manifestation of quantum processes in the brain, (2) quantum concepts are used to understand consciousness without referring to brain activity, and (3) matter and consciousness are regarded as dual aspects of one underlying reality. Major contemporary variants of these quantum-inspired approaches will be discussed. It will be pointed out that they make different epistemological assumptions and use quantum theory in different ways. For each of the approaches discussed, both problematic and promising features will be highlighted. 1. Introduction The problem of how mind and matter are related to each other has many facets, and it can be approached from many different starting points. The historically leading disciplines in this respect are philosophy and psychology, which were later joined by behavioral science, cognitive science, and neuroscience. In addition, the physics of complex systems and quantum physics have played stimulating roles in the discussion from their beginnings. As regards the issue of complexity, this is evident: the brain is one of the most complex systems we know. The study of neural networks, their relation to the operation of single neurons, and other important topics do and will profit a lot from complex systems approaches. As regards quantum physics, there can be no reasonable doubt that quantum events occur and are efficacious in the brain as elsewhere in the material world—including biological systems.[1] But it is controversial whether these events are efficacious and relevant for those aspects of brain activity that are correlated with mental activity. The original motivation in the early 20th century for relating quantum theory to consciousness was essentially philosophical. It is fairly plausible that conscious free decisions (“free will”) are problematic in a perfectly deterministic world,[2] so quantum randomness might indeed open up novel possibilities for free will. (On the other hand, randomness is problematic for goal-directed volition!) Quantum theory introduced an element of randomness standing out against the deterministic worldview preceding it, in which randomness, where it appeared at all, expressed our ignorance of a more detailed description (as in statistical mechanics). In sharp contrast to such epistemic randomness, quantum randomness in processes such as the spontaneous emission of light or radioactive decay has been considered a fundamental feature of nature, independent of our ignorance or knowledge. To be precise, this feature refers to individual quantum events, whereas the behavior of ensembles of such events is statistically determined: the indeterminism of individual quantum events is constrained by statistical laws. Other features of quantum theory, which became attractive in discussing issues of consciousness, were the concepts of complementarity and entanglement.
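The distinction drawn here (individual events indeterministic, ensembles lawful) can be made vivid with a toy simulation of radioactive decay. In this sketch, which uses illustrative parameters only, each nucleus decays at a random, individually unpredictable time, yet the surviving population tracks the statistical law \( N(t) = N_0 e^{-\lambda t} \):

```python
import numpy as np

rng = np.random.default_rng(42)
N0, lam = 100_000, 0.3                  # initial nuclei, decay constant (assumed)

# One irreducibly random decay time per nucleus (exponentially distributed).
decay_times = rng.exponential(1.0 / lam, size=N0)

for t in (0.0, 2.0, 5.0, 10.0):
    surviving = np.sum(decay_times > t)            # empirical count
    expected = N0 * np.exp(-lam * t)               # statistical law
    print(f"t={t:5.1f}  surviving={surviving:6d}  expected={expected:9.1f}")
```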
Pioneers of quantum physics such as Planck, Bohr, Schrödinger, and Pauli (among others) emphasized the various possible roles of quantum theory in reconsidering the old conflict between physical determinism and conscious free will. For informative overviews with different focal points see e.g., Squires (1990), Kane (1996), Butterfield (1998), Suarez and Adams (2013). 2. Philosophical Background Assumptions Variants of the dichotomy between mind and matter range from their fundamental distinction at a primordial level of description to the emergence of mind (consciousness) from the brain as an extremely sophisticated and highly developed material system. Informative overviews can be found in Popper and Eccles (1977), Chalmers (1996), and Pauen (2001). One important aspect of all discussions about the relation between mind and matter is the distinction between descriptive and explanatory approaches. For instance, correlation is a descriptive term with empirical relevance, while causation is an explanatory term associated with theoretical attempts to understand correlations. Causation implies correlations between cause and effect, but this does not always apply the other way around: correlations between two systems can result from a common cause in their history rather than from a direct causal interaction. In the fundamental sciences, one typically speaks of causal relations in terms of interactions. In physics, for instance, there are four fundamental kinds of interactions (electromagnetic, weak, strong, gravitational) which serve to explain the correlations that are observed in physical systems. As regards the mind-matter problem, the situation is more difficult. Far from a theoretical understanding in this field, the existing body of knowledge essentially consists of empirical correlations between material and mental states. These correlations are descriptive rather than explanatory; whether and how they are causally conditioned remains an open question. It is (for some purposes) interesting to know that particular brain areas are activated during particular mental activities; but this does not, of course, explain why they are. Thus, it would be premature to talk about mind-matter interactions in the sense of causal relations. For the sake of terminological clarity, the neutral notion of relations between mind and matter will be used in this article. In many discussions of material [ma] brain states and mental [me] states of consciousness, the relations between them are conceived in a direct way (A): \[ [\mathbf{ma}] \substack{\leftarrow \\ \rightarrow} [\mathbf{me}] \] This illustrates a minimal framework to study reduction, supervenience, or emergence relations (Kim 1998; Stephan 1999) which can yield both monistic and dualistic pictures. For instance, there is the influential stance of strong reduction, stating that all mental states and properties can be reduced to the material domain or even to physics (physicalism).[3] This point of view claims that it is both necessary and sufficient to explore and understand the material domain, e.g., the brain, in order to understand the mental domain, e.g., consciousness. It leads to a monistic picture, in which mental states are either eliminated outright or at most considered epiphenomenal. While mind-brain correlations are still legitimate though causally irrelevant from an epiphenomenalist point of view, eliminative materialism renders even correlations irrelevant.
Much-discussed counterarguments against the validity of such strong reductionist approaches are qualia arguments, which emphasize the impossibility for physicalist accounts to properly incorporate the quality of the subjective experience of a mental state, the “what it is like to be” (Nagel 1974) in that state. This leads to an explanatory gap between third-person and first-person accounts for which Chalmers (1995) has coined the notion of the “hard problem of consciousness”. Another, less discussed counterargument is that the physical domain itself is not causally closed. Any solution of fundamental equations of motion (be it experimental, numerical, or analytical) requires fixing boundary conditions and initial conditions which are not given by the fundamental laws of nature (Primas 2002). This causal gap applies to classical physics as well as quantum physics, where the basic indeterminacy of state collapse makes it even more pronounced. A third class of counterarguments refers to the difficulty of including notions of the temporal present and nowness in a physical description (Franck 2004, 2008; Primas 2017). However, relations between mental and material states can also be conceived in a non-reductive fashion, e.g., in terms of emergence relations (Stephan 1999). Mental states and/or properties can be considered as emergent if the material brain is not necessary or not sufficient to explore and understand them.[4] This leads to a dualistic picture (less radical and more plausible than Cartesian dualism) in which residua remain if one attempts to reduce the mental to the material. Within a dualistic scheme of thinking, it becomes almost inevitable to discuss the question of causal influence between mental and material states. In particular, the causal efficacy of mental states upon brain states (“downward causation”) has recently attracted growing interest (Velmans 2002; Ellis et al. 2011).[5] The most popular approaches along those lines as far as quantum behavior of the brain is concerned will be discussed in Section 3, “Quantum Brain”. It was an old idea of Bohr's that central conceptual features of quantum theory, such as complementarity, are also of pivotal significance outside the domain of physics. In fact, Bohr became familiar with complementarity through the psychologist Edgar Rubin and, more indirectly, William James (Holton 1970) and immediately saw its potential for quantum physics. Although Bohr was also convinced of the extraphysical relevance of complementarity, he never elaborated this idea in concrete detail, and for a long time after him no one else did so either. This situation has changed: there are now a number of research programs generalizing key notions of quantum theory in a way that makes them applicable beyond physics. Of particular interest for consciousness studies are approaches that have been developed in order to pick up Bohr’s proposal with respect to psychology and cognitive science. The first steps in this direction were made by the group of Aerts in the early 1990s (Aerts et al. 1993), using non-distributive propositional lattices to address quantum-like behavior in non-classical systems. Alternative approaches have been initiated by Khrennikov (1999), focusing on non-classical probabilities, and Atmanspacher et al. (2002), outlining an algebraic framework with non-commuting operations. The recent development of ideas within this framework of thinking is addressed in Section 4, “Quantum Mind”.
Other lines of thinking are due to Primas (2007, 2017), addressing complementarity with partial Boolean algebras, and Filk and von Müller (2008), indicating links between basic conceptual categories in quantum physics and psychology. As an alternative to (A), it is possible to conceive mind-matter relations indirectly (B), via a third category: \[\begin{gather} [\mathbf{ma}] \quad [\mathbf{me}] \\ \searrow\nwarrow \swarrow\nearrow \\ [\mathbf{mame}] \end{gather}\] This third category, here denoted [mame], is often regarded as being neutral with respect to the distinction between [ma] and [me], i.e., psychophysically neutral. In scenario (B), issues of reduction and emergence concern the relation between the unseparated “background reality” [mame] and the distinguished aspects [ma] and [me]. Such “dual aspect” frameworks of thinking have received increasing attention in contemporary discussion, and they have a long tradition reaching back as far as Spinoza. In the early days of psychophysics, Fechner (1861) and Wundt (1911) advocated related views. Whitehead, the modern pioneer of process philosophy, referred to mental and physical poles of “actual occasions”, which themselves transcend their bipolar appearances (Whitehead 1978). Many approaches in the tradition of Feigl (1967) and Smart (1963), called “identity theories”, conceive mental and material states as essentially identical “central states”, yet considered from different perspectives. Other variants of this idea have been suggested by Jung and Pauli (1955) [see also Meier (2001)], involving Jung’s conception of a psychophysically neutral, archetypal order, or by Bohm and Hiley (Bohm 1990; Bohm and Hiley 1993; Hiley 2001), referring to an implicate order which unfolds into the different explicate domains of the mental and the material. They will be discussed in more detail in Section 5, “Brain and Mind as Dual Aspects”. Velmans (2002, 2009) has developed a similar approach, backed up with empirical material from psychology, and Strawson (2003) has proposed a “real materialism” which uses a closely related scheme. Another proponent of dual-aspect thinking is Chalmers (1996), who considers the possibility that the underlying, psychophysically neutral level of description could be best characterized in terms of information. Before proceeding further, it should be emphasized that many present-day approaches prefer to distinguish between first-person and third-person perspectives rather than mental and material states. This terminology serves to highlight the discrepancy between immediate conscious experiences (“qualia”) and their description, be it behavioral, neural, or biophysical. The notion of the “hard problem” of consciousness research refers to bridging the gap between first-person experience and third-person accounts of it. In the present contribution, mental conscious states are implicitly assumed to be related to first-person experience. This does not mean, however, that the problem of how to define consciousness precisely is considered as resolved. Ultimately, it will be (at least) as difficult to define a mental state in rigorous terms as it is to define a material state. 3. Quantum Brain In this section, some popular approaches for applying quantum theory to brain states will be surveyed and compared, most of them speculative, with varying degrees of elaboration and viability. Section 3.1 addresses three different neurophysiological levels of description, to which particular quantum approaches refer.
Subsequently, the individual approaches themselves will be discussed — Section 3.2: Stapp, Section 3.3: Vitiello and Freeman, Section 3.4: Beck and Eccles, Section 3.5: Penrose and Hameroff. In the following, (some of) the better known and partly worked out approaches that use concepts of quantum theory for inquiries into the nature of consciousness will be presented and discussed. For this purpose, the philosophical distinctions A/B (Section 2) and the neurophysiological distinctions addressed in Section 3.1 will serve as guidelines to classify the respective quantum approaches in a systematic way. However, some preliminary qualifications concerning different ways to use quantum theory are in order. There are quite a number of accounts discussing quantum theory in relation to consciousness that adopt basic ideas of quantum theory in a purely metaphorical manner. Quantum theoretical terms such as entanglement, superposition, collapse, complementarity, and others are used without specific reference to how they are defined precisely and how they are applicable to specific situations. For instance, conscious acts are just postulated to be interpretable somehow analogously to physical acts of measurement, or correlations in psychological systems are just postulated to be interpretable somehow analogously to physical entanglement. Such accounts may provide fascinating science fiction, and they may even be important to inspire nuclei of ideas to be worked out in detail. But unless such detailed work leads beyond vague metaphors and analogies, they do not yet represent scientific progress. Approaches falling into this category will not be discussed in this contribution. A second category includes approaches that use the status quo of present-day quantum theory to describe neurophysiological and/or neuropsychological processes. Among these approaches, the one with the longest history was initiated by von Neumann in the 1930s, later taken up by Wigner, and currently championed by Stapp. It can be roughly characterized as the proposal to consider intentional conscious acts as intrinsically correlated with physical state reductions. Another fairly early idea dating back to Ricciardi and Umezawa in the 1960s is to treat mental states, particularly memory states, in terms of vacuum states of quantum fields. A prominent proponent of this approach at present is Vitiello. Finally, there is the idea suggested by Beck and Eccles in the 1990s, according to which quantum mechanical processes, relevant for the description of exocytosis at the synaptic cleft, can be influenced by mental intentions. The third category refers to further developments or generalizations of present-day quantum theory. An obvious candidate in this respect is the proposal by Penrose to relate elementary conscious acts to gravitation-induced reductions of quantum states. Ultimately, this requires the framework of a future theory of quantum gravity which is far from having been developed. Together with Penrose, Hameroff has argued that microtubuli might be the right place to look for such state reductions. 3.1 Neurophysiological Levels of Description A mental system can be in many different conscious, intentional, phenomenal mental states. In a hypothetical state space, a sequence of such states forms a trajectory representing what is often called the stream of consciousness. 
Since different subsets of the state space are typically associated with different stability properties, a mental state can be assumed to be more or less stable, depending on its position in the state space. Stable states are distinguished by a residence time at that position longer than that of metastable or unstable states. If a mental state is stable with respect to perturbations, it “activates” a mental representation encoding a content that is consciously perceived. Neural Assemblies Moving from this purely psychological, or cognitive, description to its neurophysiological counterpart leads us to the question: What is the neural correlate of a mental representation? According to standard accounts (cf. Noë and Thompson (2004) for discussion), mental representations are correlated with the activity of neuronal assemblies, i.e., ensembles of several thousands of coupled neurons. The neural correlate of a mental representation can be characterized by the fact that the connectivities, or couplings, among those neurons form an assembly confined with respect to its environment, to which connectivities are weaker than within the assembly. The neural correlate of a mental representation is activated if the neurons forming the assembly operate more actively, e.g., produce higher firing rates, than in their default mode. Figure 1. Balance between inhibitory and excitatory connections among neurons. In order to achieve a stable operation of an activated neuronal assembly, there must be a subtle balance between inhibitory and excitatory connections among neurons (cf. Figure 1). If the transfer function of individual neurons is strictly monotonic, i.e., increasing input leads to increasing output, assemblies are difficult to stabilize. For this reason, results establishing a non-monotonic transfer function with a maximal output at intermediate input are of high significance for the modeling of neuronal assemblies (Kuhn et al. 2004). For instance, network models using lattices of coupled maps with quadratic maximum (Kaneko and Tsuda 2000) are paradigmatic examples of such behavior. These and other familiar models of neuronal assemblies (for an overview see Anderson and Rosenfeld 1988) are mostly formulated in a way not invoking well-defined elements of quantum theory. An explicit exception is the approach by Umezawa, Vitiello and others (see Section 3.3). Single Neurons and Synapses The fact that neuronal assemblies are mostly described in terms of classical behavior does not rule out that classically undescribable quantum effects may be significant if one focuses on individual constituents of assemblies, i.e., single neurons or interfaces between them. These interfaces, through which the signals between neurons propagate, are called synapses. There are electrical and chemical synapses, depending on whether they transmit a signal electrically or chemically. At electrical synapses, the current generated by the action potential at the presynaptic neuron flows directly into the postsynaptic cell, which is physically connected to the presynaptic terminal by a so-called gap junction. At chemical synapses, there is a cleft between pre- and postsynaptic cell. In order to propagate a signal, a chemical transmitter (e.g., glutamate) is released at the presynaptic terminal. This release process is called exocytosis. The transmitter diffuses across the synaptic cleft and binds to receptors at the postsynaptic membrane, thus opening an ion channel (Kandel et al. 2000, part III; see Fig. 2).
Chemical transmission is slower than electrical transmission. Figure 2. Release of neurotransmitters at the synaptic cleft (exocytosis). A model developed by Beck and Eccles applies concrete quantum mechanical features to describe details of the process of exocytosis. Their model proposes that quantum processes are relevant for exocytosis and, moreover, are tightly related to states of consciousness. This will be discussed in more detail in Section 3.4. At this point, another approach developed by Flohr (2000) should be mentioned, for which chemical synapses with a specific type of receptors, so-called NMDA receptors,[6] are of paramount significance. Briefly, Flohr observes that the specific plasticity of NMDA receptors is a necessary condition for the formation of extended stable neuronal assemblies correlated to (higher-order) mental representations which he identifies with conscious states. Moreover, he indicates a number of mechanisms caused by anaesthetic agents, which block NMDA receptors and consequently lead to a loss of consciousness. Flohr’s approach is physicalistic and reductive, and it is entirely independent of any specific quantum ideas. The lowest neurophysiological level at which quantum processes have been proposed as a correlate to consciousness is the level at which the interior of single neurons is considered: their cytoskeleton. It consists of protein networks essentially made up of two kinds of structures, neurofilaments and microtubuli (Fig. 3, left), which are essential for various transport processes within neurons (as well as other cells). Microtubuli are long polymers usually constructed of 13 longitudinal protofilaments of alternating α- and β-tubulin dimers, arranged in a tubular array with an outside diameter of about 25 nm (Fig. 3, right). For more details see Kandel et al. (2000), Chap. II.4. Figure 3. (left) Microtubuli and neurofilaments; the width of the figure corresponds to approximately 700 nm. (right) Tubulin dimers, consisting of α- and β-monomers, constituting a microtubule. The tubulins in microtubuli are the substrate which, in Hameroff’s proposal, is used to embed Penrose’s theoretical framework neurophysiologically. As will be discussed in more detail in Section 3.5, tubulin states are assumed to depend on quantum events, so that quantum coherence among different tubulins is possible. Further, a crucial thesis in the scenario of Penrose and Hameroff is that the (gravitation-induced) collapse of such coherent tubulin states corresponds to elementary acts of consciousness. 3.2 Stapp: Quantum State Reductions and Conscious Acts The act of measurement is a crucial aspect of the framework of quantum theory that has been the subject of controversy for more than eight decades now. In his monograph on the mathematical foundations of quantum mechanics, von Neumann (1955, Chap. V.1) introduced, in an ad hoc manner, the projection postulate as a mathematical tool for describing measurement in terms of a discontinuous, non-causal, instantaneous (irreversible) act given by (1) the transition of a quantum state to an eigenstate \(b_j\) of the measured observable \(B\) (with a certain probability). This transition is often called the collapse or reduction of the wavefunction, as opposed to (2) the continuous, unitary (reversible) evolution of a system according to the Schrödinger equation. In Chapter VI, von Neumann (1955) discussed the conceptual distinction between observed and observing system.
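For reference, a standard textbook rendering of these two process types (a sketch in modern notation, not von Neumann's own): process (1) projects the state onto the eigenspace of the observed eigenvalue with the Born probability, while process (2) is continuous Schrödinger evolution,

\[ \text{(1)} \quad \lvert\psi\rangle \;\longrightarrow\; \frac{P_j \lvert\psi\rangle}{\lVert P_j \lvert\psi\rangle \rVert}, \qquad \Pr(b_j) = \langle \psi \rvert P_j \lvert \psi \rangle \]

\[ \text{(2)} \quad i\hbar \frac{\partial}{\partial t} \lvert\psi(t)\rangle = H \lvert\psi(t)\rangle \]

where \(P_j\) is the projector onto the eigenspace of \(B\) with eigenvalue \(b_j\) and \(H\) is the Hamiltonian of the system.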
In this context, he applied (1) and (2) to the general situation of a measured object system (I), a measuring instrument (II), and (the brain of) a human observer (III). His conclusion was that it makes no difference for the result of measurements on (I) whether the boundary between observed and observing system is posited between (I) and (II & III) or between (I & II) and (III). As a consequence, it is inessential whether a detector or the human brain is ultimately referred to as the “observer”.[7] By contrast to von Neumann’s fairly cautious stance, London and Bauer (1939) went further and proposed that it is indeed human consciousness which completes the quantum measurement process (see Jammer (1974, Sec. 11.3) or Shimony (1963) for a detailed account). In this way, they attributed a crucial role to consciousness in understanding quantum measurement in terms of an update of the observer’s knowledge. In the 1960s, Wigner (1967) radicalized this proposal[8] by suggesting an impact of consciousness on the physical state of the measured system, not only an impact on observer knowledge. In order to describe measurement as a real dynamical process generating irreversible facts, Wigner called for some nonlinear modification of (2) to replace von Neumann’s projection (1).[9] Since the 1980s, Stapp has developed his own point of view against the background of von Neumann and Wigner. In particular, he tries to understand specific features of consciousness in relation to quantum theory. Inspired by von Neumann, Stapp uses the freedom to place the interface between observed and observing system and locates it in the observer’s brain. He does not suggest any formal modifications to present-day quantum theory (in particular, he stays essentially within the “orthodox” Hilbert space representation), but adds major interpretational extensions, in particular with respect to a detailed ontological framework. In his earlier work, Stapp (1993) started with Heisenberg’s distinction between the potential and the actual (Heisenberg 1958), thereby taking a decisive step beyond the operational Copenhagen interpretation of quantum mechanics. While Heisenberg’s notion of the actual is related to a measured event in the sense of the Copenhagen interpretation, his notion of the potential, of a tendency, relates to the situation before measurement, which expresses the idea of a reality independent of measurement.[10] Immediately after its actualization, each event holds the tendency for the impending actualization of another, subsequent actual event. Therefore, events are by definition ambiguous. With respect to their actualized aspect, Stapp’s essential move is to “attach to each Heisenberg actual event an experiential aspect. The latter is called the feel of this event, and it can be considered to be the aspect of the actual event that gives it its status as an intrinsic actuality” (Stapp 1993, p. 149). With respect to their tendency aspect, it is tempting to understand events in terms of scheme (B) of Section 2. This is related to Whitehead’s ontology, in which mental and physical poles of so-called “actual occasions” are considered as psychological and physical aspects of reality. The potential antecedents of actual occasions are psychophysically neutral and refer to a mode of existence at which mind and matter are unseparated. This is expressed, for instance, by Stapp’s notion of a “hybrid ontology” with “both idea-like and matter-like qualities” (Stapp 1999, 159). Similarities with a dual-aspect approach (B) (cf.
Section 5) are evident. In an interview of 2006, Stapp (2006) specifies some ontological features of his approach with respect to Whitehead’s process thinking, where actual occasions rather than matter or mind are fundamental elements of reality. They are conceived as based on a processual rather than a substantial ontology (see the entry on process philosophy). Stapp relates the fundamentally processual nature of actual occasions to both the physical act of state reduction and the correlated psychological intentional act. Another significant aspect of his approach is the possibility that “conscious intentions of a human being can influence the activities of his brain” (Stapp 1999, p. 153). Different from the possibly misleading notion of a direct interaction, suggesting an interpretation in terms of scheme (A) of Section 2, he describes this feature in a more subtle manner. The requirement that the mental and material outcomes of an actual occasion must match, i.e. be correlated, acts as a constraint on the way in which these outcomes are formed within the actual occasion (cf. Stapp 2006). The notion of interaction is thus replaced by the notion of a constraint set by mind-matter correlations (see also Stapp 2007). At a level at which conscious mental states and material brain states are distinguished, each conscious experience, according to Stapp (1999, p. 153), has as its physical counterpart a quantum state reduction actualizing “the pattern of activity that is sometimes called the neural correlate of that conscious experience”. This pattern of activity may encode an intention and, thus, represent a “template for action”. An intentional decision for an action, preceding the action itself, is then the key for anything like free will in this picture. Stapp argues that the mental effort, i.e. attention devoted to such intentional acts, can protract the lifetime of the neuronal assemblies that represent the templates for action due to quantum Zeno-type effects. Concerning the neurophysiological implementation of this idea, intentional mental states are assumed to correspond to reductions of superposition states of neuronal assemblies. Additional commentary concerning the concepts of attention and intention in relation to James’ idea of a holistic stream of consciousness (James 1950 [1890]) was given by Stapp (1999). For further progress, it will be mandatory to develop a coherent formal framework for this approach and elaborate on concrete details. For instance, it is not yet worked out precisely how quantum superpositions and their collapses are supposed to occur in neural correlates of conscious events. Some indications are outlined by Schwartz et al. (2005). With these desiderata for future work, the overall conception is conservative insofar as the physical formalism remains unchanged. This is why Stapp insisted for years that his approach does not change what he calls “orthodox” quantum mechanics, which is essentially encoded in the statistical formulation by von Neumann (1955). From the point of view of standard present-day quantum physics, however, it is certainly unorthodox to include the mental state of observers in the theory. Although it is true that quantum measurement is not yet finally understood in terms of physical theory, introducing mental states as the essential missing link is highly speculative from a contemporary perspective. This link is a radical conceptual move. 
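To make the quantum Zeno effect invoked above concrete, here is a standard textbook estimate (not specific to Stapp's neural proposal): for a two-state system that would oscillate out of its initial state at frequency \(\omega\), performing \(N\) equally spaced projective measurements within a time \(T\) gives a survival probability

\[ P_{\text{survive}}(N) \;=\; \left[ \cos^2\!\left( \frac{\omega T}{N} \right) \right]^{N} \;\approx\; 1 - \frac{(\omega T)^2}{N} \;\xrightarrow{\,N \to \infty\,}\; 1, \]

so sufficiently frequent observation effectively freezes the state. This is the sense in which attention, modeled as repeated state reduction, could protract the lifetime of a "template for action".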
In what Stapp now denotes as a "semi-orthodox" approach (Stapp 2015), he proposes that the blind-chance kind of randomness of individual quantum events ("nature's choices") be reconceived as "not actually random but positively or negatively biased by the positive or negative values in the minds of the observers that are actualized by its (nature's) choices" (p. 187). This hypothesis leads into widely unknown territory at present: mental influences on quantum physical processes.
3.3 Vitiello and Freeman: Quantum Field Theory of Brain States
In the 1960s, Ricciardi and Umezawa (1967) suggested utilizing the formalism of quantum field theory to describe brain states, with particular emphasis on memory. The basic idea is to conceive of memory states in terms of states of many-particle systems, as inequivalent representations of vacuum states of quantum fields.[11] This proposal has gone through several refinements (e.g., Stuart et al. 1978, 1979; Jibu and Yasue 1995). Major recent progress has been achieved by including effects of dissipation, chaos, fractals and quantum noise (Vitiello 1995; Pessa and Vitiello 2003; Vitiello 2012). For readable nontechnical accounts of the approach in its present form, embedded in quantum field theory as of today, see Vitiello (2001, 2002). Quantum field theory (see the entry on quantum field theory) deals with systems with infinitely many degrees of freedom. For such systems, the algebra of observables that results from imposing canonical commutation relations admits of multiple Hilbert-space representations that are not unitarily equivalent to each other. This differs from the case of standard quantum mechanics, which deals with systems with finitely many degrees of freedom. For such systems, the corresponding algebra of observables admits only of unitarily equivalent Hilbert-space representations. The inequivalent representations of quantum field theory can be generated by spontaneous symmetry breaking (see the entry on symmetry and symmetry breaking), occurring when the ground state (or the vacuum state) of a system is not invariant under the full group of transformations providing the conservation laws for the system. If symmetry breaks down, collective modes are generated (so-called Nambu-Goldstone boson modes), which propagate over the system and introduce long-range correlations in it. These correlations are responsible for the emergence of ordered patterns. Unlike in standard thermal systems, a large number of bosons can be condensed in an ordered state in a highly stable fashion. Roughly speaking, this provides a quantum field theoretical derivation of ordered states in many-body systems described in terms of statistical physics. In the proposal by Umezawa these dynamically ordered states represent coherent activity in neuronal assemblies. The activation of a neuronal assembly is necessary to make the encoded content consciously accessible. This activation is considered to be initiated by external stimuli. Unless the assembly is activated, its content remains unconscious, unaccessed memory. According to Umezawa, coherent neuronal assemblies correlated to such memory states are regarded as vacuum states; their activation leads to excited states and enables a conscious recollection of the content encoded in the vacuum (ground) state. The stability of such states and the role of external stimuli have been investigated in detail by Stuart et al. (1978, 1979).
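The formal point behind these inequivalent representations can be stated compactly; the following is standard textbook material, not specific to the brain application. For a system of N degrees of freedom with canonical commutation relations

\[
[\hat{q}_j, \hat{p}_k] = i\hbar\,\delta_{jk}, \qquad j,k = 1,\dots,N,
\]

the Stone-von Neumann theorem guarantees for finite N that all irreducible Hilbert-space representations are unitarily equivalent. In the limit \(N \to \infty\) the theorem fails: two vacua \(|0\rangle\) and \(|0'\rangle\) of a symmetry-broken system can satisfy \(\langle 0|0'\rangle = 0\), so that no unitary operator maps one representation onto the other. It is such disjoint vacua that the Umezawa proposal employs as distinct memory states.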
A decisive further step in developing the approach has been achieved by taking dissipation into account. Dissipation is possible when the interaction of a system with its environment is considered. Vitiello (1995) describes how the system-environment interaction causes a doubling of the collective modes of the system in its environment. This yields infinitely many differently coded vacuum states, offering the possibility of many memory contents without overprinting. Moreover, dissipation leads to finite lifetimes of the vacuum states, thus representing temporally limited rather than unlimited memory (Alfinito and Vitiello 2000; Alfinito et al. 2001). Finally, dissipation generates a genuine arrow of time for the system, and its interaction with the environment induces entanglement. Pessa and Vitiello (2003) have addressed additional effects of chaos and quantum noise. Umezawa's proposal addresses the brain as a many-particle system as a whole, where the "particles" are more or less neurons. In the language of Section 3.1, this refers to the level of neuronal assemblies, which correlate directly with mental activity. Another merit of the quantum field theory approach is that it avoids the restrictions of standard quantum mechanics in a formally sound way. Conceptually speaking, many of the pioneering presentations of the proposal nevertheless confused mental and material states (and their properties). This has been clarified by Freeman and Vitiello (2008): the model "describes the brain, not mental states." For a corresponding description of brain states, Freeman and Vitiello (2006, 2008, 2010) studied neurobiologically relevant observables such as electric and magnetic field amplitudes and neurotransmitter concentration. They found evidence for non-equilibrium analogs of phase transitions (Vitiello 2015) and power-law distributions of spectral energy densities of electrocorticograms (Freeman and Vitiello 2010, Freeman and Quian Quiroga 2013). All these observables are classical, so that neurons, glia cells, "and other physiological units are not quantum objects in the many-body model of brain" (Freeman and Vitiello 2008). However, Vitiello (2012) also points out that the emergence of (self-similar, fractal) power-law distributions in general is intimately related to dissipative quantum coherent states (see also recent developments of the Penrose-Hameroff scenario, Section 3.5). The overall conclusion is that the application of quantum field theory describes why and how classical behavior emerges at the level of brain activity considered. The relevant brain states themselves are viewed as classical states. Similar to a classical thermodynamical description arising from quantum statistical mechanics, the idea is to identify different regimes of stable behavior (phases, attractors) and transitions between them. In this way, quantum field theory provides formal elements from which a standard classical description of brain activity can be inferred, and this is its main role in large parts of the model. Only in their last joint paper did Freeman and Vitiello (2016) envision a way in which the mental can be explicitly included. For a recent review including technical background see Sabbadini and Vitiello (2019).
3.4 Beck and Eccles: Quantum Mechanics at the Synaptic Cleft
Probably the most concrete suggestion of how quantum mechanics in its present-day appearance can play a role in brain processes is due to Beck and Eccles (1992), later refined by Beck (2001).
It refers to particular mechanisms of information transfer at the synaptic cleft. However, ways in which these quantum processes might be relevant for mental activity, and in which their interactions with mental states are conceived, remain unclarified to the present day.[12] As presented in Section 3.1, the information flow between neurons in chemical synapses is initiated by the release of transmitters in the presynaptic terminal. This process is called exocytosis, and it is triggered by an arriving nerve impulse with some small probability. In order to describe the trigger mechanism in a statistical way, thermodynamics or quantum mechanics can be invoked. A look at the corresponding energy regimes shows (Beck and Eccles 1992) that quantum processes are distinguishable from thermal processes for energies higher than 10⁻² eV (at room temperature). Assuming a typical length scale for biological microsites of the order of several nanometers, an effective mass below 10 electron masses is sufficient to ensure that quantum processes prevail over thermal processes. The upper limit of the time scale of such processes in the quantum regime is of the order of 10⁻¹² sec. This is significantly shorter than the time scale of cellular processes, which is 10⁻⁹ sec and longer. The sizable difference between the two time scales makes it possible to treat the corresponding processes as decoupled from one another. The detailed trigger mechanism proposed by Beck and Eccles (1992) is based on the quantum concept of quasi-particles, reflecting the particle aspect of a collective mode. Skipping the details of the picture, the proposed trigger mechanism refers to tunneling processes of two-state quasi-particles, resulting in state collapses. It yields a probability of exocytosis in the range between 0 and 0.7, in agreement with empirical observations. Using a theoretical framework developed earlier (Marcus 1956; Jortner 1976), the quantum trigger can be concretely understood in terms of electron transfer between biomolecules. However, the question remains how the trigger may be relevant for conscious mental states. There are two aspects to this question. The first one refers to Eccles' intention to utilize quantum processes in the brain as an entry point for mental causation. The idea, as indicated in Section 1, is that the fundamentally indeterministic nature of individual quantum state collapses offers room for the influence of mental powers on brain states. In the present picture, this is conceived in such a way that "mental intention (volition) becomes neurally effective by momentarily increasing the probability of exocytosis" (Beck and Eccles 1992, 11360). Further justification of this assumption is not given. The second aspect refers to the problem that processes at single synapses cannot be simply correlated to mental activity, whose neural correlates are coherent assemblies of neurons. Most plausibly, prima facie uncorrelated random processes at individual synapses would result in a stochastic network of neurons (Hepp 1999). Although Beck (2001) has indicated possibilities (such as quantum stochastic resonance) for achieving ordered patterns at the level of assemblies from fundamentally random synaptic processes, this remains an unsolved problem. With the exception of Eccles' idea of mental causation, the approach by Beck and Eccles essentially focuses on brain states and brain dynamics.
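As a rough back-of-the-envelope check of these orders of magnitude (my own sketch, not Beck and Eccles' actual derivation; comparing the thermal energy with the thermal de Broglie wavelength is an illustrative criterion assumed here):

```python
import numpy as np

hbar = 1.0546e-34   # reduced Planck constant, J s
h    = 2 * np.pi * hbar
kB   = 1.3807e-23   # Boltzmann constant, J/K
me   = 9.1094e-31   # electron mass, kg
eV   = 1.6022e-19   # J per eV

T = 300.0                      # room temperature, K
print(f"thermal energy scale  ~ {kB * T / eV:.3f} eV")    # ~0.026 eV, near the quoted 10^-2 eV

m = 10 * me                    # effective quasi-particle mass assumed above
lam = h / np.sqrt(2 * np.pi * m * kB * T)   # thermal de Broglie wavelength
print(f"quantum length scale  ~ {lam * 1e9:.2f} nm")      # nanometer range

E_q = 1e-2 * eV                # quoted threshold energy of the quantum regime
print(f"quantum time scale    ~ {hbar / E_q:.1e} s")      # ~7e-14 s, below the 10^-12 sec bound
```

The numbers reproduce the regime quoted above: thermal energies of about 10⁻² eV, quantum length scales of nanometers for an effective mass of ten electron masses, and time scales several orders of magnitude below the cellular scale of 10⁻⁹ sec.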
Concerning this restriction to brain dynamics, Beck (2001, 109f) states explicitly that "science cannot, by its very nature, present any answer to […] questions related to the mind". Nevertheless, their biophysical approach may open the door to controlled speculation about mind-matter relations. A more recent proposal targeting exocytosis processes at the synaptic cleft is due to Fisher (2015, 2017). Similar to the quasi-particles invoked by Beck and Eccles, Fisher refers to so-called Posner molecules, in particular to calcium phosphate, Ca₉(PO₄)₆. The nuclear spins of phosphate ions serve as entangled qubits within the molecules, which protect their coherent states against fast decoherence (resulting in extremely long decoherence times in the range of hours or even days). If the Posner molecules are transported into presynaptic glutamatergic neurons, they will stimulate further glutamate release and amplify postsynaptic activity. Due to nonlocal quantum correlations this activity may be enhanced over multiple neurons (which would respond to Hepp's concern). This is a sophisticated mechanism that calls for empirical tests. One of them would be to modify the phosphorus spin dynamics within the Posner molecules. For instance, replacing Ca by different Li isotopes with different nuclear spins gives rise to different decoherence times, affecting postsynaptic activity. Corresponding evidence has been shown in animals (Sechzer et al. 1986, Krug et al. 2019). In fact, lithium is known to be efficacious in tempering manic phases in patients with bipolar disorder.
3.5 Penrose and Hameroff: Quantum Gravity and Microtubuli
In the scenario developed by Penrose and neurophysiologically augmented by Hameroff, quantum theory is claimed to be effective for consciousness, but the way this happens is quite sophisticated. It is argued that elementary acts of consciousness are non-algorithmic, i.e., non-computable, and they are neurophysiologically realized as gravitation-induced reductions of coherent superposition states in microtubuli. Unlike the approaches discussed so far, which are essentially based on (different features of) status quo quantum theory, the physical part of the scenario, proposed by Penrose, refers to future developments of quantum theory for a proper understanding of the physical process underlying quantum state reduction. The grander picture is that a full-blown theory of quantum gravity is required to ultimately understand quantum measurement (see the entry on quantum gravity). This is a far-reaching assumption. Penrose's rationale for invoking state reduction is not that the corresponding randomness offers room for mental causation to become efficacious (although this is not excluded). His conceptual starting point, developed at length in two books (Penrose 1989, 1994), is that elementary conscious acts cannot be described algorithmically, hence cannot be computed. His background in this respect has a lot to do with the nature of creativity, mathematical insight, Gödel's incompleteness theorems, and the idea of a Platonic reality beyond mind and matter. Penrose argues that a valid formulation of quantum state reduction replacing von Neumann's projection postulate must faithfully describe an objective physical process that he calls objective reduction. As such a physical process remains empirically unconfirmed so far, Penrose proposes that effects not currently covered by quantum theory could play a role in state reduction.
Ideal candidates for him are gravitational effects, since gravitation is the only fundamental interaction which is not integrated into quantum theory so far. Rather than modifying elements of the theory of gravitation (i.e., general relativity) to achieve such an integration, Penrose discusses the reverse: that novel features have to be incorporated in quantum theory for this purpose. In this way, he arrives at the proposal of gravitation-induced objective state reduction. Why is such a version of state reduction non-computable? Initially one might think of objective state reduction in terms of a stochastic process, as most current proposals for such mechanisms indeed do (see the entry on collapse theories). This would certainly be indeterministic, but probabilistic and stochastic processes can be standardly implemented on a computer, hence they are definitely computable. Penrose (1994, Secs 7.8 and 7.10) sketches some ideas concerning genuinely non-computable, not only random, features of quantum gravity. In order for them to become viable candidates for explaining the non-computability of gravitation-induced state reduction, there is still a long way to go. With respect to the neurophysiological implementation of Penrose's proposal, his collaboration with Hameroff has been instrumental. With his background as an anaesthesiologist, Hameroff suggested considering microtubules as a location where reductions of quantum states can take place in an effective way, see e.g., Hameroff and Penrose (1996). The respective quantum states are assumed to be coherent superpositions of tubulin states, ultimately extending over many neurons. Their simultaneous gravitation-induced collapse is interpreted as an individual elementary act of consciousness. The proposed mechanism by which such superpositions are established includes a number of involved details that remain to be confirmed or disproven. The idea of focusing on microtubuli is partly motivated by the argument that special locations are required to ensure that quantum states can live long enough to become reduced by gravitational influence rather than by interactions with the warm and wet environment within the brain. Speculative remarks about how the non-computable aspects of the expected new physics mentioned above could be significant in this scenario[13] are given in Penrose (1994, Sec. 7.7). Influential criticism of the possibility that quantum states can in fact survive long enough in the thermal environment of the brain has been raised by Tegmark (2000). He estimates the decoherence time of tubulin superpositions due to interactions in the brain to be less than 10⁻¹² sec. Compared to typical time scales of microtubular processes of the order of milliseconds and more, he concludes that the lifetime of tubulin superpositions is much too short to be significant for neurophysiological processes in the microtubuli. In a response to this criticism, Hagan et al. (2002) showed that a corrected version of Tegmark's model provides decoherence times up to 10 to 100 μsec, and it has been argued that this can be extended up to the neurophysiologically relevant range of 10 to 100 msec under particular assumptions of the scenario by Penrose and Hameroff. More recently, a novel idea has entered this debate. Theoretical studies of interacting spins have shown that entangled states can be maintained in noisy open quantum systems at high temperature and far from thermal equilibrium.
In these studies the effect of decoherence is counterbalanced by a simple "recoherence" mechanism (Hartmann et al. 2006, Li and Paraoanu 2009). This indicates that, under particular circumstances, entanglement may persist even in hot and noisy environments such as the brain. However, decoherence is just one piece in the debate about the overall picture suggested by Penrose and Hameroff. From another perspective, their proposal of microtubules as quantum computing devices has recently received support from work in Bandyopadhyay's lab in Japan, showing evidence for vibrational resonances and conductivity features in microtubules that should be expected if they are macroscopic quantum systems (Sahu et al. 2013). Bandyopadhyay's results attracted considerable attention and commentary (see Hameroff and Penrose 2014). In a well-informed in-depth analysis, Pitkänen (2014) raised concerns to the effect that the reported results alone may not be sufficient to confirm the approach proposed by Hameroff and Penrose with all its ramifications. In a different vein, Craddock et al. (2015, 2017) discussed in detail how microtubular processes (rather than, or in addition to, synaptic processes, see Flohr 2000) may be affected by anesthetics, and may also be responsible for neurodegenerative memory disorders. As the correlation between anesthetics and consciousness seems obvious at the phenomenological level, it is interesting to know the intricate mechanisms by which anesthetic drugs act on the cytoskeleton of neuronal cells,[14] and what role quantum mechanics plays in these mechanisms. Craddock et al. (2015, 2017) point out a number of possible quantum effects (including the power-law behavior addressed by Vitiello, cf. Section 3.3) which can be investigated using presently available technologies. Recent empirical results about quantum interactions of anesthetics are due to Li et al. (2018) and Burdick et al. (2019). From a philosophical perspective, the scenario of Penrose and Hameroff has occasionally received outspoken rejection, see e.g., Grush and Churchland (1995) and the reply by Penrose and Hameroff (1995). Indeed, their approach collects several top-level mysteries, among them the relation between mind and matter itself, the ultimate unification of all physical interactions, the origin of mathematical truth, and the understanding of brain dynamics across hierarchical levels. Combining such deep and fascinating issues certainly needs further work to be substantiated, and should neither be too quickly celebrated nor offhandedly dismissed. After more than two decades since its inception, one thing can be safely asserted: the approach has fruitfully inspired important innovative research on quantum effects on consciousness, both theoretical and empirical.
4. Quantum Mind
4.1 Applying Quantum Concepts to Mental Systems
Today there is accumulating evidence in the study of consciousness that quantum concepts like complementarity, entanglement, dispersive states, and non-Boolean logic play significant roles in mental processes. Corresponding quantum-inspired approaches address purely mental (psychological) phenomena using formal features also employed in quantum physics, but without involving the full-fledged framework of quantum mechanics or quantum field theory. The term "quantum cognition" has been coined to refer to this new area of research. Perhaps a more appropriate characterization would be non-commutative structures in cognition.
On the surface, this seems to imply that the brain activity correlated with those mental processes is in fact governed by quantum physics. The quantum brain approaches discussed in Section 3 are attempts along these lines. But is it necessarily true that quantum features in psychology imply quantum physics in the brain? A formal move to incorporate quantum behavior in mental systems, without referring to quantum brain activity, is based on a state space description of mental systems. If mental states are defined on the basis of cells of a neural state space partition, then this partition needs to be well tailored to lead to robustly defined states. Ad hoc chosen partitions will generally create incompatible descriptions (Atmanspacher and beim Graben 2007) and states may become entangled (beim Graben et al. 2013). This implies that quantum brain dynamics is not the only possible explanation of quantum features in mental systems. Assuming that mental states arise from partitions of neural states in such a way that statistical neural states are co-extensive with individual mental states, the nature of mental processes depends strongly on the kind of partition chosen. If the partition is not properly constructed, it is likely that mental states and observables show features that resemble quantum behavior although the correlated brain activity may be entirely classical: quantum mind without quantum brain. Intuitively, it is not difficult to understand why non-commuting operations or non-Boolean logic should be relevant, even inevitable, for mental systems that have nothing to do with quantum physics. Simply speaking, the non-commutativity of operations means nothing else than that the sequence in which operations are applied matters for the final result. And non-Boolean logic refers to propositions that may have unsharp truth values beyond yes or no, shades of plausibility or credibility as it were. Both features obviously abound in psychology and cognitive science (and in everyday life). Pylkkänen (2015) has even suggested using this intuitive accessibility of mental quantum features for a better conceptual grasp of quantum physics. The particular strength of the idea of generalizing quantum theory beyond quantum physics is that it provides a formal framework which both yields a transparent well-defined link to conventional quantum physics and has been used to describe a number of concrete psychological applications with surprisingly detailed theoretical and empirical results. Corresponding approaches fall under the third category mentioned in Section 3: further developments or generalizations of quantum theory. One rationale for the focus on psychological phenomena is that their detailed study is a necessary precondition for further questions as to their neural correlates. Therefore, the investigation of mental quantum features resists the temptation to reduce them (within scenario A) all too quickly to neural activity. There are several kinds of psychological phenomena which have been addressed in the spirit of mental quantum features so far: (i) decision processes, (ii) order effects, (iii) bistable perception, (iv) learning, (v) semantic networks, (vi) quantum agency, and (vii) super-quantum entanglement correlations. These topics will be outlined in some more detail in the following Section 4.2.
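Before turning to these topics, a minimal numerical sketch may make the central formal point concrete. The toy model below is my own illustration, not a model from the literature: the state, the projectors, and the Hilbert space dimension are arbitrary choices. It represents two survey questions as non-commuting projectors, so that the "yes" probability for one question depends on whether the other was asked first, while the parameter-free QQ equality of Wang et al. (2014), discussed in the order-effects paragraph below, holds identically.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def random_projector(d, k):
    # orthogonal projector onto a random k-dimensional subspace
    m = rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k))
    q, _ = np.linalg.qr(m)
    return q @ q.conj().T

d = 4
psi = random_state(d)
A = random_projector(d, 2)   # "yes" subspace of question A
B = random_projector(d, 2)   # "yes" subspace of question B
I = np.eye(d)

def p_seq(P1, P2):
    # probability of "P1-outcome first, then P2-outcome" (Lüders rule)
    return np.linalg.norm(P2 @ (P1 @ psi)) ** 2

# order effect: the marginal "yes" to B shifts if A is asked first
p_B_alone = np.linalg.norm(B @ psi) ** 2
p_B_after_A = p_seq(A, B) + p_seq(I - A, B)
print(f"p(B=yes) alone:   {p_B_alone:.4f}")
print(f"p(B=yes) after A: {p_B_after_A:.4f}")   # generally differs

# QQ equality: p_AB(y,y) + p_AB(n,n) = p_BA(y,y) + p_BA(n,n),
# an exact consequence of sequential projective measurement
lhs = p_seq(A, B) + p_seq(I - A, I - B)
rhs = p_seq(B, A) + p_seq(I - B, I - A)
print(f"QQ equality holds: {np.isclose(lhs, rhs)}")
```

The QQ check prints True for any state and any pair of projectors; this parameter-free character is what makes the equality a sharp empirical test of the quantum survey model.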
It is a distinguishing aspect of these approaches that they have led to well-defined and specific theoretical models with empirical consequences and novel predictions. A second point worth mentioning is that by now there are a number of research groups worldwide (rather than solitary actors) studying quantum ideas in cognition, partly even in collaborative efforts. For about a decade there have been regular international conferences with proceedings for the exchange of new results and ideas, and target articles, special issues, and monographs have been devoted to basic frameworks and new developments (Khrennikov 1999, Atmanspacher et al. 2002, Busemeyer and Bruza 2012, Haven and Khrennikov 2013, Wendt 2015).
4.2 Concrete Applications
Decision Processes
An early precursor of work on decision processes is due to Aerts and Aerts (1994). However, the first detailed account appeared in a comprehensive publication by Busemeyer et al. (2006). The key idea is to define probabilities for decision outcomes and decision times in terms of quantum probability amplitudes. Busemeyer et al. found agreement of a suitable Hilbert space model (and disagreement of a classical alternative) with empirical data. Moreover, they were able to clarify the long-standing riddle of the so-called conjunction and disjunction effects (Tversky and Shafir 1992) in decision making (Pothos and Busemeyer 2009). Another application refers to the asymmetry of similarity judgments (Tversky 1977), which can be adequately understood by quantum approaches (see Aerts et al. 2011, Pothos et al. 2013).
Order Effects
Order effects in polls, surveys, and questionnaires, recognized for a long time (Schwarz and Sudman 1992), are still insufficiently understood today. Their study as contextual quantum features (Aerts and Aerts 1994, Busemeyer et al. 2011) offers the potential to unveil a lot more about such effects than the well-known fact that responses can alter drastically if questions are swapped. Atmanspacher and Römer (2012) proposed a complete classification of possible order effects (including uncertainty relations, and independent of Hilbert space representations), and Wang et al. (2014) discovered a fundamental covariance condition (called the QQ equation) for a wide class of order effects. An important issue for quantum mind approaches is the complexity or parsimony of Hilbert space models as compared to classical (Bayesian, Markov, etc.) models. Atmanspacher and Römer (2012) as well as Busemeyer and Wang (2018) addressed this issue for order effects, with the result that quantum approaches generally require fewer free parameters than competing classical models and are thus more parsimonious and more stringent. Busemeyer and Wang (2017) studied how measuring incompatible observables sequentially induces uncertainty in the second measurement outcome.
Bistable Perception
The perception of a stimulus is bistable if the stimulus is ambiguous, such as the Necker cube. This bistable behavior has been modeled in analogy to the physical quantum Zeno effect. (Note that this differs from the quantum Zeno effect as used in Section 3.2.) The resulting Necker-Zeno model predicts a quantitative relation between basic psychophysical time scales in bistable perception that has been confirmed experimentally (see Atmanspacher and Filk 2013 for review).
Moreover, Atmanspacher and Filk (2010) showed that the Necker-Zeno model violates temporal Bell inequalities for particular distinguished states in bistable perception.[15] This theoretical prediction is yet to be tested experimentally and would be a litmus test for quantum behavior in mental systems. Such states have been denoted as temporally nonlocal in the sense that they are not sharply (pointwise) localized along the time axis but appear to be stretched over an extended time interval (an extended present). Within this interval, relations such as "earlier" or "later" are illegitimate designators and, accordingly, causal connections are ill-defined.
Learning Processes
Another quite obvious arena for non-commutative behavior is learning. In theoretical studies, Atmanspacher and Filk (2006) showed that in simple supervised learning tasks small recurrent networks not only learn the prescribed input-output relation but also the sequence in which inputs have been presented. This entails that the recognition of inputs is impaired if the sequence of presentation is changed. In very few exceptional cases, with special characteristics that remain to be explored, this impairment is avoided.
Semantic Networks
The difficult issue of meaning in natural languages is often explored in terms of semantic networks. Gabora and Aerts (2002) described the way in which concepts are evoked, used, and combined to generate meaning depending on contexts. Their ideas about concept association in evolution were further developed by Gabora and Aerts (2009). A particularly thrilling application is due to Bruza et al. (2015), who challenged a long-standing dogma in linguistics by proposing that the meaning of concept combinations (such as "apple chip") is not uniquely separable into the meanings of the combined concepts ("apple" and "chip"). Bruza et al. (2015) refer to meaning relations in terms of entanglement-style features in quantum representations of concepts and report first empirical results in this direction.
Quantum Agency
A quantum approach for understanding issues related to agency, intention, and other controversial topics in the philosophy of mind has been proposed by Briegel and Müller (2015), see also Müller and Briegel (2018). This proposal is based on work on quantum algorithms for reinforcement learning in neural networks ("projective simulation", Paparo et al. 2012), which can be regarded as a variant of quantum machine learning (Wittek 2014). The gist of the idea is how agents can develop agency as a kind of independence from their environment and the deterministic laws governing it (Briegel 2012). The behavior of the agent itself is simulated as a non-deterministic quantum random walk in its memory space.
Super-Quantum Correlations
Quantum entanglement implies correlations exceeding standard classical correlations (by violating Bell-type inequalities) but obeying the so-called Tsirelson bound. However, this bound does not exhaust the range by which Bell-type correlations can be violated in principle. Popescu and Rohrlich (1994) showed that correlations beyond the Tsirelson bound are in principle compatible with the no-signaling requirement, and the study of such super-quantum correlations has become a lively field of contemporary research, as the review by Popescu (2014) shows. One problem in assessing super-quantum correlations in mental systems is to delineate genuine (non-causal) quantum-type correlations from (causal) classical correlations that can be used for signaling.
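For orientation, the three regimes just mentioned (classical, quantum, and super-quantum) can be summarized by the standard CHSH correlation expression; this is textbook material, included here only as a reference point:

\[
S = \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle,
\qquad
|S| \le
\begin{cases}
2 & \text{classical (local) correlations,}\\
2\sqrt{2} & \text{quantum correlations (Tsirelson bound),}\\
4 & \text{no-signaling correlations.}
\end{cases}
\]

Super-quantum correlations are those in the band between \(2\sqrt{2}\) and 4.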
Dzhafarov and Kujala (2013) derived a compact way to delineate such correlations and to subtract classical context effects, such as priming, in mental systems so that true quantum-type correlations remain. See Cervantes and Dzhafarov (2018) for empirical applications, and Atmanspacher and Filk (2019) for further subtleties.
5. Mind and Matter as Dual Aspects
5.1 Compositional and Decompositional Approaches
Dual-aspect approaches consider mental and material domains of reality as aspects, or manifestations, of one underlying reality in which mind and matter are unseparated. In such a framework, the distinction between mind and matter results from the application of a basic tool for achieving epistemic access to, i.e., gathering knowledge about, both the separated domains and the underlying reality.[16] Consequently, the status of the underlying, psychophysically neutral domain is considered as ontic relative to the mind-matter distinction. As mentioned in Section 2, dual-aspect approaches have a long history, essentially starting with Spinoza as a most outspoken protagonist. Major directions in the 20th century have been described and compared in some detail by Atmanspacher (2014). An important distinction between two basic classes of dual-aspect thinking is the way in which the psychophysically neutral domain is related to the mental and the physical. For Russell and the neo-Russellians the compositional arrangements of psychophysically neutral elements decide how they differ with respect to mental or physical properties. As a consequence, the mental and the physical are reducible to the neutral domain. Chalmers' (1996, Chap. 8) ideas on "consciousness and information" fall into this class. Tononi's theoretical framework of "integrated information theory" (see Oizumi et al. 2014, Tononi and Koch 2015) can be seen as a concrete implementation of a number of features of Chalmers' proposal. No quantum structures are involved in this work. The other class of dual-aspect thinking is decompositional rather than compositional. Here the basic metaphysics of the psychophysically neutral domain is holistic, and the mental and the physical (neither reducible to one another nor to the neutral) emerge by breaking the holistic symmetry or, in other words, by making distinctions. This framework is guided by the analogy to quantum holism, and the predominant versions of this picture are quantum theoretically inspired as, for instance, proposed by Pauli and Jung (Jung and Pauli 1955; Meier 2001) and by Bohm and Hiley (Bohm 1990; Bohm and Hiley 1993; Hiley 2001). They are based on speculations that clearly exceed the scope of contemporary quantum theory. In Bohm's and Hiley's approach, the notions of implicate and explicate order mirror the distinction between ontic and epistemic domains. Mental and physical states emerge by explication, or unfoldment, from an ultimately undivided and psychophysically neutral implicate, enfolded order. This order is called holomovement because it is not static but rather dynamic, as in Whitehead's process philosophy. De Gosson and Hiley (2013) give a good introduction to how the holomovement can be addressed from a formal (algebraic) point of view. At the level of the implicate order, the term active information expresses that this level is capable of "informing" the epistemically distinguished, explicate domains of mind and matter. It should be emphasized that the usual notion of information is clearly an epistemic term.
Nevertheless, there are quite a number of dual-aspect approaches addressing something like information at the ontic, psychophysically neutral level.[17] Using an information-like concept in a non-epistemic manner appears inconsistent if the common (syntactic) significance of Shannon-type information is intended, which requires distinctions in order to construct partitions, providing alternatives in the set of given events. Most information-based dual-aspect approaches do not sufficiently clarify their notion of information, so that misunderstandings easily arise.
5.2 Mind-Matter Correlations
While the proposal by Bohm and Hiley essentially sketches a conceptual framework without further concrete details, particularly concerning the mental domain, the Pauli-Jung conjecture about dual-aspect monism (Atmanspacher and Fuchs 2014) offers more material to discuss. An intuitively appealing way to represent their approach considers the distinction between epistemic and ontic domains of material reality due to quantum theory in parallel with the distinction between epistemic and ontic mental domains. On the physical side, the epistemic/ontic distinction refers to the distinction between a "local realism" of empirical facts obtained from classical measuring instruments and a "holistic realism" of entangled systems (Atmanspacher and Primas 2003). Essentially, these domains are connected by the process of measurement, thus far conceived as independent of conscious observers. The corresponding picture on the mental side refers to a distinction between conscious and unconscious domains.[18] In Jung's depth psychological conceptions, these two domains are connected by the emergence of conscious mental states from the unconscious, analogous to physical measurement. In Jung's depth psychology it is crucial that the unconscious has a collective component, unseparated between individuals and populated by so-called archetypes. They are regarded as constituting the psychophysically neutral level comprising both the collective unconscious and the holistic reality of quantum theory. At the same time they operate as "ordering factors", being responsible for the arrangement of their psychical and physical manifestations in the epistemically distinguished domains of mind and matter. More details of this picture can be found in Jung and Pauli (1955), Meier (2001), Atmanspacher and Primas (2009), Atmanspacher and Fach (2013), and Atmanspacher and Fuchs (2014). This scheme is clearly related to scenario (B) of Section 2, combining an epistemically dualistic with an ontically monistic approach. Correlations between the mental and the physical are conceived as non-causal, thus respecting the causal closure of the physical against the mental. However, there is a causal relationship (in the sense of formal rather than efficient causation) between the psychophysically neutral, monistic level and the epistemically distinguished mental and material domains. In Pauli's and Jung's terms this kind of causation is expressed by the ordering operation of archetypes in the collective unconscious. In other words, this scenario offers the possibility that the mental and material manifestations may inherit mutual correlations due to the fact that they are jointly caused by the psychophysically neutral level. One might say that such correlations are remnants reflecting the lost holism of the underlying reality. They are not the result of any direct causal interaction between mental and material domains.
Thus, they are not suitable for an explanation of direct efficient mental causation. Apparent mental causation would rather require some psychophysically neutral activity entailing correlation effects that are misinterpreted as mental causation of physical events. Independently of quantum theory, a related move was suggested by Velmans (2002, 2009). But even without mental causation, scenario (B) is relevant to ubiquitous correlations between conscious mental states and physical brain states.
5.3 Further Developments
In the Pauli-Jung conjecture, these correlations are called synchronistic and have been extended to psychosomatic relations (Meier 1975). A comprehensive typology of mind-matter correlations following from Pauli's and Jung's dual-aspect monism was proposed by Atmanspacher and Fach (2013). They found that a large body of empirical material concerning more than 2000 cases of so-called "exceptional experiences" can be classified according to their deviation from the conventional reality model of a subject and from the conventional relations between its components (see Atmanspacher and Fach 2019 for more details). Synchronistic events in the sense of Pauli and Jung appear as a special case of such relational deviations. An essential condition required for synchronistic correlations is that they are meaningful for those who experience them. It is tempting to interpret the use of meaning as an attempt to introduce semantic information as an alternative to syntactic information as addressed above. (Note the parallel to active information as in the approach by Bohm and Hiley.) Although this entails difficult problems concerning a clear-cut definition and operationalization, something akin to meaning, both explicitly and implicitly, might be a relevant informational currency for mind-matter relations within the framework of decompositional dual-aspect thinking (Atmanspacher 2014). Primas (2003, 2009, 2017) proposed a dual-aspect approach where the distinction of mental and material domains originates from the distinction between two different modes of time: tensed (mental) time, including nowness, on the one hand and tenseless (physical) time, viewed as an external parameter, on the other (see the entries on time and on being and becoming in modern physics). Regarding these two concepts of time as implied by a symmetry breaking of a timeless level of reality that is psychophysically neutral, Primas conceives the tensed time of the mental domain as quantum-correlated with the parameter time of physics via "time-entanglement". This scenario has been formulated in a Hilbert space framework with appropriate time operators (Primas 2009, 2017), so it offers a formally elaborated dual-aspect quantum framework for basic aspects of the mind-matter problem. It shows some convergence with the idea of temporally nonlocal mental states as addressed in Section 4.2. As indicated in Section 3.2, the approach by Stapp contains elements of dual-aspect thinking as well, although this is not much emphasized by its author. The dual-aspect quantum approaches discussed in the present section tend to focus on the issue of a generalized mind-matter "entanglement" more than on state reduction. The primary purpose here is to understand correlations between mental and material domains rather than direct causally efficacious interactions between them.
A final issue for dual-aspect approaches in general is the problem of panpsychism or panexperientialism, respectively (see the review by Skrbina 2003, and the entry on panpsychism). In the limit of a universal symmetry breaking at the psychophysically neutral level, every system has both a mental and a material aspect. In such a situation it is important to understand "mentality" much more generally than "consciousness". Unconscious or proto-mental acts as opposed to conscious mental acts are notions sometimes used to underline this difference. Human consciousness might then be regarded as being as special within the mental domain as its material correlate, the brain, is within the material domain.
6. Conclusions
The historical motivation for exploring quantum theory in trying to understand consciousness derived from the realization that collapse-type quantum events introduce an element of randomness, which is primary (ontic) rather than due to ignorance or missing information (epistemic). Approaches such as those of Stapp and of Beck and Eccles emphasize this (in different ways), insofar as the ontic randomness of quantum events is regarded as providing room for mental causation, i.e., the possibility that conscious mental acts can influence brain behavior. The approach by Penrose and Hameroff also focuses on state collapse, but with a significant move from mental causation to the non-computability of (particular) conscious acts. Any discussion of state collapse or state reduction (e.g. by measurement) refers, at least implicitly, to superposition states since those are the states that are reduced. Insofar as entangled systems remain in a quantum superposition as long as no measurement has occurred, entanglement is always co-addressed when state reduction is discussed. By contrast, some of the dual-aspect quantum approaches utilize the topic of entanglement differently, and independently of state reduction in the first place. Inspired by and analogous to entanglement-induced nonlocal correlations in quantum physics, mind-matter entanglement is conceived as the hypothetical origin of mind-matter correlations. This exhibits the highly speculative picture of a fundamentally holistic, psychophysically neutral level of reality from which correlated mental and material domains emerge. Each of the examples discussed in this overview has both promising and problematic aspects. The approach by Beck and Eccles is most detailed and concrete with respect to the application of standard quantum mechanics to the process of exocytosis. However, it does not solve the problem of how the activity of single synapses enters the dynamics of neural assemblies, and it leaves the mental causation of quantum processes as a mere claim. Stapp's approach suggests a radically expanded ontological basis for both the mental domain and status-quo quantum theory as a theory of matter without essentially changing the formalism of quantum theory. Although related to inspiring philosophical and some psychological background, it still lacks empirical confirmation. The proposal by Penrose and Hameroff exceeds the domain of present-day quantum theory by far and is the most speculative example among those discussed. It is not easy to see how the picture as a whole can be formally worked out and put to empirical test. The approach initiated by Umezawa is embedded in the framework of quantum field theory, more broadly applicable and formally more sophisticated than standard quantum mechanics.
It is used to describe the emergence of classical activity in neuronal assemblies on the basis of symmetry breakings in a quantum field theoretical framework. A clear conceptual distinction between brain states and mental states has often been missing, though their relation has recently been indicated in the framework of a dual-aspect approach. The dual-aspect approaches of Pauli and Jung and of Bohm and Hiley are conceptually more transparent and more promising. Although there is now a huge body of empirically documented mind-matter correlations that supports the Pauli-Jung conjecture, it lacks a detailed formal basis so far. Hiley's work offers an algebraic framework which may lead to theoretical progress. A novel dual-aspect quantum proposal by Primas, based on the distinction between tensed mental time and tenseless physical time, marks a significant step forward, particularly as concerns a consistent formal framework. Maybe the best prognosis for future success among the examples described in this overview, at least on foreseeable time scales, goes to the investigation of mental quantum features without focusing on associated brain activity to begin with. A number of corresponding approaches have been developed which include concrete models for concrete situations and have led to successful empirical tests and further predictions. On the other hand, a coherent theory behind individual models and relating the different types of approaches is still to be worked out in detail. With respect to scientific practice, a particularly promising aspect is the visible formation of a scientific community with conferences, mutual collaborations, and a noticeable attraction for young scientists to join the field.
• Aerts, D., Durt, T., Grib, A., Van Bogaert, B., and Zapatrin, A., 1993, "Quantum structures in macroscopical reality," International Journal of Theoretical Physics, 32: 489–498.
• Aerts, D., and Aerts, S., 1994, "Applications of quantum statistics in psychological studies of decision processes," Foundations of Science, 1: 85–97.
• Aerts, S., Kitto, K., and Sitbon, L., 2011, "Similarity metrics within a point of view," in Quantum Interaction. 5th International Conference, D. Song, et al. (eds.), Berlin: Springer, pp. 13–24.
• Alfinito, E., and Vitiello, G., 2000, "Formation and life-time of memory domains in the dissipative quantum model of brain," International Journal of Modern Physics B, 14: 853–868.
• Alfinito, E., Viglione, R.G., and Vitiello, G., 2001, "The decoherence criterion," Modern Physics Letters B, 15: 127–135.
• Anderson, J.A., and Rosenfeld, E. (eds.), 1988, Neurocomputing: Foundations of Research, Cambridge, MA: MIT Press.
• Atmanspacher, H., 2014, "20th century variants of dual-aspect thinking (with commentaries and replies)," Mind and Matter, 12: 245–288.
• Atmanspacher, H., and Fach, W., 2013, "A structural-phenomenological typology of mind-matter correlations," Journal of Analytical Psychology, 58: 218–243.
• –––, 2019, "Exceptional experiences of stable and unstable mental states, understood from a dual-aspect point of view," Philosophies, 4(1): 7.
• Atmanspacher, H., and Filk, T., 2006, "Complexity and non-commutativity of learning operations on graphs," BioSystems, 85: 84–93.
• –––, 2010, "A proposed test of temporal nonlocality in bistable perception," Journal of Mathematical Psychology, 54: 314–321.
• –––, 2013, "The Necker-Zeno model for bistable perception," Topics in Cognitive Science, 5: 800–817.
• –––, 2019, "Contextuality revisited – Signaling may differ from communicating," in Quanta and Mind, A. de Barros and C. Montemayor (eds.), Berlin: Springer.
• Atmanspacher, H., and Fuchs, C. (eds.), 2014, The Pauli-Jung Conjecture and Its Impact Today, Exeter: Imprint Academic.
• Atmanspacher, H., and beim Graben, P., 2007, "Contextual emergence of mental states from neurodynamics," Chaos and Complexity Letters, 2: 151–168.
• Atmanspacher, H., and Primas, H. (eds.), 2009, Recasting Reality. Wolfgang Pauli's Philosophical Ideas and Contemporary Science, Berlin: Springer.
• Atmanspacher, H., and Römer, H., 2012, "Order effects in sequential measurements of non-commuting psychological observables," Journal of Mathematical Psychology, 56: 274–280.
• Atmanspacher, H., Römer, H., and Walach, H., 2002, "Weak quantum theory: Complementarity and entanglement in physics and beyond," Foundations of Physics, 32: 379–406.
• Beck, F., and Eccles, J., 1992, "Quantum aspects of brain activity and the role of consciousness," Proceedings of the National Academy of Sciences of the USA, 89: 11357–11361.
• Beck, F., 2001, "Quantum brain dynamics and consciousness," in The Physical Nature of Consciousness, P. van Loocke (ed.), Amsterdam: Benjamins, pp. 83–116.
• beim Graben, P., Filk, T., and Atmanspacher, H., 2013, "Epistemic entanglement due to non-generating partitions of classical dynamical systems," International Journal of Theoretical Physics, 52: 723–734.
• Bohm, D., 1990, "A new theory of the relationship of mind and matter," Philosophical Psychology, 3: 271–286.
• Bohm, D., and Hiley, B.J., 1993, The Undivided Universe, London: Routledge. See Chap. 15.
• Briegel, H.-J., 2012, "On creative machines and the physical origins of freedom," Scientific Reports, 2: 522.
• Briegel, H.-J., and Müller, T., 2015, "A chance for attributable agency," Minds and Machines, 25: 261–279.
• Brukner, C., and Zeilinger, A., 2003, "Information and fundamental elements of the structure of quantum theory," in Time, Quantum and Information, L. Castell and O. Ischebeck (eds.), Berlin: Springer, pp. 323–355.
• Bruza, P.D., Kitto, K., Ramm, B.R., and Sitbon, L., 2015, "A probabilistic framework for analysing the compositionality of conceptual combinations," Journal of Mathematical Psychology, 67: 26–38.
• Burdick, R.K., Villabona-Monsalve, J.P., Mashour, G.A., and Goodson, T. III, 2019, "Modern anesthetic ethers demonstrate quantum interactions with entangled photons," Scientific Reports, 9: 11351.
• Busemeyer, J.R., and Bruza, P.D., 2012, Quantum Models of Cognition and Decision, Cambridge: Cambridge University Press.
• Busemeyer, J.R., and Wang, Z., 2017, "Is there a problem with quantum models of psychological measurements?," PLoS ONE, 12(11): e0187733.
• –––, 2018, "Hilbert space multidimensional theory," Psychological Review, 125: 572–591.
• Busemeyer, J.R., Wang, Z., and Townsend, J.T., 2006, "Quantum dynamics of human decision making," Journal of Mathematical Psychology, 50: 220–241.
• Busemeyer, J.R., Pothos, E., Franco, R., and Trueblood, J.S., 2011, "A quantum theoretical explanation for probability judgment errors," Psychological Review, 118: 193–218.
• Butterfield, J., 1998, "Quantum curiosities of psychophysics," in Consciousness and Human Identity, J. Cornwell (ed.), Oxford: Oxford University Press, pp. 122–157.
• Cervantes, V.H., and Dzhafarov, E.N., 2018, "Snow Queen is evil and beautiful: Experimental evidence for probabilistic contextuality in human choices," Decision, 5: 193–204.
• Chalmers, D., 1995, "Facing up to the problem of consciousness," Journal of Consciousness Studies, 2(3): 200–219.
• –––, 1996, The Conscious Mind, Oxford: Oxford University Press.
• Clifton, R., Bub, J., and Halvorson, H., 2003, "Characterizing quantum theory in terms of information-theoretic constraints," Foundations of Physics, 33: 1561–1591.
• Craddock, T.J.A., Hameroff, S.R., Ayoub, A.T., Klobukowski, M., and Tuszynski, J.A., 2015, "Anesthetics act in quantum channels in brain microtubules to prevent consciousness," Current Topics in Medicinal Chemistry, 15: 523–533.
• Craddock, T.J.A., Kurian, P., Preto, J., Sahu, K., Hameroff, S.R., Klobukowski, M., and Tuszynski, J.A., 2017, "Anesthetic alterations of collective terahertz oscillations in tubulin correlate with clinical potency: Implications for anesthetic action and post-operative cognitive dysfunction," Scientific Reports, 7: 9877.
• Cucu, A.C., and Pitts, J.B., 2019, "How dualists should (not) respond to the objection from energy conservation," Mind and Matter, 17: 95–121.
• de Gosson, M.A., and Hiley, B., 2013, "Hamiltonian flows and the holomovement," Mind and Matter, 11: 205–221.
• d'Espagnat, B., 1999, "Concepts of reality," in On Quanta, Mind, and Matter, H. Atmanspacher, U. Müller-Herold, and A. Amann (eds.), Dordrecht: Kluwer, pp. 249–270.
• Dzhafarov, E.N., and Kujala, J.V., 2013, "All-possible-couplings approach to measuring probabilistic context," PLoS ONE, 8(5): e61712.
• Ellis, G.F.R., Noble, D., and O'Connor, T. (eds.), 2011, Top-Down Causation: An Integrating Theme Within and Across the Sciences?, Special Issue of Interface Focus 2(1).
• Esfeld, M., 1999, "Wigner's view of physical reality," Studies in History and Philosophy of Modern Physics, 30B: 145–154.
• Fechner, G., 1861, Über die Seelenfrage. Ein Gang durch die sichtbare Welt, um die unsichtbare zu finden, Leipzig: Amelang. Second edition: Hamburg: Leopold Voss, 1907. Reprinted Eschborn: Klotz, 1992.
• Feigl, H., 1967, The 'Mental' and the 'Physical', Minneapolis: University of Minnesota Press.
• Filk, T., and von Müller, A., 2009, "Quantum physics and consciousness: The quest for a common conceptual foundation," Mind and Matter, 7(1): 59–79.
• Fisher, M.P.A., 2015, "Quantum cognition: The possibility of processing with nuclear spins in the brain," Annals of Physics, 362: 593–602.
• –––, 2017, "Are we quantum computers, or merely clever robots?" Asia Pacific Physics Newsletter, 6(1): 39–45.
• Flohr, H., 2000, "NMDA receptor-mediated computational processes and phenomenal consciousness," in Neural Correlates of Consciousness. Empirical and Conceptual Questions, T. Metzinger (ed.), Cambridge: MIT Press, pp. 245–258.
• Franck, G., 2004, "Mental presence and the temporal present," in Brain and Being, G.G. Globus, K.H. Pribram, and G. Vitiello (eds.), Amsterdam: Benjamins, pp. 47–68.
• –––, 2008, "Presence and reality: An option to specify panpsychism?" Mind and Matter, 6(1): 123–140.
• Freeman, W.J., and Quian Quiroga, R., 2013, Imaging Brain Function with EEG, Berlin: Springer.
• Freeman, W.J., and Vitiello, G., 2006, "Nonlinear brain dynamics as macroscopic manifestation of underlying many-body field dynamics," Physics of Life Reviews, 3(2): 93–118.
• –––, 2008, "Dissipation and spontaneous symmetry breaking in brain dynamics," Journal of Physics A, 41: 304042.
• –––, 2010, "Vortices in brain waves," International Journal of Modern Physics B, 24: 3269–3295.
• –––, 2016, "Matter and mind are entangled in two streams of images guiding behavior and informing the subject through awareness," Mind and Matter, 14: 7–25.
• Fröhlich, H., 1968, "Long range coherence and energy storage in biological systems," International Journal of Quantum Chemistry, 2: 641–649.
• Fuchs, C.A., 2002, "Quantum mechanics as quantum information (and only a little more)," in Quantum Theory: Reconsideration of Foundations, A. Yu. Khrennikov (ed.), Växjö: Växjö University Press, pp. 463–543.
• Gabora, L., and Aerts, D., 2002, "Contextualizing concepts using a mathematical generalization of the quantum formalism," Journal of Experimental and Theoretical Artificial Intelligence, 14: 327–358.
• –––, 2009, "A model of the emergence and evolution of integrated worldviews," Journal of Mathematical Psychology, 53: 434–451.
• Grush, R., and Churchland, P.S., 1995, "Gaps in Penrose's toilings," Journal of Consciousness Studies, 2(1): 10–29. (See also the response by R. Penrose and S. Hameroff in Journal of Consciousness Studies 2(2) (1995): 98–111.)
• Hagan, S., Hameroff, S.R., and Tuszynski, J.A., 2002, "Quantum computation in brain microtubules: Decoherence and biological feasibility," Physical Review E, 65: 061901-1 to -11.
• Hameroff, S.R., and Penrose, R., 1996, "Conscious events as orchestrated spacetime selections," Journal of Consciousness Studies, 3(1): 36–53.
• –––, 2014, "Consciousness in the universe: A review of the Orch OR theory (with commentaries and replies)," Physics of Life Reviews, 11: 39–112.
• Hartmann, L., Dür, W., and Briegel, H.-J., 2006, "Steady state entanglement in open and noisy quantum systems at high temperature," Physical Review A, 74: 052304.
• Haven, E., and Khrennikov, A.Yu., 2013, Quantum Social Science, Cambridge: Cambridge University Press.
• Heisenberg, W., 1958, Physics and Philosophy, New York: Harper and Row.
• Hepp, K., 1999, "Toward the demolition of a computational quantum brain," in Quantum Future, P. Blanchard and A. Jadczyk (eds.), Berlin: Springer, pp. 92–104.
• Hiley, B.J., 2001, "Non-commutative geometry, the Bohm interpretation and the mind-matter relationship," in Computing Anticipatory Systems—CASYS 2000, D. Dubois (ed.), Berlin: Springer, pp. 77–88.
• Holton, G., 1970, "The roots of complementarity," Daedalus, 99: 1015–1055.
• Huelga, S.F., and Plenio, M.B., 2013, "Vibrations, quanta, and biology," Contemporary Physics, 54: 181–207.
• James, W., 1950 [1890], The Principles of Psychology (Volume 1), New York: Dover; originally published in 1890.
• Jammer, M., 1974, The Philosophy of Quantum Mechanics, New York: Wiley.
• Jibu, M., and Yasue, K., 1995, Quantum Brain Dynamics and Consciousness, Amsterdam: Benjamins.
• Jortner, J., 1976, "Temperature dependent activation energy for electron transfer between biological molecules," Journal of Chemical Physics, 64: 4860–4867.
• Jung, C.G., and Pauli, W., 1955, The Interpretation of Nature and the Psyche, New York: Pantheon. Translated by P. Silz. German original Naturerklärung und Psyche, Zürich: Rascher, 1952.
• Kandel, E.R., Schwartz, J.H., and Jessell, T.M., 2000, Principles of Neural Science, New York: McGraw Hill.
• Kane, R., 1996, The Significance of Free Will, Oxford: Oxford University Press.
• Kaneko, K., and Tsuda, I., 2000, Chaos and Beyond, Berlin: Springer.
• Khrennikov, A.Yu., 1999, "Classical and quantum mechanics on information spaces with applications to cognitive, psychological, social and anomalous phenomena," Foundations of Physics, 29: 1065–1098.
• Kim, J., 1998, Mind in a Physical World, Cambridge, MA: MIT Press. • Krug, J.T., A.K. Klein, E.M. Purvis, K. Ayala, M.S. Mayes, L. Collins, M.P.A. Fisher, and A. Ettenberg, 2019, “Effects of chronic lithium exposure in a modified rodent ketamine-induced hyperactivity model of mania,” Pharmacology, Biochemistry and Behavior, 179: 150–156. • Kuhn, A., Aertsen, A., and Rotter, S., 2004, “Neuronal integration of synaptic input in the fluctuation-driven regime,” Journal of Neuroscience, 24: 2345–2356. • Li, J., and Paraoanu, G.S., 2009, “Generation and propagation of entanglement in driven coupled-qubit systems,” New Journal of Physics, 11: 113020. • Li, N., Lu, D., Yang, L., Tao, H., Xu, Y., Wang, C., Fu, L., Liu, H., Chummum, Y., and Zhang, S., 2018: “Nuclear spin attenuates the anesthetic potency of xenon isotopes in mice: Implications for the mechanisms of anesthesia and consciousness,”. Anesthesiology, 129: 271–277. • London, F., and Bauer, E., 1939, La théorie de l’observation en mécanique quantique, Paris: Hermann; English translation, “The theory of observation in quantum mechanics,” in Quantum Theory and Measurement, J.A. Wheeler and W.H. Zurek (eds.), Princeton: Princeton University Press, 1983, pp. 217–259. • Mahler, G., 2015, “Temporal non-locality: Fact or fiction?,” in Quantum Interaction. 8th International Conference, H. Atmanspacher, et al. (eds.), Berlin: Springer, pp. 243–254. • Marcus, R.A., 1956, “On the theory of oxydation-reduction reactions involving electron transfer,” Journal of Chemical Physics, 24: 966–978. • Margenau, H., 1950, The Nature of Physical Reality, New York: McGraw Hill. • Meier, C.A., 1975, “Psychosomatik in Jungscher Sicht,” in Experiment und Symbol, C.A. Meier (ed.), Olten: Walter Verlag, pp. 138–156. • ––– (eds.), 2001, Atom and Archetype: The Pauli/Jung Letters 1932–1958, Princeton University Press, Princeton. Translated by D. Roscoe. German original Wolfgang Pauli und C.G. Jung: ein Briefwechsel, Berlin: Springer, 1992. • Müller, T., and Briegel, H.-J., 2018, “A stochastic process model for free agency under indeterminism,” Dialectica, 72: 219–252. • Nagel, T., 1974, “What is it like to be a bat?,” The Philosophical Review, LXXXIII: 435–450. • Neumann, J. von, 1955, Mathematical Foundations of Quantum Mechanics, Princeton University Press, Princeton. German original Die mathematischen Grundlagen der Quantenmechanik, Berlin: Springer, 1932. • Noë, A., and Thompson, E., 2004, “Are there neural correlates of consciousness? (with commentaries and replies),” Journal of Consciousness Studies, 11: 3–98. • Oizumi, M., Albantakis, L., and Tononi, G., 2014, “From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0,” PLoS Computational Biology, 10(5): e1003588. • Paparo, G.D., Dunjko, V., Makmal, A., Martin-Delgado, M.A., and Briegel, H.-J., 2012, “Quantum speedup for active learning agents,” Physical Review X, 4: 031002. • Papaseit, C., Pochon, N., and Tabony, J., 2000, “Microtubule self-organization is gravity-dependent,” Proceedings of the National Academy of Sciences of the USA, 97: 8364–8368. • Pauen, M., 2001, Grundprobleme der Philosophie des Geistes, Frankfurt, Fischer. • Penrose, R., 1989, The Emperor’s New Mind, Oxford: Oxford University Press. • –––, 1994, Shadows of the Mind, Oxford: Oxford University Press. • Penrose, R., and Hameroff, S., 1995, “What gaps? Reply to Grush and Churchland,” Journal of Consciousness Studies, 2(2): 98–111. 
• Pessa, E., and Vitiello, G., 2003, “Quantum noise, entanglement and chaos in the quantum field theory of mind/brain states,” Mind and Matter, 1: 59–79. • Pitkänen, M., 2014, “New results about microtubuli as quantum systems,” Journal of Nonlocality, 3(1): available online. • Popescu, S., 2014, “Nonlocality beyind quantum mechanics, ” Nature Physics, 10 (April): 264–270. • Popescu, S., and Rohrlich, D., 1994, “Nonlocality as an axiom,” Foundations of Physics, 24: 379–385. • Popper, K.R., and Eccles, J.C., 1977, The Self and Its Brain, Berlin: Springer. • Pothos, E.M., and Busemeyer, J.R., 2009, “A quantum probability model explanation for violations of rational decision theory,” Proceedings of the Royal Society B, 276: 2171–2178. • –––, 2013, “Can quantum probability provide a new direction for cognitive modeling?” Behavioral and Brain Sciences, 36: 255–274. • Pothos, E.M., Busemeyer, J.R., and Trueblood, J.S., 2013, “A quantum geometric model of similarity,” Psychological Review, 120: 679–696. • Pribram, K., 1971, Languages of the Brain, Englewood Cliffs: Prentice-Hall. • Primas, H., 2002, “Hidden determinism, probability, and time’s arrow,” in Between Chance and Choice, H. Atmanspacher and R.C. Bishop (eds.), Exeter: Imprint Academic, pp. 89–113. • –––, 2003, “Time-entanglement between mind and matter,” Mind and Matter, 1: 81–119. • –––, 2007, “Non-Boolean descriptions for mind-matter systems,” Mind and Matter, 5(1): 7–44. • –––, 2009, “Complementarity of mind and matter,” in Recasting Reality, H. Atmanspacher and H. Primas (eds.), Berlin: Springer, pp. 171–209. • –––, 2017, Knowledge and Time, Berlin: Springer. • Pylkkänen, P., 2015, “Fundamental physics and the mind – Is there a connection?,” in Quantum Interaction. 8th International Conference, H. Atmanspacher, et al. (eds.), Berlin: Springer, pp. 3–11. • Ricciardi, L.M., and Umezawa, H., 1967, “Brain and physics of many-body problems,” Kybernetik, 4: 44–48. • Sabbadini, S.A., and Vitiello, G., 2019, “Entanglement and phase-mediated correlations in quantum field theory. Application to brain-mind states,” Applied Sciences, 9: 3203. • Sahu, S., Ghosh, S., Hirata, K., Fujita, D., and Bandyopadhyay, A., 2013, “Multi-level memory-switching properties of a single brain microtubule,” Applied Physics Letters, 102: 123701. • Schwartz, J.M., Stapp, H.P., and Beauregard, M., 2005, “Quantum theory in neuroscience and psychology: a neurophysical model of mind/brain interaction,” Philosophical Transactions of the Royal Society B, 360: 1309–1327. • Schwarz, N., and Sudman, S. (eds.), 1992, Context Effects in Social and Psychological Research, Berlin: Springer. • Sechzer, J. A., K. W. Lieberman, G. J. Alexander, D. Weidman, and P. E. Stokes, 1986, “Aberrant parenting and delayed offspring development in rats exposed to lithium,” Biological Psychiatry, 21: 1258–1266. • Shimony, A., 1963, “Role of the observer in quantum theory,” American Journal of Physics, 31: 755–773. • Skrbina, D., 2003, “Panpsychism in Western philosophy,” Journal of Consciousness Studies, 10(3): 4–46. • Smart, J.J.C., 1963, Philosophy and Scientific Realism, London: Routledge & Kegan Paul. • Spencer, Brown G., 1969, Laws of Form, London: George Allen and Unwin. • Squires, E., 1990, Conscious Mind in the Physical World, Bristol: Adam Hilger. • Stapp, H.P., 1993, “A quantum theory of the mind-brain interface,” in Mind, Matter, and Quantum Mechanics, Berlin: Springer, pp. 145–172. 
• –––, 1999, “Attention, intention, and will in quantum physics,” Journal of Consciousness Studies, 6(8/9): 143–164. • –––, 2006,“Clarifications and specifications. Conversation with Harald Atmanspacher,” Journal of Consciousness Studies, 13(9): 67–85. • –––, 2007, Mindful Universe, Berlin: Springer. • –––, 2015, “A quantum-mechanical theory of the mind-brain connection,” in Beyond Physicalism, E.F. Kelly et al. (eds.), Lanham: Rowman and Littlefield, pp. 157–193. • Stephan, A., 1999, Emergenz, Dresden: Dresden University Press. • Strawson, G., 2003, “Real materialism,” in Chomsky and His Critics, L. Anthony and N. Hornstein (eds.), Oxford: Blackwell, pp. 49–88. • Stuart, C.I.J., Takahashi, Y., and Umezawa, H., 1978, “On the stability and non-local properties of memory,” Journal of Theoretical Biology, 71: 605–618. • –––, 1979, “Mixed system brain dynamics: neural memory as a macroscopic ordered state,” Foundations of Physics, 9: 301–327. • Suarez, A., and Adams, P. (eds.), 2013, Is Science Compatible with Free Will?, Berlin: Springer. • Tegmark, M., 2000, “Importance of quantum decoherence in brain processes,” Physical Review E 61, 4194–4206. • Tononi, G., and Koch, C., 2015, “Consciousness: Here, there and everywhere?” Philosophical Transactions of the Royal Society B, 370: 20140167. • Tversky, A., 1977, “Features of similarity,” Psychological Review, 84: 327–352. • Tversky, A., and Shafir, E., 1992, “The disjunction effect in choice under uncertainty,” Psychological Science, 3: 305–309. • Velmans, M., 2002, “How could conscious experiences affect brains?” Journal of Consciousness Studies, 9(11): 3–29. Commentaries to this article by various authors and Velman’s response in the same issue, pp. 30–95. See also Journal of Consciousness Studies, 10(12): 24–61 (2003), for the continuation of the debate. • –––, 2009, Understanding Consciousness, Routledge, London. • Vitiello, G., 1995, “Dissipation and memory capacity in the quantum brain model,” International Journal of Modern Physics B, 9: 973–989. • –––, 2001, My Double Unveiled, Amsterdam: Benjamins. • –––, 2002, “Dissipative quantum brain dynamics,” in No Matter, Never Mind, K. Yasue, M. Jibu, and T. Della Senta (eds.), Amsterdam: Benjamins, pp. 43–61. • –––, 2012, “Fractals as macroscopic manifestation of squeezed coherent states and brain dynamics,” Journal of Physics, 380: 012021. • –––, 2015, “The use of many-body physics and thermodynamics to describe the dynamics of rhythmic generators in sensory cortices engaged in memory and learning,” Current Opinion in Neurobiology, 31: 7–12. • Wang, Z., Busemeyer, J., Atmanspacher, H., and Pothos, E., 2013, “The potential of quantum theory to build models of cognition,” Topics in Cognitive Science, 5: 672–688. • Wang, Z., Solloway, T., Shiffrin, R.M., and Busemeyer, J.R., 2014, “Context effects produced by question orders reveal quantum nature of human judgments,” Proceedings of the National Academy of Sciences of the USA, 111: 9431–9436. • Weizsäcker, C.F. von, 1985, Aufbau der Physik, München: Hanser. • Wendt, A., 2015, Quantum Mind and Social Science, Cambridge: Cambridge University Press. • Wheeler, J.A., 1994, “It from bit,” in At Home in the Universe, Woodbury: American Institute of Physics, pp. 295–311, references pp. 127–133. • Whitehead, A.N., 1978, Process and Reality, New York: Free Press. • Wigner, E.P., 1967, “Remarks on the mind-body question,” in Symmetries and Reflections, Bloomington: Indiana University Press, pp. 171–184. 
• –––, 1977, “Physics and its relation to human knowledge,” Hellenike Anthropostike Heaireia, Athens, pp. 283–294. Reprinted in Wigner’s Collected Works Vol. VI, edited by J. Mehra, Berlin: Springer, 1995, pp. 584–593. • Wittek, P., 2014, Quantum Machine Learning: What Quantum Computing Means to Data Mining, New York: Academic Press. • Wundt, W., 1911, Grundzüge der physiologischen Psychologie, dritter Band, Leipzig: Wilhelm Engelmann. Other Internet Resources [Please contact the author with suggestions.] Inspiring discussions on numerous topics treated in this paper with Guido Bacciagaluppi, Thomas Filk, Hans Flohr, Stuart Hameroff, Hans Primas, Stefan Rotter, Henry Stapp, Giuseppe Vitiello, and Max Velmans are gratefully acknowledged. Copyright © 2020 by Harald Atmanspacher <atmanspacher@collegium.ethz.ch> The Encyclopedia Now Needs Your Support Please Read How You Can Help Keep the Encyclopedia Free
Statistical physics

Statistical physics is a branch of physics that evolved from a foundation of statistical mechanics, which uses methods of probability theory and statistics, and particularly the mathematical tools for dealing with large populations and approximations, in solving physical problems. It can describe a wide variety of fields with an inherently stochastic nature. Its applications include many problems in the fields of physics, biology, chemistry, and neuroscience. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.[1]

Statistical mechanics develops the phenomenological results of thermodynamics from a probabilistic examination of the underlying microscopic systems. Historically, one of the first topics in physics where statistical methods were applied was the field of classical mechanics, which is concerned with the motion of particles or objects when subjected to a force.

Statistical physics explains and quantitatively describes superconductivity, superfluidity, turbulence, collective phenomena in solids and plasmas, and the structural features of liquids. It underlies modern astrophysics. In solid state physics, statistical physics aids the study of liquid crystals, phase transitions, and critical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons, X-rays, visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases).

Statistical mechanics

Statistical mechanics provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk properties of materials that can be observed in everyday life, therefore explaining thermodynamics as a natural result of statistics, classical mechanics, and quantum mechanics at the microscopic level. Because of this history, statistical physics is often considered synonymous with statistical mechanics or statistical thermodynamics.[note 1]

One of the most important equations in statistical mechanics (akin to F = ma in Newtonian mechanics, or the Schrödinger equation in quantum mechanics) is the definition of the partition function Z, which is essentially a weighted sum of all possible states s available to a system:

Z = Σ_s e^(−E_s / (k_B T))

where k_B is the Boltzmann constant, T is the temperature, and E_s is the energy of state s. Furthermore, the probability of a given state, s, occurring is given by

P_s = e^(−E_s / (k_B T)) / Z.

Here we see that very-high-energy states have little probability of occurring, a result that is consistent with intuition. For instance, a two-level system with energies 0 and ε has Z = 1 + e^(−ε/(k_B T)), so the excited state is occupied with probability e^(−ε/(k_B T)) / (1 + e^(−ε/(k_B T))), which vanishes as T → 0 and approaches 1/2 at high temperature.

A statistical approach can work well in classical systems when the number of degrees of freedom (and so the number of variables) is so large that an exact solution is not possible, or not really useful. Statistical mechanics can also describe work in non-linear dynamics, chaos theory, thermal physics, fluid dynamics (particularly at high Knudsen numbers), and plasma physics.

Monte Carlo method

Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system.
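To make the Monte Carlo idea concrete, here is a minimal sketch of Metropolis sampling for a small 2D Ising model (in Python; the lattice size, temperature, and step count are toy values chosen for illustration, not anything prescribed by the article):

```python
import numpy as np

# Minimal Metropolis Monte Carlo for a 2D Ising model.
# Configurations are sampled with Boltzmann weight e^(-E / (k_B T));
# we work in units where k_B = 1 and the coupling J = 1.
rng = np.random.default_rng(0)
L, T, steps = 16, 2.0, 200_000
spins = rng.choice([-1, 1], size=(L, L))

def site_energy(s, i, j):
    # Energy of spin (i, j) from its four nearest neighbors (periodic boundaries).
    return -s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                       + s[i, (j + 1) % L] + s[i, (j - 1) % L])

mags = []
for step in range(steps):
    i, j = rng.integers(L), rng.integers(L)
    dE = -2 * site_energy(spins, i, j)           # energy change if spin (i, j) flips
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1                        # accept the flip (Metropolis rule)
    if step % 100 == 0:
        mags.append(abs(spins.mean()))

# Discard the first half as burn-in and estimate the mean |magnetization|.
print("mean |magnetization| ~", np.mean(mags[len(mags) // 2:]))
```

At T = 2.0 (below the critical temperature T_c ≈ 2.27 for this model) the estimate should come out well above zero, illustrating how sampling, rather than exact solution, reveals the aggregate behavior of a system with far too many configurations to enumerate.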
Monte Carlo methods are important in computational physics, physical chemistry, and related fields, and have diverse applications including medical physics, where they are used to model radiation transport for radiation dosimetry calculations.[2][3][4]

Notes
1. ^ This article presents a broader sense of the definition of statistical physics.

References
1. ^ Huang, Kerson (2009-09-21). Introduction to Statistical Physics (2nd ed.). CRC Press. p. 15. ISBN 978-1-4200-7902-9.

Further reading
• Reif, F. (2009). Fundamentals of Statistical and Thermal Physics. Waveland Press. ISBN 978-1-4786-1005-2.
• Müller-Kirsten, Harald J.W. (2013). Basics of Statistical Physics (2nd ed.). World Scientific. doi:10.1142/8709. ISBN 978-981-4449-55-7.
• Kadanoff, Leo P. "Statistical Physics and other resources".
• Kadanoff, Leo P. (2000). Statistical Physics: Statics, Dynamics and Renormalization. World Scientific. ISBN 978-981-02-3764-6.
• Flamm, Dieter (1998). "History and outlook of statistical physics". arXiv:physics/9803005.
Saturday, June 12, 2021

New Website

I have a new personal website, and new blog posts will appear there. Please visit and let me know if you have any feedback about its format or readability. Thanks!

Tuesday, June 08, 2021

RQM and Molecular Composition

According to the last post, the constitution of complex natural systems should be understood using a theory of composite causal processes. Composite causal processes are formed from a pattern of discrete causal interactions among a group of smaller sub-processes. When the latter sustains a higher rate of in-group versus out-group interactions, they form a composite. While this account has intuitive appeal in the case of macroscopic systems, what about more basic building blocks of nature? Can the same approach work in the microscopic realm? In this post, I will make the case that it does, focusing on molecules. A key to reaching this conclusion will be the use of the conceptual resources of relational quantum mechanics (RQM).

Background: The Problem of Molecular Structure

In approaching the question of molecular composition, we need to reckon with a long-standing problem regarding how the structure of molecules—the spatial organization of component atoms we are all familiar with from chemistry—relates to quantum theory.[1] Modern chemistry uses QM models to successfully calculate the values of molecular properties: one starts by solving for the molecular wave function and associated energies using the time-independent Schrödinger equation Ĥψ = Eψ.[2] But there are several issues in connecting the quantum formalism to molecular structure.

First and most simply, the quantum description of a multiple-particle system does not “reside” in space at all. The wave function assigns (complex) numbers to points in a multi-dimensional configuration space (3N dimensions, where N is the number of particles in the system). How do we get from this to a spatially organized molecule?

In addition to this puzzle, some of the methods used to estimate ψ in practice raise additional issues. Something to keep in mind in what follows is that multi-particle atomic and molecular wave equations are generally computationally intractable. So, simplifying assumptions of some sort will always be needed. One important strategy normally used is to assume that the nuclei are stationary in space, and then proceed to estimate the electronic wave function.[3] Where do we get the particular configuration assumed for the nuclei in the case of a molecule? This is typically informed by experimental evidence, and/or candidates can be evaluated iteratively, seeking the lowest equilibrium energy configuration. I’ll discuss the implications of this assumption shortly.

Next, there are different techniques used to estimate the electronic wave function. For multi-electron atoms, one adds additional electrons using hydrogen-like wave functions (called orbitals) of increasing energy. Chemistry textbooks offer visualizations of these orbitals for various atoms, and we can form some intuitions for how they overlap to form bonded molecules (but strictly speaking, remember, the wave functions are not in 3D space). One approach to molecular wave functions uses hybrid orbitals based on these overlaps in its calculations.
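As a toy illustration of the clamped-nuclei procedure just described (a one-dimensional cartoon of my own, not a real quantum chemistry calculation: the soft-Coulomb potential, the grid, and the nuclear position are all assumptions made for the sketch), one can fix a nuclear position by hand, discretize Ĥψ = Eψ on a grid, and diagonalize the Hamiltonian for the electronic states:

```python
import numpy as np

# Toy 1D "clamped nucleus" model: one electron in a softened Coulomb well
# whose nuclear position R is fixed by hand (atomic units: hbar = m_e = 1).
n = 2000
x = np.linspace(-20.0, 20.0, n)
dx = x[1] - x[0]
R = 0.0                                    # the assumed (clamped) nuclear position
V = -1.0 / np.sqrt((x - R) ** 2 + 1.0)     # soft-Coulomb electron-nucleus attraction

# Finite-difference kinetic energy -(1/2) d^2/dx^2 as a tridiagonal matrix.
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), -1))

E, psi = np.linalg.eigh(H)                 # eigenstates, sorted by energy
rho = psi[:, 0] ** 2 / dx                  # ground-state electron density
print("ground-state energy:", E[0])
print("density integrates to:", np.trapz(rho, x))   # ~1 electron
```

The thing to notice is that R enters as an input: the electronic problem is solved around a geometry that was assumed, not derived, which is exactly the point made above about where molecular structure comes from in practice.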
Another approach skips the hybrid-orbital process and just proceeds by incrementally adding the requisite electrons to orbitals calculated for the whole molecule at once.[4] In this method, the notion of localized atoms linked by bonds is much more elusive, but this intuitive departure interestingly has no impact on the effectiveness of the calculation method (this method is frequently more efficient).

Once we have molecular wave functions, we have an estimate of energies and can derive other properties of interest. We can also use the wave function to calculate the electron density distribution for the system (usually designated by ρ): this gives the number of electrons one would expect to find at various spatial locations upon measurement. This is the counterpart of the process we use to probabilistically predict the outcome of a measurement for any quantum system by multiplying the wave function ψ by its complex conjugate ψ* (the Born rule). Interestingly, another popular technique quantum chemists (and condensed matter physicists) use to estimate electronic properties uses ρ instead of ψ as a starting point (called Density Functional Theory).[5]

Notably, the electron density seems to offer a more promising way to depict molecular structure in our familiar space, letting us visualize molecular shape, and pictures of these density distributions are also featured in textbooks. Theorists have also developed sophisticated ways to correlate features of ρ with chemical concepts, including bonding relationships.[6] However, here we still need to be careful in our interpretation: while ρ is a function that assigns numbers to points in our familiar 3D space, it should not be taken to represent an object simply located in space. I’ll have more to say about interpreting ρ below.

Still, this might all sound pretty good: we understand that the ball-and-stick molecules of our school days don’t actually exist, but we have ways to approximate the classical picture using the correct (quantum) physics. But this would be too quick—in particular, remember that in performing our physical calculations we put the most important ingredient of a molecule’s spatial structure in by hand! As mentioned above, the fixed nuclear spatial configuration was an assumption, not a derivation. If one tries to calculate wave functions for molecules from scratch with the appropriate number of nuclei and electrons, one does not recover the particular asymmetries that distinguish most polyatomic molecules and that are crucial for understanding their chemical behavior. This problem is often brought into focus by highlighting the many examples of molecules with the same atomic constituents (isomers) that differ crucially in their geometric structure (some even have the same bonding structure but different geometry). Molecular wave functions would generally not distinguish these from each other unless the configuration is brutely added as an assumption.

Getting from QM Models to Molecular Structure

So how does spatial molecular structure arise from a purely quantum world? It seems that two additional ingredients are needed. The first is to incorporate the role of intra- and extra-molecular interactions. The second is to go beyond the quantum formalism and incorporate an interpretation of quantum mechanics.

With regard to the first step, note that the discussion thus far has focused on quantum modeling of isolated molecules in equilibrium.
This is an idealization, since in the actual world molecules are constantly interacting with other systems in their environment, as well as always being subject to ongoing internal dynamics. Recognizing this, but staying within orthodox QM, there is research indicating that applications of decoherence theory can go some way to accounting for the emergence of molecular shape. Most of this work explores models featuring interactions between a molecule and an assumed environment. Recently, there has been some innovative research extending decoherence analysis to include consideration of the internal environment of the molecule (interaction between the electrons and the nuclei—see links in the footnote).[7] More work needs to be done, but there is definitely some prospect that the study of interactions within the QM-decoherence framework will shed light on how molecular structure comes about.

However, we can say already that decoherence will not solve the problem by itself.[8] It can go some way toward accounting for the suppression of interference and the emergence of classical-like states (“preferred pointer states”), but multiple possible configurations will remain. These, of course, also continue to be defined in the high-D configuration space context of QM. To fully account for the actual existence of particular observed structures in 3D space requires grappling with the question of interpreting QM. There is a 100-year-old debate centered on the problem of how definite values of a system’s properties are realized upon measurement when the formalism of QM would indicate the existence of a superposition of multiple possibilities (aka the “measurement problem”). Alexander Franklin & Vanessa Seifert have a new paper (preprint) that does an excellent job arguing that the problem of molecular structure is an instance of the measurement problem. It includes a brief look at how three common interpretations of QM (the Everett interpretation, Bohmian mechanics, and the spontaneous collapse approach) would address the issue. The authors do not conclude in this paper that the consideration of molecular structure has any bearing on deciding between rival QM interpretations. In contrast, I think the best interpretation is RQM in part because of the way it accounts for molecular structure: it does so in a way that also allows for these quantum systems to fit into an independently attractive general theory of how natural systems are composed (see the last post).

How RQM Explains Spatial Structure

To discuss how to approach the problem using RQM, let’s first return to the interpretation of the electron density distribution (ρ). As mentioned above, chemistry textbooks include pictures of ρ, and, because it is a function assigning (real) numbers to points in 3D space, there is a temptation to view ρ as depicting the molecule as a spatial object. The ability to construct an image of ρ for actual molecules using X-ray crystallography may encourage this as well. But viewing ρ as a static extended object in space is clearly inconsistent with its usual statistical meaning in a QM context. As an alternative intuition, textbooks will point out that if you imagine a repeated series of position measurements on the molecular electrons, then one can think of ρ as describing a time-extended pattern of these localizing “hits”. But this doesn’t give us a reason to think molecules have spatial structure in the absence of our interventions.
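To give that “pattern of hits” intuition something concrete (a cartoon of my own, not a model of real molecular dynamics), one can sample repeated position “measurements” from a known density (here a hydrogen-like 1s radial density) and watch the shape emerge only as an accumulated pattern:

```python
import numpy as np

# Cartoon of rho as a time-extended pattern of localizing "hits":
# sample repeated position "measurements" from a hydrogen-like 1s
# radial density, p(r) proportional to r^2 e^(-2r) (atomic units),
# and let the histogram of hits approximate the density.
rng = np.random.default_rng(1)
r_grid = np.linspace(0.0, 10.0, 5000)
p = r_grid**2 * np.exp(-2.0 * r_grid)
p /= p.sum()

hits = rng.choice(r_grid, size=50_000, p=p)    # the accumulated "hits"
hist, edges = np.histogram(hits, bins=50, range=(0, 10), density=True)
peak = edges[np.argmax(hist)]
print("most probable radius from accumulated hits ~", peak)   # ~1 bohr
```

No single hit has a shape; the shape is a feature of the time-extended pattern, which is the intuition the textbooks are gesturing at.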
To get structure in the absence of our interventions, we would want an interpretation that sees spatial localization as resulting from naturally occurring interactions involving a molecule’s internal and external environment (like those explored in decoherence models). We want to envision measurement-like interactions occurring whenever systems interact, without assuming human agents or macroscopic measuring devices need to be involved.

This is the picture envisioned by RQM.[9] It is a “democratic” interpretation, where the same rules apply universally. In particular, all interactions between physical systems are “measurement-like” for those systems directly involved. Assuming these interactions are fairly elastic (not disruptive) and relatively transitory, a molecule would naturally incur a pattern of localizing hits over time. These form its shape in 3D space.

It would be nice if we could take ρ, as usually estimated, to represent this shape, but this is technically problematic. Per RQM, the quantum formalism cannot be taken as offering an objective (“view from nowhere”) representation of a system. Both wave functions and interaction events are perspectival. So, strictly speaking, we cannot use ρ (derived from a particular ψ) to represent a pattern of hits resulting from interactions involving multiple partners. However, given a high level of stability in molecular properties across different contexts, I believe this view of ρ can still offer a useful approximation of what is happening. It gives a sense of how, given RQM, a molecule acquires a structure in 3D space as a result of a natural pattern of internal and environmental interactions.

Putting it All Together

What this conclusion also allows us to do is fit microscopic quantum systems into the broader framework discussed in the prior post, where patterns of discrete causal interactions are the raw material of composition. Like complex macroscopic systems, atoms and molecules are individuated by these patterns, and RQM offers a bridge from this causal account to our physical representations. Our usual QM models of atoms and molecules describe entangled composite systems, with details determined by the energy profiles of the constituents. Such models of isolated systems can be complemented by decoherence analyses involving additional systems in a theorized environment. RQM tells us that these models represent the systems from an external perspective, which co-exists side-by-side with another picture: the internal perspective. This is one that infers the occurrence of repeated measurement-like interactions among the constituents, a pattern that is also influenced in part by periodic measurement-like interactions with other systems in its neighborhood. The theory of composite causal processes connects with this latter perspective. The composition of atoms and molecules, like that of macroscopic systems, is based on a sustained pattern of causal interactions among sub-systems, occurring in a larger environmental context.

Stepping back, the causal process account presented in these last three posts certainly leaves a number of traditional ontological questions open. In part, this is because my starting point comes from the philosophy of scientific explanation. I believe the main virtue of this theory of a causal world-wide-web is that it can provide a unified underpinning for explanations across a wide range of disciplines, despite huge variation in research approaches and representational formats.
Scientific understanding is based on our grasp of these explanations, and uncovering a consistent causal framework that helps enable this achievement is a good way to approach ontology.

Bacciagaluppi, G. (2020). The Role of Decoherence in Quantum Mechanics. In E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 Edition).
Esser, S. (2019). The Quantum Theory of Atoms in Molecules and the Interactive Conception of Chemical Bonding. Philosophy of Science, 86(5), 1307-1317.
Franklin, A., & Seifert, V.A. (forthcoming). The Problem of Molecular Structure Just Is the Measurement Problem. The British Journal for the Philosophy of Science.
Mátyus, E. (2019). Pre-Born-Oppenheimer Molecular Structure Theory. Molecular Physics, 117(5), 590-609.
Weisberg, M., Needham, P., & Hendry, R. (2019). Philosophy of Chemistry. In E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2019 Edition).

[1] For background, see sections 4 and 6 of the Stanford Encyclopedia article “Philosophy of Chemistry”. Also, see the nice presentation of the problem of molecular structure in Franklin & Seifert (forthcoming) (preprint); this paper is discussed later in this post. For a perspective from a theoretical quantum chemist, see the recent paper from Edit Mátyus, which also features a good discussion of the background: Mátyus (2019) (preprint).
[2] Here ψ is the wave function, E is the energy, and Ĥ is the Hamiltonian operator appropriate for the system. For example, the Hamiltonian for an atom will contain a kinetic energy term and a potential energy term that is based on the electrostatic attraction between the electrons and the nucleus (along with repulsion between electrons).
[3] This assumption is justified by the vast difference in velocity between speedy electrons and the slower nuclei (an adiabatic approximation). For molecules, this is typically referred to as the “clamped nuclei” or Born-Oppenheimer approximation.
[4] These methods are known as the valence bond (VB) and molecular orbital (MO) techniques.
[5] The rationale behind DFT is that it can be demonstrated that for molecules the ground state energy and other properties can be derived directly from ρ (the Hohenberg-Kohn theorems). This kind of equivalence between ψ and its associated density is clearly not generally true for quantum systems, but in this case the existence of a minimum energy solution allows for the result to be established.
[6] Of particular note here is the Quantum Theory of Atoms in Molecules (QTAIM) research program, initiated by R.F.W. Bader. QTAIM finds links to bonding and other chemical features via a detailed topological analysis of ρ. I discuss this in a 2019 paper (preprint).
[7] For decoherence studies involving the external environment, see the references cited in section 3.2 of Mátyus (2019) (preprint). Two recent arXiv papers from Mátyus & Cassam-Chenaï explore the contribution of internal decoherence (see here and here).
[8] The present discussion is a specific instance of a more general point that now seems widely accepted in discussions of QM interpretations: decoherence helps explain why quantum interference effects are suppressed when systems interact with their environments, but it does not solve the quantum measurement problem (which seeks to understand why definite outcomes are observed upon measurement). See the excellent SEP article by Bacciagaluppi.
[9] For more, see my earlier post, which lists a number of good RQM references.
Monday, May 31, 2021

Composing Natural Systems

An interesting feature of Relational Quantum Mechanics (RQM) is its implication that discrete measurement-like interaction events are going on between natural systems (unobserved by us) all the time. It turns out that this offers a way to incorporate quantum phenomena into an attractive account of how smaller natural systems causally compose larger ones. In this post I will discuss the general approach, including a brief discussion of its implications for the ideas of reduction and emergence. In a follow-up post, I will discuss the quantum case in more detail with a focus on molecules.

Composite Causal Processes

The ontological framework I’m using (discussed in the last section of the prior post) is a modified version of Wesley Salmon’s causal process account (Salmon, 1984). The basic entities are called causal processes, and these comprise a network characterized by two dimensions of causation, called propagation and production. Propagation refers to the way an isolated causal process bears dispositions or propensities toward potential interactions with other processes, aka its disposition profile. Production refers to how these profiles are altered in causal interactions with each other (this is the mutual manifestation of the relevant dispositions). The entities and properties described by science correspond to features of this causal web. For example, an electron corresponds to a causal process, and its properties describe its dispositions to produce change in interactions with other systems.

Given this picture, we can go on to give an account of how composite causal processes are formed. What is exciting about the resulting view is that it can provide a framework for systems spanning the microscopic-macroscopic divide.

For background, I note that neither Salmon nor others who have explored causal process views provide a detailed account of composition. Recall that Salmon’s intent was to give a causal theory in service of underpinning scientific explanations. In this context, he did outline a pertinent distinction between etiological explanations and constitutive explanations. Etiological explanations trace the relevant preceding processes and interactions leading up to a phenomenon. A constitutive explanation, on the other hand, is one that cites the interactions and processes that compose the phenomenon:

A constitutive explanation is thoroughly causal, but it does not explain particular facts or general regularities in terms of causal antecedents. The explanation shows, instead, that the fact-to-be-explained is constituted by underlying causal mechanisms. (Salmon, 1984, 270)

However, while Salmon sketches how one would divide a causal network into etiological and constitutive elements, he doesn’t provide a recipe for marking off the boundaries that define which processes/interactions are “internal” to what is to be explained by the constitutive explanation (see Salmon 1984, p. 275).

Going beyond Salmon, and drawing on the work of others, we can offer an account of composition for causal processes. The key idea is to propose that a coherent structure at a higher scale arises from patterns of repeated interactions at a lower scale. We should pick out composite causal processes and their interactions by attending to such patterns at the lower scale.
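As a toy illustration of what attending to such patterns could look like (the interaction counts and the simple tally below are my own invention, not anything from Salmon or the authors cited next), one can record pairwise interaction counts and test whether a candidate group interacts more internally than externally:

```python
# Toy version of the composite criterion: a group of sub-processes forms a
# composite when its in-group interaction count exceeds its out-group count.
# The processes and counts here are invented purely for illustration.
interactions = {
    ("a", "b"): 40, ("a", "c"): 35, ("b", "c"): 50,   # a tightly knit trio
    ("a", "x"): 3, ("b", "y"): 5, ("c", "x"): 2,      # looser ties to outsiders
    ("x", "y"): 20,
}

def is_composite(group, interactions):
    """True if members of `group` interact more with each other than
    with processes outside the group."""
    inside = sum(n for pair, n in interactions.items() if set(pair) <= group)
    outside = sum(n for pair, n in interactions.items()
                  if len(set(pair) & group) == 1)
    return inside > outside

print(is_composite({"a", "b", "c"}, interactions))   # True: 125 in vs. 10 out
print(is_composite({"a", "x"}, interactions))        # False: 3 in vs. 97 out
```

Real cases would of course need rates over time, weighting, and nested groupings, but the sketch captures the bare in-group versus out-group comparison described here.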
In Herbert Simon’s discussion of complex systems, he notes that complexity often “takes the form of hierarchy (Simon, 1962, 468)” and notes the role interactions play in this context:

In hierarchic systems we can distinguish between interactions among subsystems, on the one hand, and the interactions within subsystems—that is, among the parts of those subsystems—on the other. (Simon, 1996, p.197, emphasis original)

The suggestion to take from this is that differential interaction rates give rise to a hierarchy of causal processes. When a group of processes interacts more with each other than with “outsiders,” it can form a composite. For example, a social group like a family or a business can be marked off from others (at a first approximation) by the differential intensity with which its members interact within vs. outside the group.

As part of his discussion of analyzing complex systems, Bill Wimsatt also explores the idea of decomposition based on interactions, i.e., breaking down a system into subsystems based on the relative strength of intra- vs extra-system interactions (Wimsatt, 2007, 184-6). And while he describes how different theoretical concerns lead us to utilize a variety of analytical strategies, Wimsatt makes it clear that patterns of causal connections are the ultimate basis for understanding complex systems:

Ontologically, one could take the primary working matter of the world to be causal relationships, which are connected to one another in a variety of ways—and together make up patterns of causal networks…Under some conditions, these networks are organized into larger patterns that comprise levels of organization (Wimsatt, 2007, 200, emphasis original).[1]

Wimsatt explains that levels of organization are “compositional levels”, characterized by hierarchical part-whole relations (201). This notion of composition includes not just the idea of parts, but of parts engaged in certain patterns of causal interactions, consistent with the approach to composite causal processes suggested above.

To summarize: a composite causal process consists of two or more sub-processes (the constituting group) that interact with a greater frequency than each does with other processes. Just like any causal process, a composite process carries its own disposition profile: here the pattern of interacting sub-processes accounts for how composite processes will themselves interact (what this means for the concepts of reduction and emergence will be discussed below).

Consider social groups again, perhaps taking the example of smaller, pre-industrial societies. Each may have its own distinctive dispositions to mutually interact with other, similarly sized groups (e.g., to share a resource, trade, or to engage in raids or battle). These would be composed from the dispositions of their constituent members as they are shaped in the course of structured patterns of in-group interaction. We can also envision that higher scale environmental interactions impact the evolution of the composite entity, but its stability is due to maintaining its characteristic higher-frequency internal processes.

Let me add a couple of further comments about composite processes. First, as already indicated, a group of constituting sub-processes may be themselves composite, allowing for a nested hierarchy. Second, the impact of larger scale external interactions can vary. Some may have negligible impact.
Other interactions (especially if regular in nature) can contribute to shaping the ongoing nature of the composite. At the other extreme, there will be some external interactions that could disrupt or destroy it. The persistence of a composite would seem to require a certain robustness in the internal interaction pattern of its components. Achieving stability (and the associated ability to propagate a characteristic higher scale disposition profile) may require the differential between intra-process and extra-process interactions to be particularly high, or else there may need to be a particular pattern to the repeated interactions. There will clearly be vague or boundary cases as well.

Why go to all this trouble of fairly abstract theorizing about a web of causal processes? Because this account fleshes out the notions that underwrite the causal explanations scientists formulate in a variety of domains. In the physical sciences, the entities in the familiar hierarchy, including atoms, molecules, and condensed matter, all correspond to composite causal processes. Of course, in physical models, what marks out a composite system might be described in a number of ways (for example, in terms of the relative strength of forces or energy-minimizing equilibrium configurations). But I argue this is consistent with the key being the relative frequency of recurring discrete interactions in-system vs. out-system. (This will be explored further in the companion post.)

In biology, the complexity of systems may sometimes defy the easy identification of the boundaries of composites. Also, a researcher’s explanatory aims will sometimes warrant taking different perspectives on phenomena. In these cases, scientists will describe theoretical entities that do not necessarily follow a simple quantitative accounting of intra-process vs. extra-process interactions. On the one hand, the cell provides a pretty clear paradigm case meeting the definition of a composite process. On the other hand, many organisms and groups of organisms present difficult cases that have given rise to a rich debate in the literature regarding biological individuality. Still, a causal account of constitution is a useful starting point, as noted here by Elliott Sober:

The individuality of organisms involves a distinction between self and other—between inside and outside. This distinction is defined by characteristic causal relations. Parts of the same organism influence each other in ways that differ from the way that outside entities influence the organism’s parts. (Sober, 1993, 150)

The way parts “influence each other”, of course, might involve considerations beyond a mere quantitative view of interactions, and connotes an entry point where theoretical concerns can create distance from the basic conception of the composite causal process. In a biological context, sub-processes and interactions related to survival and reproduction may, for example, receive disproportionate attention in creating boundaries around composite entities. Notably, Roberta Millstein has proposed a definition of a biological population based on just this kind of causal interaction-based concept (Millstein 2009).

It is also worth mentioning that constitutive explanations in science will rarely attempt to explain the entire entity. This would mean accounting for all of its causal properties (aka its entire dispositional profile) in terms of its interacting sub-processes.
It is more common for a scientific explanation to target one property corresponding to a behavior of interest (corresponding to one of many features of a disposition profile).

Reduction and Emergence

I want to make a few remarks about how this approach to composites sheds light on the topics of ontological reduction and emergence. In a nutshell, the causal composition model discussed here gives a straightforward account of these notions that sidesteps some common confusions and controversies, such as the “causal exclusion problem.”

When considering the relationship between phenomena characterized at larger scales and smaller ones, the key observation is that a larger entity’s properties do not depend only on the properties of smaller composing entities. They also depend on their pattern of interaction. This is in contrast to the usual static framing that posits a metaphysical relationship (whether expressed in terms of composition or “realization”) between higher-level properties and lower-level properties at some instant of time. This picture is conceptually confused (if taken seriously as opposed to being a deliberate simplifying idealization): there is no reason to think such synchronic relationships characterize our world.

Recall that, in the present account, a property describes a regular feature of the disposition profile of a causal process. A composite causal process is made up of a pattern of interacting sub-processes. The disposition profiles of the sub-processes are changing during these interactions: they are not static. The dispositions of the composite depend on this matrix of changing sub-processes. Note that both the forming of a higher-scale disposition and its manifestation in a higher-scale interaction take more time than the equivalents at the smaller scale. No composite entity or property exists at an instant: this is a fiction concocted by us to facilitate our understanding. Unfortunately, contemporary metaphysicians have taken this notion seriously. It is perhaps easiest to see the problem in the case of a biological system: nothing is literally “alive” at an instant of time. Living things are sustained by temporally extended processes. Less intuitively, the same is true of inanimate objects.

Emergence and reduction are clearer, unmysterious notions when based on this dynamic conception of the composition relationship. Properties of larger things “emerge” from the interacting group of smaller things. The “reduction base” includes the interaction pattern of the components and their (changing) properties. The exclusion problem says that since higher-level properties are realized by lower-level properties at any arbitrary instant of time, they cannot have causal force of their own (on pain of overdetermination). We can see why this is a pseudo-problem once a better understanding of composition is in place. Causal production occurs at multiple scales.

This take on reduction and emergence is obviously not unique to the causal process model discussed here. It is implied by any approach that recognizes that properties of composites depend on interacting parts. For example, Wimsatt discusses at some length how notions of reduction and emergence should be understood given his understanding of complex systems.
He offers a definition of reductive explanation that shows a similarity to the causal process view of constitutive explanation:

A reductive explanation of a behavior or a property of a system is one that shows it to be mechanistically explicable in terms of the properties of and interactions among the parts of the system. (Wimsatt, 2007, 275)

This approach to reductive explanation is perfectly consistent with a form of emergence, in the sense that the properties of the whole are intuitively “more than the sum of its parts (277).” The key idea here, again, is that composition includes the interactions between the parts. For comparison, Wimsatt introduces the notion of “aggregativity”, where the properties of the whole are “mere” aggregates of the properties of its parts. For this to happen, “the system property would have to depend on the parts’ properties in a very strongly atomistic manner, under all physically possible decompositions (277-280)”. He analyzes the conditions needed for this to occur and concludes they are nearly never met outside of the case of conserved quantities in (idealized) physical theories.

Simon had introduced similar notions, describing hypothetical idealized systems where there are no interactions between parts as “decomposable,” which are then contrasted to “nearly decomposable systems, in which the interactions among the subsystems are weak but not negligible (Simon, 1996, 197, emphasis original).” To highlight this distinguishing feature, Simon considers a boundary case: that of gases. Ideal gases, which assume interactions between molecules are negligible, are, for Simon, decomposable systems. In the causal process account, we would similarly point out that an ideal gas doesn’t have a clearly defined constituting group: the molecules do not have a characteristic pattern of interacting with each other at any greater frequency than they do with the external system (the container). An actual, non-ideal gas, on the other hand, with weak but non-negligible interactions between constituent molecules, would correspond to the idea of a composite causal process.

Some contemporary work in metaphysics, focused on dispositions/powers and their role in causation, has incorporated similar views about composition and emergence. Rani Lill Anjum and Stephen Mumford describe a “dynamic view” of emergence:

The idea is that emergent properties are sustained through the ongoing activity; that is, through the causal process of interaction of the parts. A static instantaneous constitution view wouldn't provide this (Anjum & Mumford 2017, 101)

In their view, higher scale properties are emergent because they depend on lower-level parts whose causal properties are undergoing transformation as they interact, consistent with the view discussed here. Most recently, R.D. Ingthorsson’s new book, while not discussing emergence and reduction explicitly, also presents a view of composition based on the causal interaction of parts which is in the same spirit (Ingthorsson, 2021, Ch. 6).

I think composite causal processes provide a good framework for understanding how natural systems are constituted. A puzzle for the view, however, might arise via its use of patterns of discrete causal interactions to define composites. How would this work in physics, where the forces binding together composites, such as the Coulomb (electrostatic) force, are continuous?
One possible answer is to point out that physical models employ idealizations, and claim their depictions can still correspond to the “deeper” ontological picture of causal processes. But I believe we can find a better and more comprehensive answer than this. To do so, we must look more carefully at physical accounts of nature’s building blocks, atoms and molecules, and see if we can uncover a correspondence with the causal theory. I think we can, assuming we utilize the RQM interpretation. This is the subject of the next post.

Anjum, R., & Mumford, S. (2017). Emergence and Demergence. In M. Paolini Paoletti & F. Orilia (Eds.), Philosophical and Scientific Perspectives on Downward Causation (pp. 92-109). New York: Routledge.
Ingthorsson, R.D. (2021). A Powerful Particulars View of Causation. New York: Routledge.
Millstein, R.L. (2009). Populations as Individuals. Biological Theory, 4(3), 267-273.
Salmon, W.C. (1984). Scientific Explanation and the Causal Structure of the World. Princeton: Princeton University Press.
Simon, H. (1962). The Architecture of Complexity. Proceedings of the American Philosophical Society, 106(6), 467-482.
Simon, H. (1996). The Sciences of the Artificial (3rd ed.). Cambridge, Massachusetts: MIT Press.
Sober, E. (1993). Philosophy of Biology. Boulder: Westview Press.
Wimsatt, W.C. (2007). Re-Engineering Philosophy for Limited Beings. Cambridge, Massachusetts: Harvard University Press.

[1] This passage goes on to mention other, less neat, network patterns: “Under somewhat different conditions they yield the kinds of systematic slices across which I have called perspectives. Under some conditions they are so richly connected that neither perspectives nor levels seem to capture their organization, and for this condition, I have coined the term causal thickets (Wimsatt, 2007, 200).”

Thursday, January 28, 2021

Why I Favor Relational Quantum Mechanics

I think Relational Quantum Mechanics (RQM), initially proposed by Carlo Rovelli, is the best interpretation of quantum mechanics.1 It is important to note right away, however, that I depart from Rovelli’s thinking in one important respect. He takes an anti-realist view of the wave function (or quantum state). As I will discuss below, I endorse a view that sees the wave function as representing something real (even if imperfectly and incompletely). There are two reasons I prefer RQM. First, I think it makes better sense of QM as a successful scientific endeavor compared to other interpretations. Second, it fits neatly with an attractive ontology for our world.

Quick Introduction

Orthodox or “textbook” QM features a closely knit family of mathematical models and recipes for their use. The models describe the state of a microscopic system characterized by certain physical quantities (typically given in the form of a wave function). The theory gives a formula for calculating how the system evolves in time (the Schrödinger equation). Notably, because of the nature of the mathematical formalism, one typically cannot ascribe definite values to the physical quantities of interest. However, QM also offers a procedure for calculating (probabilistically) the outcomes of particular measurements of these quantities.

The problem with taking orthodox QM as a universally applicable physical theory can be described in several ways (this is usually called the measurement problem). One simple way is to note an inconsistency arising from the presence of what appears to be two completely different kinds of interaction. In the absence of any interaction, a system evolves in time as described by the Schrödinger equation. But interactions are handled in two different ways.
On the one hand, we have the measurement process (utilizing the Born rule), that is, an interaction between the quantum system under investigation and a scientist’s experimental apparatus. On the other hand, we can also describe an interaction between two systems that are not subject to measurement. In the first kind of interaction, a definite value of a system’s physical quantity is found (we say the wave function of the system collapses). In the second kind of interaction, we represent two (or more) systems, previously considered isolated, as now correlated in a composite system (we say they become entangled). This system evolves in the same fashion as any isolated system. As such, the composite system may be in a superposition of states where no definite values for a given quantity can be ascribed.

In a nutshell, the RQM solution is to stipulate that a physical interaction is a measurement-style event. However, this is only true for those systems directly involved: the systems are merely entangled from the standpoint of other “third-party” systems. The appearance of two sorts of interaction arises from a difference in perspective. This is weird, of course, since particular values of the physical quantities revealed in an interaction are manifest only relative to the interaction partner(s) involved. They don’t exist in a fully objective way. All interpretations of QM ask us to accept something unintuitive or revisionary. This is the “ask” made by RQM.

Reason One: RQM Validates Quantum Theory as Successful Science

Before discussing the interpretation further, I can quickly outline a reason to prefer RQM to many competing approaches. This point is primarily a negative one. In contrast to other approaches, RQM is an interpretation that delivers a satisfying account of QM as a successful scientific theory: one that draws appropriate connections between the results of our experimental investigations and a meaningful picture of the world around us. I obviously won’t be doing a deep dive into all the options, but will quickly sketch why I think RQM is superior.

First, for a quick cut in the number of alternatives, I eliminate views that are merely pragmatic, or see QM as only describing what agents experience, believe, or know. I insist that alongside its other aims (such as prediction and practical control), a scientific theory should contribute to our understanding of nature. To do so, the theory should offer successful explanations of worldly phenomena, that is, ones that tell us (broadly speaking) what kind of things are out there and how they hang together. This means, in turn, that at least some of the elements of the mathematical models that we use should represent features of the world (allowing that the fidelity of any given representation is significantly constrained by reasons having to do with the aims of the scientist and the tools employed). I will outline in the next section of this post how I think this works in the case of RQM.

As for the remaining alternatives, I will limit the conversation to the three most prominent broadly realist approaches to thinking about QM: Everett-style interpretations, Bohmian mechanics, and objective collapse approaches, such as Ghirardi-Rimini-Weber (GRW) theory (the implied ontology of these approaches might be fleshed out in more than one way, but I will not pursue the details here). For these alternatives, a different issue rises to the fore.
An interpretation should not just consider how the features of formal QM models might correspond to reality. It should also respect the status of quantum theory as a hugely successful experimental science. Orthodox or "textbook" QM includes not just the mathematical formalism, but also the recipes for how it is used by investigators and how it connects to our experiences in the laboratory. And here is where I think Everettians and Bohmians in particular fall short.

Note first that all three of the alternative approaches depart from orthodox QM by adding to, subtracting from, or modifying its basic elements.2 GRW changes things by replacing the Schrödinger equation with a new formula that attempts to encompass both continuous evolution and the apparent collapse onto particular outcomes observed in measurement. Bohmian mechanics adds new elements to the picture by associating the quantum state with a configuration of particles in 3D space and adding a new guidance equation for them. Everettian approaches simply drop the measurement process and seek to reinterpret what is going on without it.

For the Everett framework in particular, I'm not sure the extent of its departure from orthodox QM is always appreciated. It is sometimes claimed to be the simplest version of QM, because it works by simply removing what is often seen as a problematic element of the theory. But in doing so it divorces QM from its basis in experimental practice. This is a drastic departure indeed. To see this, note that to endorse Everett is to conclude that the very experiments that prompted the development of QM and have repeatedly corroborated it over nearly a century are illusory. For the Everettian, to take one example, no experimental measurement of the spin of an electron has ever had or will ever have a particular outcome (all outcomes happen, even though we'll never perceive that). Bohmian mechanics also turns our experiments into fictions. For the Bohmian, there is actually no electron and no spin involved in the measurement of an electron's spin. Rather, there is an orchestrated movement of spinless point particles comprising the system and the laboratory (and the rest of the universe) into the correct spatial positions.

GRW-style approaches are different, in that they are testable alternatives to QM. Unfortunately, researchers have been gradually ruling them out as empirically adequate alternatives (see, e.g., Vinante, 2020). It is also worth noting that GRW distorts the usual interpretation of experimental results by stipulating that all collapses are in the position basis.

Unlike these approaches, RQM is truly an interpretation, rather than a modification, of orthodox QM, a successful theory that was motivated by experimental findings and is extremely well supported by decades of further testing. The measurement process, in particular, is not some problematic add-on to quantum theory; it is at the heart of it. Human beings and our experiences and interventions are part of the natural world. RQM does justice to this fact by explaining that measurements—the connections between quantum systems and ourselves—are just like any other physical interaction.

Reason Two: RQM Offers an Attractive Ontological Picture

Laudisa and Rovelli (in the SEP article) describe RQM's ontology as a "sparse" one, comprised of the relational interaction events between systems.
This event ontology has attractive features (akin to the "flash" ontology sometimes discussed in conjunction with objective collapse interpretations). There is no question of strange higher-dimensional spaces or other worlds: the events happen in spacetime. Also, one of the goals of science-inspired metaphysical work is to foster the potential unification of scientific theories. Importantly, a QM interpretation that features an event ontology offers at least the promise of establishing a rapport with relativity theory, which is typically seen as putting events in the leading role (see a recent discussion by Maccone, 2019).

But does giving this role to interaction events preclude a representational role for the wave function? Given that physical properties of systems only take definite values when these events occur, perhaps systems should not be accorded any reality apart from this context. And, in fact, Carlo Rovelli has consistently taken a hard anti-realist stance toward the wave function/quantum state. In his original presentation of RQM he gave it a role only as a record of information about one system from the point of view of another, and thought it was possible to reformulate quantum theory using an information-based framework. This conflicts with my insistence above that such anti-realism is inconsistent with the aims of a good scientific theory.

Thankfully, there is no need to follow Rovelli on this point. Instead, I concur with a view outlined recently by Mauro Dorato. He suggests that rather than view non-interacting systems as simply having no real properties, they can be characterized as having dispositions: "In other words, such systems S have intrinsic dispositions to correlate with other systems/observers O, which manifest themselves as the possession of definite properties q relative to those Os." (Dorato, 2016, 239; emphasis original)

As he points out, referencing ideas due to philosopher C. B. Martin, such manifestations only occur as mutual manifestations involving dispositions characterizing two or more systems.3 Since these manifestations have a probabilistic aspect to them, the dispositions might also be referred to as propensities. So, here the wave function has a representational role to play. It represents a system's propensities toward interaction with a specified partner system (or systems). The Schrödinger equation shows how these propensities evolve across time in the absence of interaction.

Now, it is true that the QM formalism does not offer a full or absolute accounting of a system's properties, given its relational limitations. But here we should recall that models across the sciences are typically incomplete and imperfect. In addition to employing approximations and idealizations, they approach phenomena from a certain perspective dictated by the nature of the research program. But we can say the wave function represents something real (if incompletely and in an idealized way). Reality has two aspects: non-interacting systems with propensities, and the interaction events that occur in spacetime.

The idea that properties are dispositional in nature is one that has been pursued increasingly by philosophers in recent years.
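To make this dispositional reading concrete, here is a hedged toy sketch (Python/NumPy; the encoding is an illustrative choice, not Dorato's formalism): relative to a specified partner or measurement context, the wave function yields a profile of probabilistic propensities, and there is no context-free list of definite values.

import numpy as np

def propensity_profile(psi, context_basis):
    """Born-rule propensities of state psi relative to a measurement context."""
    return {label: round(abs(np.vdot(vec, psi)) ** 2, 3)
            for label, vec in context_basis.items()}

# Two different partner systems "probe" the same qubit in different bases.
z_context = {"up_z": np.array([1, 0], dtype=complex),
             "down_z": np.array([0, 1], dtype=complex)}
x_context = {"up_x": np.array([1, 1], dtype=complex) / np.sqrt(2),
             "down_x": np.array([1, -1], dtype=complex) / np.sqrt(2)}

psi = np.array([1, 0], dtype=complex)  # definite relative to z, 50/50 relative to x
print(propensity_profile(psi, z_context))  # {'up_z': 1.0, 'down_z': 0.0}
print(propensity_profile(psi, x_context))  # {'up_x': 0.5, 'down_x': 0.5}

The same vector thus encodes different manifestation propensities for different reciprocal partners, which is just the relational point.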
This dispositional view fits well with physics, since both state-dependent and state-independent properties (like mass and charge) are only known via their manifestations in interactions.4 While advocates disagree about the details, the idea that the basic ontology of the world features objects that bear dispositions/propensities has also been used more widely to address a number of difficult philosophical topics (like modality). Most importantly, perhaps, dispositions and their manifestations provide a good basis for theorizing about causation.5

Fitting Both Quantum Systems and Scientists Into the Causal Web

To conclude, I'll briefly describe how I would flesh out this ontological picture, putting an emphasis on causation. I mentioned above the role representational models play in explanation. To be more specific, scientific explanations are typically causal explanations: they seek to place a phenomenon in a structured causal context. When successful explanations feature models, then, these models represent features of the world's causal structure. The suggestions above on how to view the ontology associated with RQM fit into a particularly attractive theory of this structure. This is a modified version of Wesley Salmon's causal process account (Salmon, 1984). Here the basic entity or object is labeled a causal process, and there are two dimensions of causation: propagation and production. Propagation refers to the evolution of a causal process in the absence of interaction, while production refers to the change that causal processes undergo when an interaction occurs. As described by Ladyman & Ross: "The metaphysic suggested by process views is effectively one in which the entire universe is a graph of real processes, where the edges are uninterrupted processes, and the vertices the interactions between them" (Ladyman & Ross, 2007, 263).

According to Salmon, a propagating causal process carries or "transmits" causal influence from one spacetime point to another. The character of this causal influence is then altered by interactions. I theorize that this causal influence takes the form of a cluster of dispositions or propensities toward mutual interactions (aka a propensity profile). The interactions produce a change in this profile.6

To summarize:
1. The web of nature has two aspects: the persisting causal process and the causal interaction event (a discrete change-making interaction between processes).
2. The quantum formalism offers a partial representation of the propensity profile of a causal process. It is partial because these are only the propensities toward manifestations that take place in interactions with (one or more) designated reference systems. The Schrödinger equation represents the propagation of these propensities from one interaction to the next.
3. All manifestations are mutual, and take the form of a change in the profile of each process involved in the interaction. A quantum measurement is an interaction like any other. Humans may treat the wave function as representing the phenomena we are tracking, but we are also causal processes, as are our measuring devices. It is just that the changes manifest in us in an interaction (our altered propensity profiles) are conceptualized as epistemic.
4. Per RQM, when two physical systems interact, they are represented as an entangled composite system from the perspective of a third system.
This relational representation of the composite system might in practice be thought of as a limitation on what the third system "knows." Under certain conditions, however, this entanglement can have a distinctive indirect impact on the third system—interference effects—revealing that it is not only epistemic (as always, decoherence explains why we rarely experience these effects).

There is much more to flesh out, of course. I would add to this summary an account of how composite systems form higher-level propensities of their own, based on the pattern of repeated interactions of their constituents. Also, there is an interesting question of how serious a (relational or perspectival) scientific realist to be about the properties identified in quantum theory. My preference is to be a realist about the (singular) causal network, but to view the formalism as offering only an idealized depiction of regularities in the propensity profiles of the underlying causal processes.

1 For background, see the Stanford Encyclopedia article (Laudisa & Rovelli, 2019). Rovelli's original paper is Rovelli (1996; arXiv:quant-ph/9609002). Good philosophical discussions include Brown (2009), Van Fraassen (2010), Dorato (2016; note that the final version differs significantly from the preprint), and Ruyant (2018).
2 For a recent attempt to carefully describe the principles of orthodox QM, see Poinat (2020).
3 What Martin calls "reciprocal disposition partners." See Martin (2008), especially Ch. 5.
4 In addition to contemporary work by Dorato and others, there have been a handful of theorists over the decades since QM was formulated who have employed dispositions/propensities to interpret QM. See Suárez (2007) for a survey of some of these.
5 Important work here includes Chakravartty (2007) and Mumford & Anjum (2011).
6 The main changes from Salmon's own work are as follows. The first is to be a realist about dispositions/propensities, whereas Salmon's version of empiricism drove him to reject any suggestion of causal powers. He characterized causal processes in terms of their transmission of an observable "mark" or, in a subsequent version of the theory, the transmission of a conserved physical quantity. The second change is that causal processes cannot be said to propagate in spacetime, as Salmon envisioned, since this would be inconsistent with the non-local character of quantum systems.

Brown, M. J. (2009). Relational Quantum Mechanics and the Determinacy Problem. The British Journal for the Philosophy of Science, 60(4), 679-695.
Chakravartty, A. (2007). A Metaphysics for Scientific Realism. Cambridge: Cambridge University Press.
Dorato, M. (2016). Rovelli's Relational Quantum Mechanics, Anti-Monism, and Quantum Becoming. In A. Marmodoro & D. Yates (Eds.), The Metaphysics of Relations (pp. 235-262). Oxford: Oxford University Press.
Ladyman, J., & Ross, D. (2007). Everything Must Go. Oxford: Oxford University Press.
Laudisa, F., & Rovelli, C. (2019). Relational Quantum Mechanics. The Stanford Encyclopedia of Philosophy, Winter 2019 Edition.
Maccone, L. (2019). A Fundamental Problem in Quantizing General Relativity. Foundations of Physics, 49, 1394-1403.
Martin, C. B. (2008). The Mind in Nature. Oxford: Oxford University Press.
Mumford, S., & Anjum, R. L. (2011). Getting Causes from Powers. Oxford: Oxford University Press.
Poinat, S. (2020). Quantum Mechanics and Its Interpretations: A Defense of the Quantum Principles.
Foundations of Physics, 1-18.
Rovelli, C. (1996). Relational Quantum Mechanics. International Journal of Theoretical Physics, 35, 1637-1678.
Ruyant, Q. (2018). Can We Make Sense of Relational Quantum Mechanics? Foundations of Physics, 48, 440-455.
Suárez, M. (2007). Quantum Propensities. Studies in History and Philosophy of Modern Physics, 38, 418-438.
Van Fraassen, B. (2010). Rovelli's World. Foundations of Physics, 40, 390-417.
Vinante, A. (2020). Narrowing the Parameter Space of Collapse Models with Ultracold Layered Force Sensors. Physical Review Letters, 125, 100401.
Evolution of perturbations and spectra in multi-component ultralight axionic universes

Yi-Hsiung Hsu1*, Tzihong Chiueh1,2,3
1Institute of Astrophysics, National Taiwan University, Taipei, Taiwan
2Department of Physics, National Taiwan University, Taipei, Taiwan
3Center for Theoretical Physics, National Taiwan University, Taipei, Taiwan
* Presenter: Yi-Hsiung Hsu, email: arthurhsu3388@gmail.com

The spectra for multi-component ultralight-axion dark matter universes are computed and analyzed. To obtain power spectra, the evolution of the axion field must be evaluated for each wavenumber k mode. Because of the stiff Klein-Gordon equation and strong Thomson scattering, this computation is formidable. Our program applies the Schrödinger equation and a diffusion approximation, which reduces the run time from about 30 minutes to about 1 second over the same integration interval while retaining accuracy above 95%. This improved scheme enables the construction of initial conditions for the next generation of simulations. In the two-component universe, we evaluate different configurations among cold dark matter, free-particle axions, and extreme axions with different particle masses. With equal background energy for the two components, the spectral cutoff is generally less drastic for a larger mass difference.

Keywords: Dark Matter, Multi-component Universe, Power Spectra
Why vacations are essential for physics

Vacations are good for your health: they allow you to get away from the daily grind and let yourself unwind. They are vital in enabling you to recharge your batteries and get your psyche away from work, work, work. And while they allow us to forget about the office for a bit, they can also help stimulate new and innovative ideas. Such an event occurred for a young German physicist struggling in 1925 to make the breakthrough he was so very close to. Werner Heisenberg needed a break.

Something's got to give

He had experienced a mental block and, to make matters worse, he was suffering from a horrendous bout of hay fever. Heisenberg resided in Göttingen, and during one summer he was tortured by persistent allergic reactions; something had to change. So he went on vacation to Helgoland, a tiny island in the middle of the North Sea, to give his sinuses a rest more than anything else.

A eureka moment

The change of location really helped him, as the new scenery allowed him to breathe and think more clearly. Finding inspiration for his research, he realized he was employing a technique that would not permit him to measure his results, so he reformulated the mathematics in terms of quantities he could measure. Upon his return to Göttingen, his research partner managed to connect the dots, and the German research team took the tentative first steps into what is now known as modern quantum mechanics.

Hey! I'm tired too

The strategic taking of a vacation was repeated in the same year, 1925, with an equal measure of success. Erwin Schrödinger was working on his own quantum problem, regarding states within atoms. Desperately trying to make a breakthrough, he kept finding himself confronted with mathematical hurdles. After months of working and not getting anywhere fast, Schrödinger took a skiing vacation with one of his lady friends. This bout of rest and recreation was just the ticket: after hitting the slopes during the day and working at a desk in the evenings, he found the equation he had been so desperately looking for. That equation is now known as the Schrödinger equation, and it describes the states of electrons in hydrogen in terms of de Broglie's electron waves.

Mental fatigue is not your friend

Clearly, physicists have demanding jobs, and affording themselves a break every so often enables them to refresh and reboot, something which can boost the creative process. Instead of slogging away for 12 hours a day in a lab, it is useful to recognize when you aren't getting anywhere and to come back in a day or two with a fresher pair of eyes.

Please, boss, it'll help you too

While not every physicist will figure out such groundbreaking theories, the examples of the two scientists above show that even the most brilliant mind needs time to stop working and chill out. Bosses often want more and more from their employees and think that going at it for hours upon hours will get the job done; sometimes you need to take a step, or two, backward in order to move forward. So if you're stuck on a problem in your work, see if your boss will give you a little break. It might prove to be the best solution to your problem, and theirs.
Where Will Quantum Computers Create Value—and When?

By Matt Langione, Corban Tillemann-Dick, Amit Kumar, and Vikas Taneja

Despite the relentless pace of progress over the last half-century, there are still many problems that today's computers can't solve. Some simply await the next generation of semiconductors rounding the bend on the assembly line. Others will likely remain beyond the reach of classical computers forever. It is the prospect of finally finding a solution to these "classically intractable" problems that has CIOs, CTOs, heads of R&D, hedge fund managers, and others abuzz at the dawn of the era of quantum computing.

Their enthusiasm is not misplaced. In the coming decades, we expect productivity gains by end users of quantum computing, in the form of both cost savings and revenue opportunities, to surpass $450 billion annually. Gains will accrue first to firms in industries with complex simulation and optimization requirements. It will be a slow build for the next few years: we anticipate value for end users in these sectors to reach a relatively modest $2 billion to $5 billion by 2024. But value will then increase rapidly as the technology and its commercial viability mature. When they do, the opportunity will not be evenly distributed—far from it. Since quantum computing is a step-change technology with substantial barriers to adoption, early movers will seize a large share of the total value, as laggards struggle with integration, talent, and IP.

Based on interviews and workshops involving more than 100 experts, a review of some 150 peer-reviewed publications, and analysis of more than 35 potential use cases, this report assesses how and where quantum computing will create business value, the likely progression, and what steps executives should take now to put their firms in the best position to capture that value.

Who Benefits?

If quantum computing's transformative value is at least five to ten years away, why should enterprises consider investing now? The simple answer is that this is a radical technology that presents formidable ramp-up challenges, even for companies with advanced supercomputing capabilities. Both quantum programming and the quantum tech stack bear little resemblance to their classical counterparts (although the two technologies might learn to work together quite closely). Early adopters stand to gain expertise, visibility into knowledge and technological gaps, and even intellectual property that will put them at a structural advantage as quantum computing gains commercial traction.

More important, many experts believe that progress toward maturity in quantum computing will not follow a smooth, continuous curve. Instead, quantum computing is a candidate for a precipitous breakthrough that may come at any time. Companies that have invested to integrate quantum computing into the workflow are far more likely to be in a position to capitalize—and the leads they open will be difficult for others to close. This will confer substantial advantage in industries in which classically intractable computational problems lead to bottlenecks and missed revenue opportunities. We have previously explored the likely development of quantum computing over the next ten years in The Coming Quantum Leap in Computing.
The assessment of future business value begins with the question of what kinds of problems quantum computers can solve more efficiently than binary machines. It's far from a simple answer, but two indicators are the size and complexity of the calculations that need to be done. Take drug discovery, for example. For scientists trying to design a compound that will attach itself to, and modify, a target disease pathway, the critical first step is to determine the electronic structure of the molecule. But modeling the structure of a molecule of an everyday drug such as penicillin, which has 41 atoms at ground state, requires a classical computer with some 10^86 bits—more transistors than there are atoms in the observable universe. Such a machine is a physical impossibility. But for quantum computers, this type of simulation is well within the realm of possibility, requiring a processor with 286 quantum bits, or qubits.

This radical advantage in information density is why many experts believe that quantum computers will one day demonstrate superiority, or quantum advantage, over classical computers in solving four types of computational problems that typically impede efforts to address numerous business and scientific challenges. (See Exhibit 1.) These four problem types cover a large application landscape in a growing number of industries, which we will explore below.

Three Phases of Progress

Quantum computing is coming. But when? How will this sea change play out? What will the impact look like early on, and how long will it take before quantum computers are delivering on the full promise of quantum advantage? We see applications (and business income) developing over three phases. (See Exhibit 2.)

The NISQ Era

The next three to five years are expected to be characterized by so-called NISQ (Noisy Intermediate-Scale Quantum) devices, which are increasingly capable of performing useful, discrete functions but are characterized by high error rates that limit functionality. One area in which digital computers will retain an advantage for some time is accuracy: they experience fewer than one error in 10^24 operations at the bit level, while today's qubits destabilize much too quickly for the kinds of calculations necessary for quantum-advantaged molecular simulation or portfolio optimization. Experts believe that error correction will remain quantum computing's biggest challenge for the better part of a decade. That said, research underway at multiple major companies and startups, among them IBM, Google, and Rigetti, has led to a series of technological breakthroughs in error mitigation techniques that maximize the usefulness of NISQ-era devices. These efforts increase the chances that the near to medium term will see the development of medium-sized, if still error-prone, quantum computers that can be used to produce the first quantum-advantaged experimental discoveries in simulation and combinatorial optimization.

Broad Quantum Advantage

In 10 to 20 years, the period that will witness broad quantum advantage, quantum computers are expected to achieve superior performance in tasks of genuine industrial significance. This will provide step-change improvements over the speed, cost, or quality of a binary machine. But it will require overcoming significant technical hurdles in error correction and other areas, as well as continuing increases in the power and reliability of quantum processors. Quantum advantage has major implications.
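Before turning to the industry cases, a quick back-of-the-envelope check on the qubit arithmetic above (a sketch under the standard assumption that n qubits span a state space of 2^n complex amplitudes; the penicillin figures themselves are the report's):

import math

# n qubits describe a state space of 2**n amplitudes, so a bit-for-amplitude
# classical emulation of a 286-qubit state needs on the order of 10**86 numbers.
n_qubits = 286
log10_amplitudes = n_qubits * math.log10(2)
print(f"2^{n_qubits} is roughly 10^{log10_amplitudes:.0f}")  # ~10^86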
Consider the case of chemicals R&D. If quantum simulation enables researchers to model interactions among materials as they grow in size—without the coarse, distorting heuristic techniques used today—companies will be able to reduce, or even eliminate, expensive and lengthy lab processes such as in situ testing. Already, companies such as Zapata Computing are betting that quantum-advantaged molecular simulation will drive not only significant cost savings but the development of better products that reach the market sooner. The story is similar for automakers, airplane manufacturers, and others whose products are, or could be, designed according to computational fluid dynamics. These simulations are currently hindered by the inability of classical computers to model fluid behavior on large surfaces (or at least to do so in practical amounts of time), necessitating expensive and laborious physical prototyping of components. Airbus, among others, is betting on quantum computing to produce a solution. The company launched a challenge in 2019 "to assess how [quantum computing] could be included or even replace other high-performance computational tools that, today, form the cornerstone of aircraft design."

Full-Scale Fault Tolerance

The third phase is still decades away. Achieving full-scale fault tolerance will require makers of quantum technology to overcome additional technical constraints, including problems related to scale and stability. But once they arrive, we expect fault-tolerant quantum computers to affect a broad array of industries. They have the potential to vastly reduce trial and error and improve automation in the specialty-chemicals market, enable tail-event defensive trading and risk-driven high-frequency trading strategies in finance, and even promote in silico drug discovery, which has major implications for personalized medicine.

With all this promise, it's little surprise that the value creation numbers get very big over time. In the industries we analyzed, we foresee quantum computing leading to incremental operating income of $450 billion to $850 billion by 2050 (with a nearly even split between incremental annual revenue streams and recurring cost efficiencies). (See Exhibit 3.) While that's a big carrot, it comes at the end of a long stick. More important for today's decision makers is understanding the potential ramifications in their industries: what problems quantum computers will solve, where and how the value will be realized, and how they can put their organizations on the path to value ahead of the competition.

How to Benefit

But what should companies do today to get ready? A good first step is performing a diagnostic assessment to determine the potential impact of quantum computing on the company or industry and then, if appropriate, developing a partnership strategy, ideally with a full-stack technology provider, to start the process of integrating capabilities and solutions. The first part of the diagnostic is a self-assessment of the company's technical challenges and use of computing resources, ideally involving people from R&D and other functions, such as operations, finance, and strategy, to push boundaries and bring a full perspective to what will ultimately be highly technical discussions. The key questions to ask are:
• Are you currently spending a lot of money or other resources to tackle problems with a high-performance computer? If so, do these efforts yield low-impact, delayed, or piecemeal results that leave value on the table?
• Does the presumed difficulty of solving simulation or optimization problems prevent you from trying high-performance computing or other computational solutions?
• Are you spending resources on inefficient trial-and-error alternatives, such as wet-lab experiments or physical prototyping?
• Are any of the problems you work on rooted in the quantum-advantaged problem archetypes identified above?

If the answer to any of these questions is yes, the next step is an "impact of quantum" (IQ) diagnostic that has two components. The first is sizing a company's unsolved technical challenges and the potential quantum computing solutions as they are expected to develop and mature over time. The goal is to visualize the potential value of solutions that address real missed revenue opportunities, delays in time to market, and cost inefficiencies. This analysis requires combining domain-specific knowledge (of molecular simulation, for example) with expertise in quantum computing and then assessing potential future value. (We demonstrate how this is done at the industry level in the next section.)

The second component of the IQ assessment is a vendor assessment. Given the ever-changing nature of the quantum computing ecosystem, it is critical to find the right partner or providers, meaning companies that have expertise across the broadest set of technical challenges that you face. Some form of partnership will likely be the best play for enterprises wishing to get a head start on building a capability in the near term. A low-risk, low-cost strategy, it enables companies to understand how the technology will affect their industry, determine what skills and IT gaps they need to fill, and even play a role in shaping the future of quantum computing by providing technology providers with the industry-specific skills and expertise necessary to produce solutions for critical near-term applications. Partnerships have already become the model of choice for most of the commercial activity in the field to date. Among the collaborations formed so far are JPMorgan Chase and IBM's joint development of solutions related to risk assessment and portfolio optimization, Volkswagen and Google's work to develop batteries for electric vehicles, and the Dubai Electricity and Water Authority's alliance with Microsoft to develop energy optimization solutions.

High-Impact Applications

One way to assess where quantum computing will have an early or outsized impact is to connect the quantum-advantaged problem types shown in Exhibit 1 with discrete pain points in particular industries. Behind each pain point is a bottleneck for which there may be multiple solutions or a latent pool of income that can be tapped in many ways, so the mapping must account for solutions rooted in other technologies—machine learning, for example—that may arrive on the scene sooner or at lower cost, or that may be integrated more easily into existing workflows.
Establishing a valuation for quantum computing in a given industry (or for a given firm) over time—charting what we call a path to value—therefore requires gathering and synthesizing expertise from a number of sources, including:
• Industry business leaders who can attest to the business value of addressing a given pain point
• Industry technical experts who can assess the limits of current and future nonquantum solutions to the pain point
• Quantum computing experts who can confirm that quantum computers will be able to solve the problem and when

Using this methodology, we sized up the impact of quantum advantage on a number of sectors, with an emphasis on the early opportunities. Here are the results.

Materials Design and Drug Discovery

On the face of things, no two fields of R&D more naturally lend themselves to quantum advantage than materials design and drug discovery. Even if some experts dispute whether quantum computers will have an advantage in modeling the properties of quantum systems, there is no question that the shortcomings of classical computers limit R&D in these areas. Materials design, in particular, is a slow lab process characterized by trial and error. According to R&D Magazine, for specialty materials alone, global firms spend upwards of $40 billion a year on candidate material selection, material synthesis, and performance testing. Improvements to this workflow will yield not only cost savings through efficiencies in design and reduced time to market, but revenue uplift through net new materials and enhancements to existing materials. The benefits of design improvements yielding optimal synthetic routes would also, in all likelihood, flow downstream, affecting the estimated $460 billion spent annually on industrial synthesis.

The biggest benefit quantum computing offers is the potential for simulation, which for many materials requires computing power that binary machines do not possess. Reducing trial-and-error lab processes and accelerating discovery of new materials are only possible if materials scientists can derive higher-level spectral, thermodynamic, and other properties from ground-state energy levels described by the Schrödinger equation. The problem is that none of today's approximate solutions—from Hartree-Fock to density functional theory—can account for the quantized nature of the electromagnetic field. Current computational approximations apply only to a subset of materials for which interactions between electrons can effectively be ignored or easily approximated, and there remains a well-defined set of problems in want of simulation-based solutions—as well as outsized rewards for the companies that manage to solve them first. These problems include simulations of strongly correlated electron systems (for high-temperature superconductors), manganites with colossal magnetoresistance (for high-efficiency data storage and transfer), multiferroics (for high-absorbency solar panels), and high-density electrochemical systems (for lithium-air batteries).

All of the major players in quantum computing, including IBM, Google, and Microsoft, have established partnerships or offerings in materials science and chemistry in the last year. Google's partnership with Volkswagen, for example, is aimed at simulations for high-performance batteries and other materials. Microsoft released a new chemical simulation library developed in collaboration with Pacific Northwest National Laboratory.
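Underlying all of these simulation efforts is the ground-state problem mentioned above. A minimal classical illustration (Python/NumPy; the matrix is a random stand-in, not a real molecular Hamiltonian) shows that the computation itself is just an eigenvalue problem; the trouble is that the matrix dimension grows as 2^n with system size:

import numpy as np

# Ground-state energy = smallest eigenvalue of the Hamiltonian. For an n-qubit
# system the matrix is 2**n x 2**n, which is why the classical cost explodes.
rng = np.random.default_rng(7)
n = 10                                    # 10 "qubits" -> a 1024 x 1024 matrix
A = rng.normal(size=(2**n, 2**n))
H = (A + A.T) / 2                         # toy real symmetric (Hermitian) matrix
ground_energy = np.linalg.eigvalsh(H)[0]  # eigenvalues come back in ascending order
print(f"dimension {2**n}, toy ground-state energy {ground_energy:.2f}")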
IBM, having run the largest-ever molecular simulation on a quantum computer in 2017, released an end-to-end stack for quantum chemistry in 2018. Potential end users of the technology are embracing these efforts. One researcher at a leading global materials manufacturer believes that quantum computing "will be able to make a quality improvement on classical simulations in less than five years," during which period value to end users approaching some $500 million is expected to come in the form of design efficiencies (measured in terms of reduced expenditures across the R&D workflow). As error correction enables functional simulations of more complex materials, "you'll start to unlock new materials and it won't just be about efficiency anymore," a professor of chemistry told us. During the period of broad quantum advantage, we estimate that upwards of $5 billion to $15 billion in value (which we measure in terms of increased R&D productivity) will accrue to end users, principally through development of new and enhanced materials. Once full-scale fault-tolerant quantum computers become available, value could reach the range of $30 billion to $60 billion, principally through new materials and extensions of in-market patent life as time to market is reduced. As the head of business development at a major materials manufacturer put it, "If unknown chemical relationships are unlocked, the current specialty market [currently $51 billion in operating income annually] could double."

Quantum advantage in drug discovery will arrive later, given the maturity of existing simulation methods for "established" small molecules. Nonetheless, in the long run, as quantum computers unlock simulation capabilities for molecules of increasing size and complexity, experts believe that drug discovery will be among the most valuable of all industry applications. In terms of cost savings, the drug discovery workflow is expected to become more efficient, with in silico modeling increasingly replacing expensive in vitro and in vivo screening. But there is good reason to believe that there will be major top-line implications as well. Experts expect more powerful simulations not only to promote the discovery of new drugs but also to generate replacement value over today's generics as larger molecules produce drugs with fewer off-target effects. Between reducing the $35 billion in annual R&D spending on drug discovery and boosting the $920 billion in yearly branded pharmaceutical revenues, quantum computing is expected to yield $35 billion to $75 billion in annual operating income for end users once companies have access to fault-tolerant machines.

Financial Services

In recent history, few if any industries have been faster to adopt vanguard technologies than financial services. There is good reason to believe that the industry will quickly ramp up investments in quantum computing, which can be expected to address a clearly defined set of simulation and optimization problems—in particular, portfolio optimization in the short term and risk analytics in the long term. Investment money has already started to flow to startups, with Goldman Sachs and Fidelity investing in full-stack companies such as D-Wave, while RBS and Citigroup have invested in software players such as 1QBit and QC Ware. Our discussions with quantitative investors about the pain points in portfolio optimization, arbitrage strategy, and trading costs make it easy to understand why.
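A concrete miniature helps here (a hedged sketch with made-up numbers, not any firm's model): even a tiny cardinality-constrained selection problem already has the QUBO shape that quantum annealers and QAOA-style algorithms target, and the classical search space doubles with every added asset.

import itertools
import numpy as np

# Toy asset selection as a QUBO: x_i in {0, 1}, maximize return minus risk,
# holding exactly K names. All numbers below are illustrative only.
mu = np.array([0.10, 0.12, 0.07, 0.03])      # expected returns (made up)
sigma = np.array([[0.10, 0.02, 0.01, 0.00],  # covariance matrix (made up)
                  [0.02, 0.12, 0.03, 0.01],
                  [0.01, 0.03, 0.08, 0.02],
                  [0.00, 0.01, 0.02, 0.05]])
lam, penalty, K = 1.0, 1.0, 2                # risk aversion, constraint weight

def qubo_cost(x):
    # Minimize: -return + risk + quadratic penalty for violating the "pick K" rule.
    return -mu @ x + lam * (x @ sigma @ x) + penalty * (x.sum() - K) ** 2

# Brute force visits all 2**n selections; realistic portfolios make this intractable.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=len(mu))),
           key=qubo_cost)
print("best selection:", best, "cost:", round(float(qubo_cost(best)), 4))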
While investors use classical computers for all these problems today, the capabilities of these machines are limited—not so much by the number of assets or the number of constraints introduced into the model as by the type of constraints. For example, adding noncontinuous, nonconvex functions such as interest rate yield curves, trading lots, buy-in thresholds, and transaction costs to investment models makes the optimization "surface" so complex that classical optimizers often crash, simply take too long to compute, or, worse yet, mistake a local optimum for the global optimum. To get around this problem, analysts often simplify or exclude such constraints, sacrificing the fidelity of the calculation for reliability and speed. Such tradeoffs, many experts believe, would be unnecessary with quantum combinatorial optimization. Exploiting the probability amplitudes of quantum states is expected to dramatically accelerate portfolio optimization, enabling a full complement of realistic constraints and reducing portfolio turnover and transaction costs—which one head of portfolio risk at a major US bank estimates to represent as much as 2% to 3% of assets under management. We calculate that income gains from portfolio optimization should reach $200 million to $500 million in the next three to five years and accelerate swiftly with the advent of enhanced error correction during the period of broad quantum advantage.

The resulting improvements in risk analytics and forecasting will drive value creation beyond $5 billion. As the brute-force Monte Carlo simulations used for risk assessment today give way to more powerful "quantum walk algorithms," faster simulations will give banks more time to react to negative market risk (with estimated returns of as much as 12 basis points). The expected benefits include better intraday risk analytics for banks and near-real-time risk assessment for quantitative hedge funds. "Brute-force Monte Carlo simulations for economic spikes and disasters took a whole month to run," complained one former quantitative analyst at a leading US hedge fund. Bankers and hedge fund managers hope that, with the kind of whole-market simulations theoretically possible on full-scale fault-tolerant quantum computers, they will be able to better predict black-swan events and even develop risk-driven high-frequency trading. "Moving risk management from positioning defensively to an offensive trading strategy is a whole new paradigm," noted one former trader at a US hedge fund. Coupled with enhanced model accuracy and positioning against extreme tail events, reductions in capital reserves (by as much as 15% in some estimates) will position quantum computing to deliver $40 billion to $70 billion in operating income to banks and other financial services companies as the technology matures.

Computational Fluid Dynamics

Simulating the precise flow of liquids and gases in changing conditions on a computer, known as computational fluid dynamics, is a critical but costly undertaking for companies in a range of industries. Spending on simulation software by companies using CFD to design airplanes, spacecraft, cars, medical devices, and wind turbines exceeded $4 billion in 2017, but the costs that weigh most heavily on decision makers in these industries are those related to expensive trial-and-error testing such as wind tunnel and wing flex tests.
These direct costs, together with the revenue potential of energy-optimized design, have many experts excited by the prospect of introducing quantum simulation into the workflow. The governing equations behind CFD, known as the Navier-Stokes equations, are nonlinear partial differential equations and thus a natural fit for quantum computing.

The first bottleneck in the CFD workflow is actually an optimization problem in the preprocessing stage that precedes any fluid dynamics algorithms. Because of the computational complexity involved in these algorithms, designers create a mesh to simulate the surface of an object—say, an airplane wing. The mesh is composed of geometric primitives whose vertices form a constellation of nodes. Most classical optimizers impose a limit of about 10^9 on the number of nodes in a mesh that can be simulated efficiently. This forces the designer into a tradeoff between how fine-grained and how large a surface can be simulated. Quantum optimization is expected to relieve the designer of that constraint so that bigger pieces of the puzzle can be solved at once and more accurately—from the spoiler, for example, to the entire wing. Improving this preprocessing stage of the design process is expected to lead to operating-income gains of between $1 billion and $2 billion across industries through reduced costs and faster revenue realization.

As quantum computers mature, we expect the benefits of improved mesh optimization to be surpassed by those from accelerated and improved simulations. As with mesh optimization, the tradeoff in fluid simulations is between speed and accuracy. "For large simulations with more than 100 million cells," one of our own experts told us, "run times could be weeks even on very powerful supercomputers." And that is with the use of simplifying heuristics, such as approximate turbulence models. During the period of broad quantum advantage, experts believe that quantum simulation could enable designers to reduce the number of heuristics required to run Navier-Stokes solvers in manageable time periods, resulting in the replacement of expensive physical testing with accurate moving-ground aerodynamic models, unsteady aerodynamics, and turbulent-flow simulations. The benefits to end users in terms of cost reductions are expected to start at $1 billion to $2 billion during this period. With full-scale fault tolerance, value creation could as much as triple, as experts anticipate that quantum linear solvers will unlock predictive simulations that not only obviate physical testing requirements but lead to product improvements (such as improved fuel economy) and manufacturing yield optimization as well. We expect value creation in the phase of full-scale fault tolerance to range from $19 billion to $37 billion in operating income.

Other Industries

During the NISQ era, we expect more than 40% of the value created in quantum computing to come from materials design, drug discovery, financial services, and applications related to CFD. But applications in other industries will show early promise as well. Examples include:
• Transportation and Logistics. Using quantum computers to address inveterate optimization challenges (such as the traveling salesman problem and the minimum spanning tree problem) is expected to lead to efficiencies in route optimization, fleet management, network scheduling, and supply chain optimization.
• Energy.
With the era of easy-to-find oil and gas coming to an end, companies are increasingly reliant on wave-based geophysical processing to locate new drilling sites. Quantum computing could not only accelerate the discovery process but also contribute to drilling optimizations for both greenfield and brownfield operations.
• Meteorology. Many experts believe that quantum simulation will improve large-scale weather and climate forecasting technologies, which would not only enable earlier storm and severe-weather warnings but also bring speed and accuracy gains to industries that depend on weather-sensitive pricing and trading strategies.

Should quantum computing become integrated into machine learning workflows, the list of affected industries would expand dramatically, with salient applications wherever predictive capabilities (supervised learning and deep learning), principal component analysis (dimension reduction), and clustering analysis (for anomaly detection) provide an advantage. While experts are divided on the timing of quantum computing's impact on machine learning, the stakes are so high that many of the leading players are already putting significant resources against it today, with promising early results. For example, in conjunction with researchers from Oxford and MIT, a group from IBM recently proposed a set of methods for optimizing and accelerating support vector machines, which are applicable to a wide range of classification problems but have fallen out of favor in recent years because they quickly become inefficient as the number of predictor variables rises and the feature space expands. The eventual role of quantum computing in machine learning is still being defined, but early theoretical work, at least for optimizing current methods in linear algebra and support vector machines, shows promise.

While it may be years before investments in a quantum strategy begin to pay off, failure to understand the coming impact of quantum computing in one's industry is at best a missed opportunity, at worst an existential mistake. Companies that stay on the sidelines, assuming they can buy their way into the game later on, are likely to find themselves playing catchup—and with a lot of ground to cover.
John Stewart Bell

In 1964 John Bell analyzed David Bohm's 1952 suggestion for "hidden variables" added to the 1935 "thought experiments" of Einstein, Podolsky, and Rosen (EPR), which could make them into real experiments. Bell put limits on local "hidden variables" in the form of what he called an "inequality," the violation of which would confirm standard quantum mechanics. Some thinkers, mostly philosophers of science rather than working quantum physicists, think that the work of Bohm and Bell has restored the determinism in physics that Einstein had wanted, and that Bohm and/or Bell had discovered the "local elements of reality" that Einstein hoped for in EPR.
But Bell himself came to the conclusion that local "hidden variables" will never be found that give the same results as quantum mechanics. This has come to be known as Bell's Theorem. All theories that reproduce the predictions of quantum mechanics will be "nonlocal," Bell concluded. Nonlocality is an element of physical reality, and it has produced some remarkable new applications of quantum physics, including quantum cryptography and quantum computing.

Bohm proposed an improvement on the original EPR experiment (which measured continuous position and momentum variables). Bohm's reformulation of quantum mechanics postulates (undetectable) deterministic positions and trajectories for atomic particles, where the instantaneous collapse happens in a new "quantum potential" field that can move faster than light speed. But it is still a "nonlocal" theory. So Bohm (and Bell) believed that nonlocal "hidden variables" might exist, and that new information can come into existence at remote "space-like separations" at speeds faster than light, if not instantaneously. This is the idea of entanglement.

The original EPR paper was based on a question of Einstein's about two electrons fired in opposite directions from a central source with equal velocities. Einstein imagined them starting from a distance at t0 and approaching one another with high velocities, then for a short time interval from t1 to t1 + Δt in contact with one another, where experimental measurements could be made on the momenta, after which they separate. Now at a later time t2 it would be possible to make a measurement of electron 1's position and therefore to know the position of electron 2 without measuring it explicitly. Einstein used the conservation of linear momentum to "know" the symmetric position of the other electron. This knowledge implies information about the remote electron that is available instantly. Einstein called this "spooky action-at-a-distance." It might better be called "knowledge-at-a-distance."

Bohm and his colleague Yakir Aharonov in 1957 proposed a new EPR-like thought experiment using two electrons that are prepared in an initial state of known total spin zero. Instead of measuring the continuous variables position and momentum as in EPR, Bohm measures the discrete property of electron spin. If one electron spin is 1/2 in the up direction and the other is spin down, or -1/2, the total spin is zero. The underlying physical law of importance is still a conservation law, in this case the conservation of spin angular momentum. Until the moment that one electron spin is measured, the two-particle quantum state is spherically symmetric (rotationally invariant). There is no preferred spatial direction. It is described as a superposition (a linear combination) of particle 1 up, particle 2 down plus particle 1 down with particle 2 up...

ψ12 = (1/√2) [ ψ+(1) ψ-(2) - ψ-(1) ψ+(2) ]      (2)

We can simplify the notation:

| ψ12 > = (1/√2) | + - > - (1/√2) | - + >      (2a)

Note that this combination preserves the total electron spin as zero and it offers no preferred spatial direction. Note also that under exchange of the two indistinguishable fermions, the antisymmetric wave function changes its sign, thus the minus sign in the above equations. The spherical symmetry is broken when one observer (freely) chooses a direction in which to measure a spin component of either particle. Erwin Schrödinger described this moment as "disentangling" the particles.
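The "no preferred direction" claim is easy to check numerically (a sketch in Python/NumPy, using the basis ordering |++>, |+->, |-+>, |--> for equation (2a)): for the singlet state, the expected spin component of either particle vanishes along every measurement axis.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Singlet state (2a) in the basis |++>, |+->, |-+>, |-->.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

rng = np.random.default_rng(1)
n = rng.normal(size=3)
n /= np.linalg.norm(n)                     # a random measurement direction
spin_n = n[0] * sx + n[1] * sy + n[2] * sz
expectation = np.vdot(psi, np.kron(spin_n, I2) @ psi).real
print(expectation)                         # ~0.0 for every direction n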
In his 1964 paper "On the Einstein-Podolsky-Rosen Paradox," Bell made the case for nonlocality.

The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts to show that even without such a separability or locality requirement no 'hidden variable' interpretation of quantum mechanics is possible. These attempts have been examined [by Bell] elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed [by Bohm]. That particular interpretation has indeed a gross non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.

"Pre-determination" is too strong a term. The first measurement just "determines" the later measurement. We shall see that the "second" measurement is synchronous with the "first" in a "special" frame. The two measurements have a "common cause." Bell describes explicitly how the "measurement of the component σ1 · a, where a is some unit vector, yields the value +1 then, according to quantum mechanics, measurement of σ2 · a must yield the value −1 and vice versa." But Schrödinger, who knew more about two-particle wave functions than anyone, explains that while the two particles are entangled (with total spin 0), any measurement disentangles them, while conserving the total spin zero in the measurement direction. If Alice measures the electron spin of particle 1 in the x-direction as +ℏ/2, then Bob will measure a perfectly anti-correlated -ℏ/2 for particle 2.

Note that since it was quantum random whether the two-particle state would be projected into | + - > or into | - + >, successive measurements by Alice and Bob will generate two perfectly anti-correlated strings of + and - (or 0 and 1). This is exactly what is needed for the keys used in quantum cryptography. Each individual string is a sequence of independent, identically distributed random bits. And the strings have been generated in separated locations, giving a secure communications channel that cannot be eavesdropped: the ideal for quantum key distribution (QKD).

A decade later, Bell titled his 1976 review of the first tests of his theorem about his predicted inequalities "Einstein-Podolsky-Rosen Experiments." He described his talk as about the "foundations of quantum mechanics," and it was the early days of a movement by a few scientists and many philosophers of science to challenge the "orthodox" quantum mechanics. They particularly attacked the Copenhagen Interpretation, with its notorious speculations about the role of the "conscious observer" and its attacks on physical reality, especially the claim that objects have no properties until they are measured.
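To make the key-distribution point above concrete, here is a deliberately simplified sketch (real protocols such as BB84 or E91 add basis choices and sifting; the bit convention + mapped to 1 and - mapped to 0 is an assumption for illustration):

    import secrets

    # Alice's outcomes form a random bit string (+ -> 1, - -> 0); Bob's outcomes
    # are its bitwise complement. Bob flips his bits to obtain Alice's key
    # without any key material ever being transmitted between them.
    n = 16
    alice_key = [secrets.randbits(1) for _ in range(n)]  # her random outcomes
    bob_record = [1 - b for b in alice_key]              # perfectly anti-correlated
    bob_key = [1 - b for b in bob_record]                # flip to agree with Alice
    assert bob_key == alice_key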
From the earliest presentations in the late 1920's of the ideas of the supposed "founders" of quantum mechanics, Einstein had deep misgivings about the work going on in Copenhagen, although he never doubted the calculating power of their new mathematical methods, and he came to accept the statistical (indeterministic) nature of quantum physics, which he himself had reluctantly discovered in his 1916 study of the atomic emission of light quanta. He described their work as "incomplete" because it is based on the statistical results of many experiments, so it can only make probabilistic predictions about individual experiments. Nevertheless, Einstein hoped to visualize what is going on in an underlying "objective reality."

Bell was deeply sympathetic to Einstein's hopes for a return to the "local reality" of classical physics. He identified the EPR paper's title, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" as a search for new variables to provide the completeness. Bell thought David Bohm's "hidden variables" were one way to achieve this, though Einstein had called Bohm's approach "too cheap," probably because Bohm included "quantum potentials" traveling faster than light speed, an obvious violation of Einstein's special theory of relativity. In his 1976 review, Bell wrote...

I have been invited to speak on "foundations of quantum mechanics"... The area in question is that of Einstein, Podolsky, and Rosen. Suppose, for example, that protons of a few MeV energy are incident on a hydrogen target. Occasionally one will scatter, causing a target proton to recoil. Suppose (Fig. 1) that we have counter telescopes T1 and T2 which register when suitable protons are going towards distant counters C1 and C2. With ideal arrangements, registering of both T1 and T2 will then imply registering of both C1 and C2 after appropriate time delays. Suppose next that C1 and C2 are preceded by filters that pass only particles of given polarization, say those with spin projection +1 along the z axis. Then one or both of C1 and C2 may fail to register. Indeed for protons of suitable energy one and only one of these counters will register on almost every suitable occasion — i.e., those occasions certified as suitable by telescopes T1 and T2. This is because proton-proton scattering at large angle and low energy, say a few MeV, goes mainly in S wave. But the antisymmetry of the final wave function then requires the antisymmetric singlet spin state. In this state, when one spin is found "up" the other is found "down". This follows formally from the quantum expectation value

<singlet| σz(1) σz(2) |singlet> = -1

where ½σz(1) and ½σz(2) are the z component spin operators for the two particles. Suppose now the source-counter distances are such that the proton going towards C1 arrives there before the other proton arrives at C2. Someone looking at counter C1 will not know in advance whether it will or will not register. But once he has noted what happens to C1 at the appropriate time, he immediately knows what will happen subsequently to C2, however far away C2 may be. [Bell again uses conservation of total spin to get "knowledge-at-a-distance."] Some people find this situation paradoxical. They may, for example, have come to think of quantum mechanics as fundamentally indeterministic.
In particular they may have come to think of the result of a spin measurement on an unpolarized particle (and each particle, considered separately, is unpolarized here) as utterly indefinite until it has happened. And yet here is a situation where the result of such a measurement is perfectly definitely known in advance.

It did become determined (but it was not predetermined beforehand) by the measurement at C1, which collapses the entangled two-particle wave function.

Did it only become determined at the instant when the distant particle passed the distant filter? But how could what happens a long way off change the situation here? Is it not more reasonable to assume that the result was somehow predetermined all along?

Since Bell's original work, many other physicists have defined other "Bell inequalities" and developed increasingly sophisticated experiments to test them. Most recent tests have used oppositely polarized photons coming from a central source. Here, it is the total photon spin of zero that is conserved. The first experiments that confirmed Bell's Theorem were done by John Clauser and Stuart Freedman in 1971. Clauser and Abner Shimony described the first few experiments in a 1978 review. There they agreed with Bell about measurements on two spin-1/2 particles, as suggested by David Bohm. Clauser and Shimony wrote...

A variant of EPR's argument was given by Bohm and Aharonov (1957), formulated in terms of discrete states. He considered a pair of spatially separated spin-1/2 particles produced somehow in a singlet state, for example, by dissociation of the spin-0 system... Suppose that one measures the spin of particle 1 along the x axis. The outcome is not predetermined by the description [wave function] Ψ12. But from it, one can predict that if particle 1 is found to have its spin parallel to the x axis, then particle 2 will be found to have its spin antiparallel to the x axis if the x component of its spin is also measured. Thus, an experimenter can arrange the apparatus in such a way that he can predict the value of the x component of spin of particle 2 presumably without interacting with it (if there is no action-at-a-distance).

When the x component is measured, it disrupts the y and z components, rendering them indeterminate. All the spin components are not simultaneously definite! See Dirac's three polarizers.

Likewise, he can arrange the apparatus so that he can predict any other component of the spin of particle 2. The conclusion of the argument is that all components of spin of each particle are definite, which of course is not so in the quantum-mechanical description. Hence, a hidden-variables theory seems to be required.

Clauser and Shimony are wrong to conclude that measuring one spin component would render spin components in all directions definite. If all three x, y, z components of spin had definite values of 1/2, the resultant vector (the diagonal of a cube with side 1/2) would have length √3/2. This is impossible. A measured spin component is always quantized at ℏ/2. The unmeasured components are in a linear combination of +ℏ/2 and -ℏ/2 (with average value zero!).

Although Bell's Theorem is one of the foundational documents in the "Foundations of Quantum Mechanics," it is cited much more often than the confirming experiments are explained, because they are quite complicated. The most famous explanations are given in terms of analogies, with flashing lights, dice throws, or card games. See David Mermin.
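Rather than analogies, one can verify the expectation value Bell quotes directly from the singlet state. A minimal numerical sketch using the standard Pauli matrices (nothing here is specific to any one experiment):

    import numpy as np

    sigma_z = np.array([[1, 0], [0, -1]])
    sigma_x = np.array([[0, 1], [1, 0]])

    # |singlet> = (|+ -> - |- +>) / sqrt(2) in the basis {|++>, |+->, |-+>, |-->}
    singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

    for sigma in (sigma_z, sigma_x):
        pair_op = np.kron(sigma, sigma)     # sigma(1) sigma(2) acting on the pair
        print(singlet @ pair_op @ singlet)  # -1.0 for z and for x alike

The value -1 along any single axis is just the perfect anti-correlation; the rotational invariance shows up in the fact that the x, y, and z components all give the same answer.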
What is needed is an explanation describing exactly what happens to the quantum particles and their statistics. The most important experiments were likely those done by John Clauser, Michael Horne, Abner Shimony, and Richard Holt (known collectively as CHSH) and later by Alain Aspect, who did even more sophisticated tests.

When the two particles reach the polarizers a and b they are always found in opposite spin states (one up or +, the other down or -). This is consistent with Einstein's "objective reality" that the particles have had those values since their mutual state was prepared (entangled) at t = 0. Now this is true whether σx or σy is measured (assuming the transmission axis is along the z direction). But keep in mind that if σx is measured, σy is then indeterminate. This is why we say that the outcome of a measurement depends on the "free choice" of the experimenter. A choice to measure in the x direction gives us a value of the spin component in the x direction, σx.

Did the spin in the x direction exist before the measurement? No. Did the spins in the two orthogonal directions exist before the measurement? No. Those orthogonal spins definitely do not exist after the measurement, since the measurement is also a state preparation. σx now exists; σy and σz do not. All three potential spins are latent in the rotationally invariant state with total spin 0, in the sense that whichever direction is chosen for a measurement, if the same direction is chosen for the other particle it will be found to have opposite spin (by conservation of angular momentum). If a different direction is chosen for the other particle, it will no longer be perfectly correlated with the first particle's spin.

When photons are used, their boson spins are ±1, not ±1/2. But if photons are entangled with opposite spins so the total spin is zero, the results of Bell tests will be the same.

Experimental Results

With the exception of some of Holt's early results that were found to be erroneous, no evidence has so far been found of any failure of standard quantum mechanics. And as experimental accuracy has improved by orders of magnitude, quantum physics has correspondingly been confirmed to one part in 10^18, and any information transfer between particles would have to travel at no less than 10^6 times the speed of light. There has been no evidence for local "hidden variables." Bell Theorem tests usually add what Bell called "filters," polarization analyzers whose polarization angles can be set, sometimes switched at high speed between the so-called "first" and "second" measurements.

On David Bohm's "Impossible" Pilot Wave

John Bell reflected on Bohm's Pilot Wave in 1987...

Bohm's 1952 papers on quantum mechanics were for me a revelation. The elimination of indeterminism was very striking. But more important, it seemed to me, was the elimination of any need for a vague division of the world into "system" on the one hand, and "apparatus" or "observer" on the other. I have always felt since that people who have not grasped the ideas of those papers ... and unfortunately they remain the majority ... are handicapped in any discussion of the meaning of quantum mechanics. A preliminary account of these notions was entitled "Quantum field theory without observers, or observables, or measurements, or systems, or apparatus, or wavefunction collapse, or anything like that". This could suggest to some that the issue in question is a philosophical one. But I insist that my concern is strictly professional.
I think that conventional formulations of quantum theory, and of quantum field theory in particular, are unprofessionally vague and ambiguous. Professional theoretical physicists ought to be able to do better. Bohm has shown us a way.

Following John Bell's idea, Nicolas Gisin and Antoine Suarez argue that something might be coming from "outside space and time" to correlate results in their own experimental tests of Bell's Theorem. Roger Penrose and Stuart Hameroff have proposed causes coming "backward in time" to achieve the perfect EPR correlations, as has philosopher Huw Price.

A Preferred Frame?

Back in the 1960s, C. W. Rietdijk and Hilary Putnam argued that physical determinism could be proved to be true by considering experiments and observers A and B at a "spacelike" separation and moving at high speed with respect to one another. Roger Penrose developed a similar argument in his book The Emperor's New Mind. It is called the Andromeda Paradox.

If there is a preferred frame of reference, surely it is the one in which the origin of the two entangled particles is at rest. Assuming that Alice and Bob are also at rest in this frame and equidistant from the origin, we arrive at the simple picture in which any measurement that causes the two-particle wave function to collapse makes both particles appear simultaneously at determinate places (just what is needed to conserve energy, momentum, angular momentum, and spin). Because the term "preferred frame" already has a specific meaning in special relativity, where all inertial frames are equivalent, we might instead call this frame a "special frame."

The EPR "paradox" is the result of a naive non-relativistic description of events. Although the two events (measurements of particles A and B) are simultaneous in our special frame, the space-like separation of the events means that from Alice's point of view, any knowledge of event B is out in her future. Bob likewise sees Alice's event A out in his future. These both cannot be true. Yet they are both true (and in some sense neither is true). Thus the paradox.

Instead of just one particle making an appearance in the collapse of a single-particle wave function, in the two-particle case, when either particle is measured, we know instantly those properties of the other particle that satisfy the conservation laws, including its location equidistant from, but on the opposite side of, the source, and its other properties such as spin. You can compare the collapse of the two-particle probability amplitude to the single-particle collapse.

We can enhance our visualization of what might be happening between the time two entangled electrons are emitted with opposite spins and the time one or both electrons are detected. Quantum mechanics describes the state of the two electrons as a linear combination of | + - > and | - + > states. We can visualize the electron moving left as being both spin up | + > and spin down | - >, and the electron moving right as being both spin down | - > and spin up | + >. We could require that when the left electron is spin up | + >, the right electron must be spin down | - >, so that total spin is always conserved. Stepped through frame by frame, such an animation shows that total spin = 0 is conserved: when one electron is spin up, the other is always spin down. Note that we can challenge the idea that the spins are oscillating. Would a force of some kind be needed to change the spins in sync?
Perhaps the rapid changes are like resonance phenomena in molecular bonds? Standard quantum mechanics says we cannot know the spin until it is measured; our minimal-information estimate is a 50/50 probability between up and down. Despite accepting that a particular value of an "observable" can only be known by a measurement (knowledge is an epistemological problem), Einstein asked whether the particle actually (really, ontologically) has a path and position before we measure it. His answer was yes.

The assumption illustrated here is that the two electrons are randomly produced in states that have latent components that conserve spin angular momentum, and that they remain in those states no matter how far they separate, provided neither interacts with anything else before the measurement. Since each electron has only one unit of electron spin (a magnetic moment equal to one Bohr magneton), we can only say that if measured in a given direction, the spin will be projected into that direction for the left electron, and into the opposite direction for the right electron.

Werner Heisenberg, and later Paul Dirac and others, refer to the "free choice" of the experimenter as to which direction is chosen for the measurement. But then Dirac adds that nature makes a random choice as to whether the electron spin will be found up or down in that chosen direction. Entanglement adds the nonlocality and non-separability that is caused by the (single) two-particle wave function collapsing symmetrically and simultaneously in our special frame.

How Mysterious Is Entanglement?

Schrödinger knew that his two-particle wave function Ψ12 could not have the same simple interpretation as that of the single particle, which can be visualized in ordinary 3-dimensional configuration space. And he is right that entanglement apparently exhibits a richer form of the apparent "action-at-a-distance" and nonlocality that Einstein had already identified in the collapse of the single-particle wave function. But the main difference is that two particles acquire new properties instead of one, and they appear to do it instantaneously (at faster-than-light speeds), just as in the case of a single-particle measurement, where the probability of finding that particular particle anywhere else becomes zero instantaneously. Nonlocality and entanglement are thus just another manifestation of Richard Feynman's "only" mystery.

In both single-particle and two-particle cases, paradoxes appear only when we attempt to describe individual particles following specific paths to measurement by observer A (and/or observer B). We cannot know the specific paths at every instant without measurements. But Einstein has told us that at every instant the particles are conserving momentum, despite our lack of knowledge between individual experiments.

We can ask what happens if Bob is not at the same distance from the origin as Alice, but farther away. When Alice detects the particle (with, say, spin up), at that instant the other particle also becomes determinate (with spin down) at the same distance on the other side of the origin. It then continues, in that determinate state, to Bob's measuring apparatus. Recall Bell's description of the process (quoted above), with its bias toward assuming that first one measurement is made, and the other measurement is made later. Since the collapse of the two-particle wave function is indeterminate, nothing is pre-determined, although σ2 is indeed determined to have the opposite sign (to conserve spin angular momentum) once σ1 is measured.
Here Bell is describing the "following" measurement as being in the same direction as the "previous" measurement. In Bell's description, Bob is measuring "the same component" as Alice, meaning that he measures at the same angle as Alice. If Bob should measure in a different spin direction from Alice (a different spin component), his measurements will lose their perfect correlation, slowly at first for a small angle. As the angle between their measurements increases, the correlation falls off as the square of the cosine of half the angle. Oddly, Bell's inequality for local hidden variables predicts a linear falloff with angle.

In our case, the entangled particles have been prepared in a superposition of states, both of which have total spin zero. So whichever of these two states is created by the preparation, it will put the two particles in opposite spin states, randomly + - or - +, but still supporting Bell's view that they will be perfectly (anti-)correlated when measured at exactly the same angle (measuring the same spin component). Wolfgang Pauli called it a "measurement of the first kind" when a system is prepared in a state and, if measured again, will certainly be found in the same state. (This is the basis for the quantum Zeno effect.) Since our two electrons have been prepared with one spin up and the other down, what could possibly cause them to change, for example, to both spins in the same direction, or, as Copenhagen claims, simply to have both spins no longer definite until the next measurement? As long as nothing interferes with either entangled particle as they travel to the distant detectors, they will be found to be still perfectly correlated, if (and only if) they are measured at the same angle. Otherwise, the correlations should fall off as the square of the cosine of half the angle difference.

We can illustrate the straight-line predictions of Bell's inequalities for local hidden variables, the cosine curves predicted by quantum mechanics and conservation of angular momentum, and the odd "kinks" at angles 0°, 90°, 180°, and 270°, with what is called a "Popescu-Rohrlich box." In his famous 1981 article on "Bertlmann's Socks," Bell explains that the predictions of his "ad hoc" model are linear in the angle difference |a - b|, and he notes that his inequality only agrees with the quantum predictions at the corners of the square of linear predictions, and not at intermediate angles.

To account then for the Einstein-Podolsky-Rosen-Bohm correlations we have only to assume that the two particles emitted by the source have oppositely directed magnetic axes. Then if the magnetic axis of one particle is more nearly along (than against) one Stern-Gerlach field, the magnetic axis of the other particle will be more nearly against (than along) a parallel Stern-Gerlach field. So when one particle is deflected up, the other is deflected down, and vice versa. There is nothing whatever problematic or mind-boggling about these correlations, with parallel Stern-Gerlach analyzers, from the Einsteinian point of view. So far so good. But now go a little further than before, and consider non-parallel Stern-Gerlach magnets. Let the first be rotated away from some standard position, about the particle line of flight, by an angle a. Let the second be rotated likewise by an angle b.
Then if the magnetic axis of either particle separately is randomly oriented, but the axes of the particles of a given pair are always oppositely oriented, a short calculation gives for the probabilities of the various possible results, in the ad hoc model,

P(up, down) = P(down, up) = 1/2 - |a - b|/2π

where 'up' and 'down' are defined with respect to the magnetic fields of the two magnets. However, a quantum mechanical calculation gives

P(up, down) = P(down, up) = 1/2 - (1/2) sin²((a - b)/2)  [= (1/2) cos²((a - b)/2)]

Thus the ad hoc model does what is required of it (i.e., reproduces quantum mechanical results) only at (a - b) = 0, (a - b) = π/2 and (a - b) = π, but not at intermediate angles.

The dependence on the square of the cosine is the so-called "law of Malus" for crossed polarizers, as pointed out by Abner Shimony in his Stanford Encyclopedia article on Bell's Theorem. Paul Dirac taught his "principle of superposition" with crossed polarizers in his 1930 textbook The Principles of Quantum Mechanics.

Can Perfect Correlations Be Explained by Conservation Laws?

We find that David Bohm, Eugene Wigner, and even John Bell used conservation of angular momentum (or particle spin) to tell us that if one spin-1/2 electron is measured up, the other must be down, just as Albert Einstein used conservation of linear momentum in his development of the EPR Paradox. David Bohm and Yakir Aharonov made this argument in 1957. Writing a few years after Bohm, and one year before Bell, Eugene Wigner in 1962 explicitly described Einstein's conservation-of-momentum example as well as the conservation of angular momentum (spin) that explains perfect correlations between angular momentum (spin) components measured in the same direction. Writing in 1964, and just like Bohm and Wigner, Bell is implicitly using the conservation of total spin. Albert Einstein had made the same argument to Leon Rosenfeld in 1933, shortly before EPR, though with conservation of linear momentum.

And in our case, quantum mechanics describes the entangled particles as prepared in a superposition of two-particle states, but note that both of the states have total spin zero. Now this initial entangled state is spherically symmetric and rotationally invariant. It has no preferred spin direction that could "pre-determine" the directions that will be found by Alice and Bob, as Bell described. The preferred direction is created by Alice's measurement, or by Bob's should he measure first in the "special frame" in which Alice and Bob are at rest and equidistant from the location of the initial entanglement.

Let's assume that Alice measures first and gets spin +1/2. The prepared state has been projected (randomly) into ψ+(1) ψ-(2). But most important, Alice's measurement establishes the angle of her spin measurement - the angle of her Stern-Gerlach magnet in the x,y plane. Werner Heisenberg says it is her free choice to measure the x-component. As the Copenhagen Interpretation describes this, Alice brings this x-component property into existence. (This was Pascual Jordan's contribution to the interpretation.) There was no x- or y-component in the rotationally invariant prepared entanglement. Paul Dirac pointed out that the actual value of the property depends on what he calls "Nature's choice." The prepared state might equally well have been projected into ψ-(1) ψ+(2). This is the source of the quantum randomness which is critically important for quantum encryption.
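A few lines of arithmetic make Bell's comparison concrete. The sketch below simply evaluates the two formulas above at several angle differences; it is only the equations, not a model of the experiments:

    import math

    def p_adhoc(d):
        """Bell's ad hoc local model: linear in the angle difference |a - b|."""
        return 0.5 - abs(d) / (2 * math.pi)

    def p_quantum(d):
        """Quantum prediction: (1/2) cos^2((a - b)/2)."""
        return 0.5 * math.cos(d / 2) ** 2

    for d in (0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4, math.pi):
        print(f"a-b = {d:4.2f}   ad hoc: {p_adhoc(d):.4f}   quantum: {p_quantum(d):.4f}")

The two columns agree only at 0, π/2 and π, the "corners" Bell mentions; at intermediate angles the linear model underestimates P(up, down), and that difference is what the experiments test.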
Whichever of the two states is projected by Alice's measurement, it breaks the original symmetry and puts the two particles in opposite spin states, randomly + - or - +, supporting the views of Bohm, Wigner, and Bell that the particles will be perfectly (anti-)correlated when measured. In our example, since Alice measured the x-component of spin as +1/2, Bob will necessarily (because of conservation of angular momentum) measure the x-component as -1/2.

As we saw above, Wolfgang Pauli called it a "measurement of the first kind" when a system is prepared in a state such that, when measured again, it will certainly be found in the same state. As long as nothing interferes with either entangled particle as they travel to the distant detectors (though perhaps decoherence?), they will be found to be perfectly correlated if (and only if) they are measured at the same angle (in our case, the x-component). Otherwise, the correlations should fall off as the square of the cosine of half the angle difference. It is strange that Bell accepted an inequality that predicts correlations falling off with angle as a non-physical straight-line function with "kinks."

In any case, conservation laws tell us that when either particle is measured, we know instantly those properties of the other particle, including its location equidistant from, but on the opposite side of, the entangling interaction, and all other conserved properties such as spin. But this is not "action-at-a-distance." It's just "knowledge-at-a-distance."

A more recent (2005) study showing that the correlations in Bell tests are the result of conservation of angular momentum is "Correlation functions, Bell's inequalities and the fundamental conservation laws," by C. S. Unnikrishnan of the Tata Institute in India. He also discusses the odd "kinks" in Bell's linear predictions of correlations compared to the conservation-law curve.

No "Hidden Variables," but Perhaps "Hidden Constants"?

We find no need for "hidden variables," whether local or non-local. But we might say that the conservation laws give us "hidden constants." Conservation of a particular property is often described as a "constant of the motion." These constants might be viewed as "local," in that they travel along with the particles at all times, or as "global," in that they are a property of the two-particle probability amplitude wave function Ψ12 as it spreads out in space. This agrees with Bohm, and especially with Bell, who says that the spin of particle 2 is "predetermined" to be found up if particle 1 is measured to be down.

But recall that the Copenhagen Interpretation says we cannot know a spin property until it is measured. So some claim that the spins are in an unknown combination of spin down and spin up until the measurements. It is this that suggests the possibility that both spins might be found in the same direction, violating conservation laws. Although electron spins in this situation are never found to be the same when measured in the same direction, the Copenhagen view gave rise to the idea of a hidden variable as some sort of signal that could travel to particle 2 after the measurement of particle 1, causing it to change its spin to be opposite that of particle 1. What sort of signal might this be? And what mechanism exists in a bare electron that receives the signal and then causes the electron to change its spin without an external force of some kind?
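The standard way to quantify how far the measured correlations exceed anything a local account can deliver is the CHSH combination named after the experimenters above. A minimal sketch, assuming the quantum correlation E(a, b) = -cos(a - b) that follows from the cos² law (the analyzer settings are the textbook-optimal choice, an assumption here):

    import math

    def E(a, b):
        """Quantum spin-correlation for singlet pairs measured at angles a and b."""
        return -math.cos(a - b)

    a1, a2 = 0.0, math.pi / 2               # Alice's two analyzer settings
    b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two analyzer settings

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))   # ~2.828, i.e. 2*sqrt(2)

Local hidden-variable models obey |S| ≤ 2; the quantum value 2√2 at these settings is what the CHSH and Aspect experiments confirmed.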
Clearly, Wigner's explicit conservation of angular momentum, and the implicit claims of Bohm and Bell that the electron spins were prepared (entangled) in opposite states, give us the simplest and clearest explanations of the entanglement mystery. The intuitive idea that the particles were prepared with opposite spins can be interpreted as the "common cause" of the correlations.

Despite accepting that a particular value of some "observables" can only be "known" by a measurement (knowledge is an epistemological problem), Einstein asked whether the particle actually (really, ontologically) has a path and position, even other properties, before we measure it. His answer was yes. So Einstein would likely agree with Wigner, Bohm, and Bell in assuming that the two particles have opposite spins from the time of their entangling interaction. Two "hidden constants" of the motion, one spin up, one down, completely explain the fact of perfect correlations of opposing spins. That "Nature's" initial choice of up-down versus down-up is quantum random explains why the bit strings created by Alice and Bob can be used in quantum encryption. Quantum keys are distributed over a secure communications channel that cannot be "tapped" by an eavesdropper without destroying the perfect correlation of the pair of bit strings.

Principle Theories and Constructive Theories

In his 1933 essay, "On the Method of Theoretical Physics," Albert Einstein argued that the greatest physical theories would be built on "principles," not on constructions derived from physical experience. His theory of special relativity was based on the principle of relativity, that the laws of physics are the same in all inertial frames, along with the constant velocity of light in all frames. Our explanation of entanglement as the result of "hidden constants" of the motion is based on conservation principles, which, as Emmy Noether showed, are based on still deeper principles of symmetry. This principle theory explaining entanglement is also supported by the empirical evidence that entangled electron spins are always found in opposite directions, conserving the angular momentum. Einstein would have approved.

In 1987, Bell contributed an article entitled Are There Quantum Jumps? to a centenary volume for Erwin Schrödinger. Schrödinger strenuously denied quantum jumps or collapses of the wave function. Bell's title was inspired by two articles with the same title written by Schrödinger in 1952 (Part I, Part II).

Just a year before Bell's death in 1990, physicists assembled for a conference on 62 Years of Uncertainty (referring to Werner Heisenberg's 1927 principle of indeterminacy). John Bell's contribution to this conference was an article called "Against Measurement." In it he attacked Max Born's statistical interpretation of quantum mechanics (which Born acknowledged was based on an original suggestion of Albert Einstein). And Bell praised the new ideas of GianCarlo Ghirardi and his colleagues, Alberto Rimini and Tullio Weber:

In the beginning, Schrödinger tried to interpret his wavefunction as giving somehow the density of the stuff of which the world is made. He tried to think of an electron as represented by a wavepacket — a wave-function appreciably different from zero only over a small region in space. The extension of that region he thought of as the actual size of the electron — his electron was a bit fuzzy. At first he thought that small wavepackets, evolving according to the Schrödinger equation, would remain small.
But that was wrong. Wavepackets diffuse, and with the passage of time become indefinitely extended, according to the Schrödinger equation. But however far the wavefunction has extended, the reaction of a detector to an electron remains spotty. So Schrödinger's 'realistic' interpretation of his wavefunction did not survive.

Then came the Born interpretation. The wavefunction gives not the density of stuff, but gives rather (on squaring its modulus) the density of probability. Probability of what exactly? Not of the electron being there, but of the electron being found there, if its position is 'measured.' Why this aversion to 'being' and insistence on 'finding'? The founding fathers were unable to form a clear picture of things on the remote atomic scale. They became very aware of the intervening apparatus, and of the need for a 'classical' base from which to intervene on the quantum system. And so the shifty split.

The kinematics of the world, in this orthodox picture, is given by a wavefunction (maybe more than one?) for the quantum part, and classical variables — variables which have values — for the classical part: (Ψ(t, q, ...), X(t), ...). The Xs are somehow macroscopic. This is not spelled out very explicitly. The dynamics is not very precisely formulated either. It includes a Schrödinger equation for the quantum part, and some sort of classical mechanics for the classical part, and 'collapse' recipes for their interaction.

It seems to me that the only hope of precision with the dual (Ψ, X) kinematics is to omit completely the shifty split, and let both Ψ and X refer to the world as a whole. Then the Xs must not be confined to some vague macroscopic scale, but must extend to all scales. In the picture of de Broglie and Bohm, every particle is attributed a position x(t). Then instrument pointers — assemblies of particles — have positions, and experiments have results. The dynamics is given by the world Schrödinger equation plus precise 'guiding' equations prescribing how the x(t)s move under the influence of Ψ. Particles are not attributed angular momenta, energies, etc., but only positions as functions of time. Peculiar 'measurement' results for angular momenta, energies, and so on, emerge as pointer positions in appropriate experimental setups. Considerations of KG [Kurt Gottfried] and vK [N. G. van Kampen] type, on the absence (FAPP) [For All Practical Purposes] of macroscopic interference, take their place here, and an important one, in showing how usually we do not have (FAPP) to pay attention to the whole world, but only to some subsystem, and can simplify the wave-function... FAPP.

The Born-type kinematics (Ψ, X) has a duality that the original 'density of stuff' picture of Schrödinger did not. The position of the particle there was just a feature of the wavepacket, not something in addition. The Landau–Lifshitz approach can be seen as maintaining this simple non-dual kinematics, but with the wavefunction compact on a macroscopic rather than microscopic scale. We know, they seem to say, that macroscopic pointers have definite positions. And we think there is nothing but the wavefunction. So the wavefunction must be narrow as regards macroscopic variables. The Schrödinger equation does not preserve such narrowness (as Schrödinger himself dramatised with his cat). So there must be some kind of 'collapse' going on in addition, to enforce macroscopic narrowness.
In the same way, if we had modified Schrödinger's evolution somehow we might have prevented the spreading of his wavepacket electrons. But actually the idea that an electron in a ground-state hydrogen atom is as big as the atom (which is then perfectly spherical) is perfectly tolerable — and maybe even attractive. The idea that a macroscopic pointer can point simultaneously in different directions, or that a cat can have several of its nine lives at the same time, is harder to swallow. And if we have no extra variables X to express macroscopic definiteness, the wavefunction itself must be narrow in macroscopic directions in the configuration space. This the Landau–Lifshitz collapse brings about. It does so in a rather vague way, at rather vaguely specified times.

In the Ghirardi–Rimini–Weber scheme (see the contributions of Ghirardi, Rimini, Weber, Pearle, Gisin and Diosi presented at 62 Years of Uncertainty, Erice, Italy, 5-14 August 1989) this vagueness is replaced by mathematical precision. The Schrödinger wavefunction, even for a single particle, is supposed to be unstable, with a prescribed mean life per particle, against spontaneous collapse of a prescribed form. The lifetime and collapsed extension are such that departures from the Schrödinger equation show up very rarely and very weakly in few-particle systems. But in macroscopic systems, as a consequence of the prescribed equations, pointers very rapidly point, and cats are very quickly killed or spared.

The orthodox approaches, whether the authors think they have made derivations or assumptions, are just fine FAPP — when used with the good taste and discretion picked up from exposure to good examples. At least two roads are open from there towards a precise theory, it seems to me. Both eliminate the shifty split. The de Broglie–Bohm-type theories retain, exactly, the linear wave equation, and so necessarily add complementary variables to express the non-waviness of the world on the macroscopic scale. The GRW-type theories have nothing in the kinematics but the wavefunction. It gives the density (in a multidimensional configuration space!) of stuff. To account for the narrowness of that stuff in macroscopic dimensions, the linear Schrödinger equation has to be modified, in the GRW picture, by a mathematically prescribed spontaneous collapse mechanism. The big question, in my opinion, is which, if either, of these two precise pictures can be redeveloped in a Lorentz invariant way.

...All historical experience confirms that men might not achieve the possible if they had not, time and time again, reached out for the impossible. (Max Weber)

...we do not know where we are stupid until we stick our necks out. (R. P. Feynman)

On the 22nd of January 1990, Bell gave a talk explaining his theorem at CERN in Geneva, organized by Antoine Suarez, director of the Center for Quantum Philosophy. There are links on the CERN website to the video of this talk, and to a transcription. In this talk, Bell summarizes the situation as follows:

It just is a fact that quantum mechanical predictions and experiments, in so far as they have been done, do not agree with [my] inequality. And that's just a brutal fact of nature... that's just the fact of the situation; the Einstein program fails, that's too bad for Einstein, but should we worry about that? I cannot say that action at a distance is required in physics. But I can say that you cannot get away with no action at a distance. You cannot separate off what happens in one place and what happens in another.
Somehow they have to be described and explained jointly.

Bell gives three reasons for not worrying.

1. Nonlocality is unavoidable, even if it looks like "action at a distance." [It does not, with a proper understanding of quantum physics. See our EPR page.]

2. Because the events are at a spacelike separation, either one can occur before the other in some relativistic frame, so no "causal" connection can exist between them.

3. No faster-than-light signals can be sent using entanglement and nonlocality.

He concludes:

So as a solution of this situation, I think we cannot just say 'Oh oh, nature is not like that.' I think you must find a picture in which perfect correlations are natural, without implying determinism, because that leads you back to nonlocality. And also in this independence as far as our individual experiences goes, our independence of the rest of the world is also natural. So the connections have to be very subtle, and I have told you all that I know about them. Thank you.

The work of GianCarlo Ghirardi that Bell endorsed is a scheme that makes the wave function collapse by adding small (order of 10^-24) nonlinear and stochastic terms to the linear Schrödinger equation. GRW cannot predict when and where their collapse occurs (it is simply random), but contact with macroscopic objects such as a measuring apparatus (with on the order of 10^24 atoms) makes the probability of collapse of order unity.

Information physics removes Bell's "shifty split" without "hidden variables" and without ad hoc non-linear additions like those of Ghirardi-Rimini-Weber to the linear Schrödinger equation. The "moment" at which the boundary between quantum and classical worlds occurs is the moment that irreversible observable information enters the universe. So we can now look at John Bell's drawing of possible locations for his "shifty split" and identify the correct moment - when irreversible information enters the universe.

References:
Against Measurement (PDF)
Beables for Quantum Field Theory (PDF)
On the Einstein-Podolsky-Rosen Paradox (PDF)
On the Impossible Pilot Wave (PDF)
Are There Quantum Jumps? (PDF, excerpt)
BBC Interview (PDF, excerpt)
Epistemological Letters
"Correlation functions, Bell's inequalities and the fundamental conservation laws," C. S. Unnikrishnan, Tata Institute, 2005
Imaginary numbers can be essential in describing reality

Mathematicians were disturbed, centuries ago, to find that calculating the properties of certain curves required what seemed impossible: numbers that, multiplied by themselves, become negative. All the numbers on the number line, when squared, give a positive number: 2² = 4 and (−2)² = 4. Mathematicians began to call these familiar numbers "real" and the seemingly impossible breed "imaginary" numbers.

Imaginary numbers, labeled in units of i (where, for example, (2i)² = −4), gradually became essential in the abstract field of mathematics. For physicists, however, real numbers sufficed to quantify reality. Sometimes so-called complex numbers, with both real and imaginary parts, such as 2 + 3i, have simplified calculations, but in an apparently optional manner. No instrument has ever returned a reading with an i.

Yet physicists may have just shown for the first time that imaginary numbers are, in a sense, real. A group of quantum theorists have devised an experiment whose outcome depends on whether nature has an imaginary side. Provided quantum mechanics is correct - a hypothesis few could complain about - the team's argument essentially ensures that complex numbers are an inevitable part of our description of the physical universe.

"These complex numbers are usually just a practical tool, but here it turns out that they really have a physical meaning," said Tamás Vértesi, a physicist at the Institute for Nuclear Research of the Hungarian Academy of Sciences, who, years ago, argued otherwise. "The world is such that it really needs these complex numbers," he said.

In quantum mechanics, the behavior of a particle or group of particles is encapsulated by a wave entity known as a wave function, or ψ. The wave function predicts possible results of measurements, such as an electron's position or momentum. The so-called Schrödinger equation describes how the wave function changes over time - and this equation has an i. Physicists have never been quite sure what to make of it. When Erwin Schrödinger derived the equation that now bears his name, he hoped to scrub the i out. "What is unpleasant here, and in fact directly objectionable, is the use of complex numbers," he wrote to Hendrik Lorentz in 1926. "ψ is surely a fundamentally real function."

Schrödinger's desire was certainly mathematically plausible: any property of complex numbers can be captured by combinations of real numbers plus new rules to keep them in line, opening up the mathematical possibility of an entirely real version of quantum mechanics. Indeed, the translation turned out to be simple enough that Schrödinger almost immediately discovered what he believed to be the "true wave equation," one that avoided i. "Another heavy stone has been rolled away from my heart," he wrote to Max Planck less than a week after his letter to Lorentz. "Everything came out exactly as we would have liked."

But using real numbers to simulate complex quantum mechanics is a clunky and abstract exercise, and Schrödinger admitted that his fully real equation was too cumbersome for everyday use. Within a year, he was describing wave functions as complex, just as physicists think of them today. "Anyone who wants to work uses the complex description," said Matthew McKague, a quantum computer scientist at Queensland University of Technology in Australia.
Yet the real-number formulation of quantum mechanics has lived on as evidence that the complex version is merely optional. Teams such as Vértesi's and McKague's, for example, showed in 2008 and 2009 that - without an i in sight - they could perfectly predict the outcome of a famous quantum physics experiment known as the Bell test.

The new research, posted on a scientific preprint server in January, finds that those early Bell test proposals simply did not go far enough to rule out the real-number version of quantum physics. It proposes a more intricate Bell experiment that seems to require complex numbers. Previous research had led people to conclude that "in quantum theory, complex numbers are only convenient, but not necessary," wrote the authors, including Marc-Olivier Renou of the Institute of Photonic Sciences in Spain and Nicolas Gisin of the University of Geneva. "Here we prove this conclusion wrong."
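A small illustration of the arithmetic behind the article, using Python's built-in complex type (this shows only why the i matters for the Schrödinger equation's phase evolution, not the Bell-type experiment the paper proposes):

    import cmath

    print((2j) ** 2)   # (-4+0j): an imaginary number squared is negative

    # The i in the Schrodinger equation turns time evolution into a rotation
    # of phase. A factor exp(-i*theta) always has modulus 1, so total
    # probability is conserved; a real exponential would grow or decay instead.
    for theta in (0.5, 1.0, 2.0):
        print(abs(cmath.exp(-1j * theta)))   # 1.0 each time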
e, π and the Exponential Function

Throughout mathematics and its applications, we often encounter the numbers e and π. But what do they actually mean, what makes them so prevalent, and how are they related?

Both numbers are deeply intertwined with the exponential function, denoted exp, which can be described simply as "the function which is its own derivative". (Or, in slightly less simple but more accurate terms: exp is the only function f:\mathbb{R}\to\mathbb{R} which is differentiable everywhere and satisfies f'(x)=f(x) for every x\in\mathbb{R}, and f(0)=1. You can also use \mathbb{C} instead of \mathbb{R}.) Another way to say this is that exp is a solution to the simple differential equation y'=y. As such, it is a building block for solutions to differential equations of all kinds.

Differential equations describe how the change in some quantity relates to the quantity itself. They describe how the universe works at all levels – from the most microscopic and fundamental, such as

• Electromagnetism (Maxwell's equations),
• Gravity (Einstein's field equations),
• Quantum mechanics (Schrödinger equation),

to the macroscopic –

• The motion of springs, pendulums, projectiles and planets,
• Waves – be it sea waves, sound waves or radio waves,
• Electronic circuits,
• Radioactive decay,
• Structural integrity of buildings,
• Rockets and space launches,
• The growth of populations, be it humans, animals, bacteria in a petri dish, viruses in a human host, or people sick with COVID-19,
• Financial dynamics, like money in a bank account, stock prices, the revenues of a company, or the exchange rate of currencies such as Bitcoin,
• Adoption of new technologies,
• Social phenomena, like memes and viral videos,
• And much more – including purely abstract mathematical concepts which have no direct ties to phenomena in the physical universe.

So it is no surprise that the function which is the building block for solving differential equations comes up very often. In fact, some dub it "the most important function in mathematics".

Because the function is so important, we want to know more about it. One question of interest is: what is the value of \exp(1)? This is useful because one of the properties of exp (which we can prove using the definition we started with) is that \exp(x+y)=\exp(x)\exp(y). Using this, we can show that \exp(n)=\exp(1)^n for every integer n (where taking a power is simple repeated multiplication). In other words, knowing the value of the function at 1 allows us to find its value for every integer. So we give the value of \exp(1) a name. The name we choose is e. That's what e is – the value of the exponential function at 1.

The importance of e can be understood by understanding the importance of the exponential function, which itself can be understood by understanding the importance of differential equations. That understanding can come from some experience with their applications; the examples I gave above might help.

In fact, if we extend the definition of taking a power a bit, we will find that for every real number x, we have \exp(x) = e^x, not just for integer x. This is why the exponential function is often written e^x instead of \exp(x).

The exponential function is also where π comes from. If we look at it as a complex function, we find that it is periodic – there is a specific number p\in\mathbb{C} (the smallest with this property) such that for every z\in\mathbb{C}, we have \exp(z+p)=\exp(z).
This number happens to be purely imaginary, so if we divide it by 2i, we get a real number. This real number is what we call π. This way of looking at π – as the period of the most important function in mathematics (divided by 2i) – is much more fundamental, and better explains why π comes up so often, than definitions based on the girth of arbitrary geometric shapes we might scribble.

It's also noteworthy that the exponential function is reminiscent of the blind men and the elephant. It behaves differently, and seems to be a different thing, if we look at it from different perspectives. On the positive real axis, it is rapidly growing. On the negative real axis, it is rapidly shrinking. On the imaginary axis it is neither growing nor shrinking – it is periodic, repeating the same values in a cycle. Which nature of the exponential function comes to light depends on the specific differential equation we use it to solve. That's why some of the applications I mentioned exhibit growth or decay, and some exhibit rotation and cycles. In fact, the well-known periodic functions sin and cos can be seen as projections of what the exponential function does along the imaginary axis.

We've defined e as the value of the function at 1 – a real number – and we've defined π using the period of the function along the imaginary numbers. It should come as no surprise, then, that e often comes up in applications dealing with growth and decay, and π often comes up in applications dealing with cycles and circularity. They are two sides of the same coin.
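These claims are easy to check numerically. A minimal sketch in Python (Euler's method is the assumption here, chosen purely for transparency; any standard integrator would do):

    import cmath
    import math

    def euler_exp(x, steps=1_000_000):
        """Approximate exp(x) by integrating y' = y from 0 to x, with y(0) = 1."""
        y, h = 1.0, x / steps
        for _ in range(steps):
            y += h * y   # Euler step: the change in y is proportional to y itself
        return y

    print(euler_exp(1.0), math.e)    # ~2.71828 both: e is just exp(1)
    print(cmath.exp(1j * math.pi))   # ~(-1+0j): half a period of exp gives -1
    z = 1.3 + 0.7j
    print(abs(cmath.exp(z + 2j * math.pi) - cmath.exp(z)))   # ~0: the period is 2*pi*i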
Oskar Klein (1894-1977)

Oskar Klein was the youngest son of Sweden's first rabbi, Gottlieb Klein, who was originally from the southern Carpathian region. Gottlieb Klein received his doctorate from Heidelberg and moved to Sweden in 1883. He evidently instilled an interest in learning in his young son, as Oskar became quite fond of biology at an early age. This interest changed to chemistry around the age of 15, and soon after, in 1910, Svante Arrhenius, seemingly at the behest of Gottlieb, invited Oskar to work in his laboratory at the Nobel Institute. Here he took up an interest in solubility, and he published his first paper in 1912 on the solubility of zinc hydroxide in alkalis. This was the very same year that he finished his secondary education. He waited, however, until 1914 to take the University exam.

Arrhenius wanted to send Klein to work with Jean-Baptiste Perrin in his laboratory at the University of Paris, but the plan was foiled by the outbreak of World War I. Klein found himself caught up in the tempest and saw military service in 1915 and 1916. After his service concluded, but with the war still raging, he returned to work with Arrhenius. Their work now centred on studying the dielectric constants of alcohols in various solvents. During this particular stay in Stockholm, he met Hendrik A. Kramers, who, at the time (1917), was a student of Niels Bohr in Copenhagen. Kramers and Klein met several times during the next few years, both in Stockholm and in Copenhagen, which was to be Klein's next destination.

In 1917 Klein received a fellowship to study abroad and, subsequently, arrived in Copenhagen in 1918. Over the course of the next two years he would travel between Stockholm and Copenhagen performing work for both Bohr and Arrhenius, spending the summer of 1919 with Kramers in Copenhagen, and finally returning to Stockholm in 1920. But that was not to be the end of his Copenhagen experience. In fact, it was merely the beginning. Bohr traveled to Stockholm in 1920 to visit Klein and convinced him to return to Copenhagen once more to work at Bohr's Institute. Klein agreed and began what would prove to be quite a fruitful relationship that eventually would lead him to his first teaching position.

Around this time, Bohr was working with Svein Rosseland on the statistical equilibrium of a mixture of atomic and free electrons. At the time, it was believed that electrons colliding with atoms always lost energy. However, Klein, in conjunction with Rosseland, introduced "collisions of the second kind," in which the electrons actually gain energy. Klein continued his work on the other side of the 'molecular aisle' by turning his attention to ions. In fact, this led him to his thesis research, in which he studied the forces between ions in strong electrolytes using Gibbs' statistical mechanics. The result was a generalized formulation of Brownian motion. He defended his doctorate in 1921 at Stockholm Högskola, where the opponent was Erik Ivar Fredholm, the mathematical physicist best known for his work on integral equations and spectral theory. After his successful defence, Klein returned to Copenhagen, later assisting Bohr on a trip to Göttingen. Around this time Klein turned to publishing semi-popular writings on physics. His first work in this new arena was a philosophical paper refuting an objection to relativity theory by Swedish philosophers. Not surprisingly, it was around this time that he began to look for a job.
In 1923, Oskar Klein married Gerda Agnete Koch and moved to Ann Arbor, Michigan to take up a post at the University of Michigan, a post he won with no small thanks to his venerable friend Niels Bohr. His first work in Ann Arbor dealt with the anomalous Zeeman effect, a problem that arose out of the fact that no one at the time understood the behavior of atoms in a magnetic field. The classical Zeeman effect was explained, in a nutshell, as the splitting of spectral lines by the magnetic field. The problem was that the classical theory only effectively described atoms with a total electron spin of zero. The difference can be seen in the Hamiltonians of the two. For the time (1923), this was a fairly large problem to tackle, but Klein did not stop there. He went on to work on the interaction of diatomic molecules with precessing electrons, studying the angular momentum within the molecule itself. The following year, in 1924, he taught a course on electromagnetism and lectured on an electric particle in a combined gravitational and electromagnetic field. This was the beginning of his landmark work on a unified field theory. Klein chose to solve the problem by essentially extending his work to a fifth dimension, though his early unification ideas centred around quantum physics as the catalyst. After a time Klein argued less and less that quantum physics could lead to a unified picture; in fact, he later abandoned the idea entirely. However, he did see the possibility of unification in five dimensions, which seems to have been present in his initial attempt. At this time, Klein apparently was unaware of the work of Theodor Kaluza. Kaluza, in 1919, sent a paper to Albert Einstein proposing a unification of gravity with Maxwell's theory of light. Einstein initially was uninterested in the paper, but later realized the highly original ideas contained within it and encouraged Kaluza to publish; in fact the paper was communicated by Einstein himself on 8 December 1921. In 1925, Klein returned to Copenhagen and contracted hepatitis. He was ill for half a year, though he was visited by Heisenberg in July of 1925 and Schrödinger in January of 1926, around the time he was finally able to return to work. It was then that he finally became aware of Kaluza's work, which Wolfgang Pauli brought to his attention. Klein's adaptation of Kaluza's work had a major difference from the original in that the extra, fifth dimension was curled up into a circle on the order of the Planck length, 10⁻³³ cm. It is important to note, however, that the extra dimension, though curled up, was still Euclidean in nature. Basically, the fifth coordinate was not observable but was a physical quantity conjugate to the electric charge. As Kragh explains, Klein attempted to explain the atomicity of electricity as a quantum law, and also attempted to account for the electron and the proton. Klein assumed the fifth dimension to be periodic, with a period on the order of the Planck length. Klein's results were published in Nature in the autumn of 1926 and generated interest from such eminent theorists as Vladimir Fock, Léon Rosenfeld, Louis de Broglie, and Dirk Struik. Unfortunately, despite a lot of initial interest in unification, most physicists eventually went on to more promising and experimentally testable research, leaving Kaluza-Klein theory to be explored by another generation of physicists nearly half a century later.
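A minimal sketch of the mechanism in modern notation (my addition; Klein's own 1926 presentation differs in detail): if the fifth coordinate is periodic, $x^5 \sim x^5 + L$, single-valuedness of the wave function forces a Fourier expansion in that coordinate, and the discrete fifth component of momentum appears in four dimensions as a quantized charge:

$$\Psi(x^\mu, x^5) = \sum_{n=-\infty}^{\infty} \psi_n(x^\mu)\, e^{2\pi i n x^5/L}, \qquad p_5 = \frac{2\pi\hbar n}{L}, \quad n \in \mathbb{Z}.$$

The smallness of $L$ (of order the Planck length) thus does double duty: it hides the extra dimension from observation, and it makes the unit of charge finite, which is the sense in which the fifth coordinate is "conjugate to the electric charge".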
In Klein's own words:- Dirac may well say that my main trouble came from trying to solve too many problems at a time. It was also in 1926 that Klein was appointed as docent at Lund University and became, for the next five years, Bohr's closest collaborator, both on correspondence and complementarity, and apparently contributed to the development of the uncertainty principle, as Heisenberg recalled:- After several weeks of discussion, which were not devoid of stress, we soon concluded, not least thanks to Oskar Klein's participation, that we really meant the same, and that the uncertainty relations were just a special case of the more general complementarity principle. In fact, 1926 was a banner year for Klein. In addition to finally recovering from the hepatitis and becoming docent at Lund, it was in this same year that he made his next great theoretical breakthrough. In a paper in which he determined the atomic transition probabilities (prior to Dirac), he introduced the initial form of what would become known as the Klein-Gordon equation. It is interesting to note that this equation appeared exactly as it has been written in David Bohm's 1951 book Quantum Theory but was not there called the Klein-Gordon equation. However, Bethe and Jackiw's Intermediate Quantum Mechanics, originally written in 1964, does refer to the same equation as the Klein-Gordon equation. Klein and Walter Gordon were thus eventually honoured with having the equation named after them, though it seems to have taken over a quarter of a century to receive the honour. Oddly enough, Schrödinger himself privately developed a relativistic wave equation from his original wave equation, which, in reality, was not that difficult to do, and did so prior to Klein and Gordon, though he never published his results. The trouble came when the equation did not yield the correct fine structure of the hydrogen atom, and when Pauli introduced the concept of spin a year later (1927). The equation turned out to be incompatible with spin and, as a result, is only useful for calculations involving spinless particles. But, nonetheless, it was an important point in quantum theory and, along with his unification theory, was to ensure a lasting legacy for Klein and cemented 1926 as a pivotal year in his life.
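For reference (my addition, in modern notation), the free Klein-Gordon equation reads:

$$\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2}\right)\psi = 0,$$

which is just the relativistic relation $E^2 = p^2 c^2 + m^2 c^4$ under the operator substitutions $E \to i\hbar\,\partial_t$ and $\vec p \to -i\hbar\nabla$. Being quadratic in the energy and carrying no spin degrees of freedom, it cannot reproduce the hydrogen fine structure, which is the failure described above.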
In the years following 1926, Klein turned to teaching and continued his research, though possibly at a reduced pace. Brink [5] quotes a friend and mentor of Klein as having said:- You will now fulfill the words: go and teach the people. Your great pedagogical talents always were one of your strongest qualities. I am not of the opinion that finding new laws of nature and indicating new directions is one of your great strengths, although you always have developed a certain ambition in this direction. In 1927, Klein was appointed Lektor in Copenhagen but nonetheless continued his research, working with Pascual Jordan on second quantization in quantum mechanics. In his work with Jordan, he demonstrated the close connection between quantum fields and quantum statistics. It was known that second quantization guarantees that photons obey Bose-Einstein statistics, but Klein showed that second quantization is not confined to free particles only. He and Jordan showed that one can quantize the non-relativistic Schrödinger equation and, in honour of this work, he was the recipient of yet another named mathematical tool, the Jordan-Klein matrices. In subsequent years he collaborated with the Japanese physicist Yoshio Nishina, who was in Copenhagen on an extended research visit, and worked on the problem of Compton scattering of a Dirac electron. Despite the so-called Klein paradox (the positron not yet being understood by physicists), he was able to convince physicists of the soundness of Dirac's relativistic wave equation. His continued work included the quantum mechanics of the second law of thermodynamics and Klein's lemma. In 1930, he was offered Fredholm's position at Stockholms Högskola and finally returned to his native city, taking up a post that he held until his retirement in 1962. During the 1930s, Klein helped many refugee physicists who were expelled from Germany and other nations, largely because of their Jewish heritage. Among the many he helped was Walter Gordon, who would later join Klein as co-beneficiary of the named equation we have just discussed. In 1943, Klein also aided in Bohr's escape from Copenhagen. During the 1930s Klein also found time to attend conferences, not least the 1938 Warsaw Conference, where he spoke on (almost) non-Abelian gauge theories. This conference included some of the leading theorists of the day, including Sir Arthur Eddington, Eugene Wigner, and others. It was at this conference that Klein suggested that a spin-1 particle mediated beta decay and played a role in weak interactions in a manner similar to the photon in electromagnetism. Klein's hypothesis was yet another crack at a unified field theory, this time in an attempt to unify the strong, weak, and electromagnetic forces. The work was not noticed until nearly twenty years later, when it was resurrected by Julian Schwinger in 1957. In the 1940s Klein worked on a wide variety of subjects, including superconductivity (with Jens Lindhard in 1945), biochemistry, universal β-decay, general relativity, and stellar evolution. Sometime after 1947 he, and independently Giovanni Puppi, realized that both the electron and the μ-meson were "weak" particles. In the 1950s and 1960s Klein remained active, addressing the 11th Solvay Conference in 1958, developing a new model for cosmology in conjunction with Hannes Alfvén in 1963, and tackling Einstein's General Relativity in a paper published in Astrophysica Norvegica in 1964. During his later years, he also became very interested in philosophy, and especially in analogies between science and religion. In addition, he took to writing a few popular books, most of which are out of print. Oskar Klein died in Stockholm, one of the finest theoretical physicists of the twentieth century.
Continued elsewhere

Saturday, August 30, 2008
Mismanagement and grief (unhappy anniversary)
As I remarked on another blog: And as Auden remarked on September 1, 1939:
Exiled Thucydides knew
All that a speech can say
About Democracy,
And what dictators do,
The elderly rubbish they talk
To an apathetic grave;
Analysed them all in his book,
The enlightenment driven away,
The habit-forming pain,
Mismanagement and grief:
We must suffer them all again.

Friday, August 29, 2008
Power has made reality its bitch
Words in a Time of War: Taking the Measure of the First Rhetoric-Major President. By Mark Danner

Wednesday, August 27, 2008
Ubiquitous PubMed

Sunday, August 24, 2008
Priest Off!
From Crackle: Priest Off!

Wednesday, August 20, 2008
Word pair of the day
irenology (peace studies) in contrast to: polemology (conflict studies) (thanks to Wikipedia).
World War III 2.0
More Gerson:

Friday, August 15, 2008
Social construction is not arbitrary
In a rather stupid discussion on tggp's blog I managed to articulate a point about social construction that I have not previously seen made in any reasonable and concise form, so I'm pulling the thought out and expanding on it here, for the edification of the world. The point I was trying to make is that while many things are socially constructed, that doesn't imply that they are 100% arbitrary. We make the world but we do not make it just as we please. Whatever is constructed must conform to the structure of physical reality and of human cognition. So, for instance, while religion is a paradigmatic example of a socially constructed system, with different cultures having very different religions, they all have some broad similarities based on the cognitive and cultural role of religion (eg, to use one of Boyer's examples, all religions posit supernatural agents that care about human action -- there is no religion that has indifferent supernatural beings). When it comes to the social construction of science, there is a great deal more confusion, which I'm not going to clear up in a blog post. Without going into the details, suffice it to say that even the most radical of constructionists of science (like Bruno Latour) don't believe that scientists can just make science anything they want to. The subject of the original discussion was the ontological status of mental illness, which seems like a great example -- it's clearly a socially constructed category, since what counts as a mental illness varies greatly over time (homosexuality used to be one, now it's not, for instance). Yet it's also quite clear that in at least some forms of mental illness there is something objectively physical going wrong, although we don't know what it is. So our categories for them, as detailed in the DSM-IV, are quite obviously made up but also reflect something going on in reality. People like Thomas Szasz argue that it's entirely made up and therefore illegitimate, but anyone who has had to deal with a genuinely disturbed person is not likely to buy into his view. Anyway, here's the interesting part of the earlier discussion, initiated and provoked by the sort of rampaging halfwit-convinced-of-their-own-genius that one finds on the internet.
Do you believe agents of the Party can fly around the room if they so will?
Saying something is socially constructed does not mean that it is wholly arbitrary. This is a common confusion. The quote about "agents of the Party" is funny and telling. You assume that society is some oppressive outside force. It isn't. You're soaking in it. You make it and it makes you. And, to back off a little bit -- not everything is a social construct. Reality is what it is (an instantiation of the Schrödinger equation, let's say). Some concepts are biologically innate (color, objects, up vs down). But everything interesting that we talk about is a sociocultural construct. Not arbitrary, because it all rests on the other layers, but highly malleable and subject to all sorts of primate politics.
This produced some sputtering insults from melendwyr that I won't bother to reproduce. Me again:
Let's see. You said that social construction implies that people can fly at will. I pointed out that that is not, in general, what social constructionists believe. To repeat: saying something is socially constructed does not mean that it is wholly arbitrary. You haven't produced anything that supports your position over mine. ...There should be no doubt that some things are socially constructed. Institutions like the US Government or Microsoft are built out of people's social practices, and obviously could be constructed differently than they are -- but not arbitrarily (it would be hard, for instance, to have a government with sovereignty over left-handed people rather than over a particular geographic area). To take a more challenging example, take Newton's laws of motion. Are these social constructs? Well, sort of -- that's why we attribute them to Newton, and he himself admitted to standing on the shoulders of giants, who presumably were also part of society. Also, the fact that we call them "laws" -- an implied and imperfect metaphor based on human law -- is significant, as is the fact that they are an imperfect approximation to the actual regularities of the physical world. But that doesn't mean that Newton pulled them out of his ass, or that he could have just as easily come up with an inverse-linear or inverse-cube law of gravity.

Monday, August 11, 2008
Job titles I envy, #2 in a series
This job title is currently held by Pascal Boyer, whose book Religion Explained I have been recommending for a couple of years now, and whom I occasionally mention here. Here are slides and audio (iTunes required, I think) of a talk Boyer gave at a Transhumanism conference sponsored by the Templeton Foundation (!), called "Considering the evolved mind: Constraints on transhumanism". And here's a pretty good Jonathan Miller interview with Boyer. And some papers here, as well as the introduction to a special issue on the relationship of brain and self that looks very interesting.

Saturday, August 09, 2008
Unitarian Jihad
This has been around awhile, but it's new to me and amusing: Of course, Mencius Moldbug thinks that the Unitarians already rule the world. Which would explain why peace, harmony, tolerance, and rationality are so widespread.

Saturday, August 02, 2008
No sect owns child molestation
My regular reader Michael was giving me grief about my alleged anti-Catholic prejudices, based on my mentioning priestly pedophilia and not casting a similar eye on my own faith, such as it is.
The answer to this ridiculous criticism is that I am not particularly anti-Catholic; I am anti-authoritarian, and the Catholic Church simply happens to be one of the oldest and most powerful authoritarian institutions in the world. The pedophile stuff is really just a typical example of the inevitable abuse of authority. I believe I also said something to the effect that there is no strong central authority in Judaism, which is roughly true, although various branches of Judaism have governance structures that are more or less authoritarian. And as you would expect, the more authoritarian branches have the same sort of problems. I can't say I am the least bit surprised by this.
I came across some analogous structure between diffusion and the quantum mechanical particle (Schrödinger eq.). I have seen that similar questions have been asked, but the probability flux and mass/particle conservation were not addressed in those. In diffusion, the particle flux $\vec{j}(\vec{r},t)$ is related to the gradient of the particle density $\vec{\nabla} n(\vec{r},t)$ and the diffusion coefficient $D$ via Fick's first law $$\vec{j}(\vec{r},t) = -D \nabla n(\vec{r},t) \tag{1a} $$ When this is combined with the particle conservation condition $$ \frac{\partial n(\vec{r},t)}{\partial t} = - \nabla\cdot \vec{j}(\vec{r},t), \tag{2a}$$ one obtains the diffusion equation (Fick's second law) $$ \frac{\partial n(\vec{r},t)}{\partial t} = D \nabla^2 n(\vec{r},t). \tag{3a}$$ Now I find it quite puzzling to compare this with the analogous expressions from non-relativistic quantum mechanics. The probability flux is defined by $$ \vec{j}(\vec{r},t) = \frac{\hbar}{2m i}\left[\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right]\tag{1b},$$ keeping in mind that the QM particle density is $$n(\vec{r},t)=\Psi^*\Psi\tag{4}.$$ Since $\nabla n = \Psi^*\nabla\Psi + \Psi\nabla\Psi^*$, the bracket in (1b) essentially differs from $\nabla n$ in (1a) only by the "-" sign of the second term. In QM, the continuity condition (= conservation of particle probability) $$ \frac{\partial n(\vec{r},t)}{\partial t} = - \nabla\cdot \vec{j}\tag{2b},$$ is usually obtained from (1b) and the time-dependent Schrödinger equation (free, divided through by $\hbar$): $$ i \frac{\partial \Psi(\vec{r},t)}{\partial t} = -\frac{\hbar}{2m} \nabla^2 \Psi(\vec{r},t) \tag{3b}. $$ So in both settings we have two independent equations of close structural similarity from which a third one follows. In both cases (1) defines a flux, (2) a continuity/conservation condition, and (3) the time development of a density function. I am asking myself whether there is a theory of more general structure from which cases (a) and (b) follow as specific cases. I am thinking of something like a Poisson bracket formalism (or the minimisation of an action and the like) that contains both cases as special cases. Can anyone point me to something like that? In particular I would be interested to understand how, in such a formalism, the "-" sign in the definition of the flux addressed above can arise. I am asking this because I suspect some physical interpretation or significance of $\nabla n$ in the QM context of the flux. I am aware of similar questions on PSE about the analogy between the SE and the diffusion equation, but none has addressed particle conservation and flux, and in addition I have found no comments that would hint at a "common theory" that would unify both in the sense I am asking for. Edit: to make the analogy better visible I attach this table $$ \begin{array}{c|c|c} (a) & (b) & \\ \hline \vec{j} = -D \nabla n & \vec{j} = \frac{\hbar}{2m i}(\Psi^*\nabla\Psi - \Psi\nabla \Psi^*) & (1) \\ \frac{\partial n}{\partial t} = - \nabla\cdot \vec{j} & \frac{\partial n}{\partial t} = - \nabla\cdot \vec{j} & (2) \\ \frac{\partial n}{\partial t} = D \nabla^2 n & i \frac{\partial \Psi}{\partial t} = -\frac{\hbar}{2m} \nabla^2 \Psi & (3) \end{array}$$ with $n=\Psi^*\Psi$. I'm not sure how to focus on your question your way, but first you must compare apples with apples and use the hydrodynamic formulation of QM introduced by Madelung in 1926.
The key point here is that the Schrödinger equation is complex, so it has two dependent variables, unlike the real diffusion equation; it is basically two equations, a familiar Euler hydrodynamic one, but also a novel "Hamilton-Jacobi" one. The idea is to rewrite Schrödinger's wave function in polar form, $$ \Psi=\sqrt{n}\, e^{iS/\hbar}, $$ whereas the diffusion equation has only one dependent variable, n. The key point is that probability flow is not driven by just the probability density n, as in Fick's law, but mainly by the phase S (note $\vec v= {1\over m} \nabla S$): $$ \vec j= {n\over m}\nabla S. \tag{1b'} $$ Thus (2b), the conservation of probability equation, resembles (2a), conservation of particles, in the abstract, but works quite differently: $$ 0=\partial_t n+\nabla \cdot \vec j = \partial_t n+ (n\nabla^2 S + \nabla n \cdot \nabla S)/m. \tag{2b} $$ This Euler equation is only the imaginary part of Schrödinger's equation! (And, as you might have marveled in school, it doesn't care a bit about the potential V.) Nevertheless, the big Kahuna is the real part of that equation (the "Quantum Hamilton-Jacobi" equation), $$ 0=\partial_t S+ \left( |\nabla S|^2 /2m+V +Q\right), \tag{4b} $$ where $$ Q= - {\hbar^2\over 2m}{\nabla^2\sqrt{n}\over \sqrt{n}} $$ is Bohm's celebrated quantum potential. It is amazing what an imaginary unit can do to an equation, but there it is. (Actually, your (3b) is spurious: you willfully tossed out V by hand, but, as you see here, it influences the flow of S and hence of n after all.) Looking at the wavepacket might, or might not, help your intuition about quantum flows. Suffice it to say that, in phase space, they are known to exhibit astounding phenomena, thoroughgoingly different from material flows (Steuernagel et al.). But you know QM is weird...
Comment (Rudi_Birnbaum, Jul 10 '20): Thank you very much! As my question was formulated quite fuzzily (not helped by my own thoughts), I think this is the best answer I could hope for, mostly since it finally revealed what I had also been searching for for quite a while: namely the imaginary velocity $v_I=-\frac{\hbar}{2m} \frac{\nabla n}{n}$ that goes into the quantum potential $$ Q = -\frac{1}{2}m v_I^2 + \frac{1}{2}\hbar\nabla\cdot v_I.$$
Comment (Rudi_Birnbaum): Well, one more question: would you know a classical (or other) interpretation of the second term, with the divergence of $v_I$?
Comment (Cosmas Zachos): Not really... there is copious bibliography on such things...
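A quick numerical sanity check of the identity at the heart of the answer (my sketch, plain NumPy, natural units ħ = m = 1, with an arbitrary illustrative test state): for any Ψ = √n e^{iS/ħ}, the textbook flux (1b) reduces to j = n∇S/m.

import numpy as np

hbar, m = 1.0, 1.0
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# Hypothetical test state in Madelung (polar) form:
# Gaussian density, quadratic phase (a linearly sheared velocity field).
n = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
S = 0.1 * x**2
psi = np.sqrt(n) * np.exp(1j * S / hbar)

# Probability flux from the standard definition (1b) ...
grad_psi = np.gradient(psi, dx)
j_qm = (hbar / (2j * m)) * (psi.conj() * grad_psi - psi * grad_psi.conj())

# ... and from the Madelung form j = n * grad(S) / m
j_madelung = n * np.gradient(S, dx) / m

# Agreement up to finite-difference error (~1e-6 on this grid)
print(np.max(np.abs(j_qm.real - j_madelung)))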
The Neutrinos Escape from Black Holes. By Alfonso León Guillén Gómez, independent scientific researcher. All rights reserved. Wednesday, January 25, 2012, 1:07. SafeCreative # 1201250966392. This paper develops my thesis proposal in Universe Today, Disqus, on November 17, 2011 (Neutrino still breaking speed limits. Guillen). This work can also be read in Spanish. There are virtual particles with speed > c, such as the virtual photon and the virtual graviton, which, owing to their kinetic energy, pass beyond barriers of electric or gravitational potential, in particular the event horizon of a black hole, according to Newtonian mechanics, because their kinetic energy is greater than the potential energy of the barrier. This phenomenon is not a quantum tunneling effect, as was supposed. In relativity the speed cannot be greater than c, since in that case the particle travels into the past, which violates Novikov's principle and the law of causality. Therefore, the physics valid for these particles is either Newtonian mechanics or the Superluminal Relativity of Anastasovski, and the event horizon does not exist for the neutrino, just as it does not exist for the virtual photon and the virtual graviton, because their speed exceeds c, the threshold escape velocity at which the kinetic energy equals the gravitational potential energy. PACS: 01.65.+g History of science; 03.30.+p Special relativity; 04.20.-q Classical general relativity; 04.20.Dw Singularities and cosmic censorship; 04.25.dg Numerical studies of black holes and black-hole binaries; 04.60.-m Quantum gravity; 04.70.Bw Classical black holes; 04.70.Dy Quantum aspects of black holes, evaporation, thermodynamics; 05.20.Gg Classical ensemble theory; 05.60.Cd Classical transport; 05.60.Gg Quantum transport; 13.15.+g Neutrino interactions. Table of Contents: 1. Introduction 2. Black holes 3. Evaporation of black holes 4. The neutrinos do escape from black holes 5. Conclusions. 1. Introduction In the Gran Sasso research laboratory, Italy, in the Oscillation Project with Emulsion-tRacking Apparatus (OPERA), a group of scientists accidentally discovered that the muon neutrino travels in vacuum with a velocity greater than c, by about 0.25 parts in ten thousand. This result was obtained according to the relation (muon neutrino velocity − c)/c = (2.37 ± 0.32 (statistical uncertainty) +0.34/−0.24 (total systematic uncertainty)) × 10⁻⁵ [1]. These scientists were experimentally investigating the first direct evidence of oscillation between the muon and tau neutrinos [2], that is, the conversion of one into the other due to changes in their mass content, a phenomenon that occurs only in particles with mass. However, in February 2012 they found defects in the infrastructure of the experiment that obliged them to repeat it. Such failures, according to an OPERA spokesman (differing from the official account), were a faulty connection of a fiber-optic cable attached to a small box that converts the optical signal into an electronic one, and a correction needed for OPERA's master clock. Thus, the superluminal speed of the neutrino is in dispute. However, it is strange that the failures could have remained hidden during the long period over which the experiment was repeated, before OPERA reported its findings. The results obtained in 2008, 2009, 2010 and 2011, in different repetitions, were consistent, even though the failure of the cable depends on its inclination and torsion, which quite probably vary with time.
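To make the quoted figure concrete (my arithmetic, not part of the paper; variable names are illustrative): over OPERA's 730 km baseline, a fractional speed excess of 2.37 × 10⁻⁵ corresponds to an early arrival of roughly 58 ns, consistent with the ~60 ns anomaly OPERA reported.

c = 299_792_458.0          # speed of light, m/s
baseline = 730_000.0       # approximate CERN - Gran Sasso distance, m
excess = 2.37e-5           # (v - c) / c as reported by OPERA

flight_time = baseline / c              # ~2.44 ms at light speed
early_arrival = flight_time * excess    # ~5.8e-8 s
print(f"{early_arrival * 1e9:.1f} ns early")   # ~57.7 ns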
Furthermore, while the scientific community was waiting for OPERA to repeat the experiment after making the corrections, it was ICARUS, a rival group of OPERA, that repeated it, and in June 2012 the CERN spokesman said that the speed of neutrinos is in fact lower than c; previously the ICARUS spokesman had said that OPERA did not know how to run the experiment. More worrying, a few weeks earlier a group comprising almost half the members of OPERA rebelled against their leader, Dr. Antonio Ereditato, and forced him to resign. Thus Ereditato joins Tom Van Flandern and Paul Marmet, who for their disagreement with Einstein's relativity were relentlessly persecuted. The new result on the speed of the neutrino has had low coverage in the science magazines. The experiment of direct observation of neutrino oscillation is of great complexity. At CERN in Geneva, in the Super Proton Synchrotron (SPS), protons are accelerated to the maximum energy possible for this type of experiment, 400 GeV/c, with a cycle of 6 s. In the target chamber (TC) these protons are directed by two magnetic dipoles against graphite targets 2 m long, in two extractions: one takes place in chamber B and the other in chamber C, separated by 50 ms; each extraction lasts 10.5 μs [3]. The signal used for the release of protons is Coordinated Universal Time (UTC), and the length of each launch is 524 ± 5 ns. The two-extraction system generates two distributions of protons, which in turn produce, in time, two distributions of neutrinos, on departure at CERN and on arrival at Gran Sasso. This redundancy serves the estimation of statistical and systematic uncertainties and the statistical adjustment by the maximum-likelihood method [4], which allows the calculation of the speed of the neutrinos. The products of the collision of protons against graphite in the TC are charged mesons (hadrons composed of a quark-antiquark pair), some positive and some negative, highly unstable, which decay (kaon → 2 pions or kaon → 3 pions; charged pion → muon + neutrino; neutral pion → 2 gamma rays) inside a rectilinear decay tunnel (DT), under vacuum, 1095 m in length. Through this tunnel the muons and neutrinos proceed to a hadron stop (18 m in length). The particles that pass continue to a first muon detector (5 m in length), connected through a 67-m pipe to a second muon detector (5 m in length). From this last detector the almost pure beam of muon neutrinos emerges, with an electron-neutrino contamination of ~0.9% [5]. The beam was cleaned by a magnet placed in each detector, which separated the neutrinos from the muons that had escaped the hadron stop: the muons were deflected according to their charge, while the neutrinos traveled straight on. The muon-neutrino beam (νμ → ντ) then traveled at constant speed along a straight underground path of 730 kilometers to the OPERA detector in the Gran Sasso laboratory, with an average energy Eν ~ 17 GeV. In this experiment the energy of the neutrinos depended on the energy of the pions, and that in turn on the energy of the protons at the moment of their collision with the graphite; in general, it depends on the energy of the triggering event of the production process.
The detection of the neutrino beam at Gran Sasso proceeds through the charged-current weak interaction, i.e. via the W± boson (the other channel is the neutral-current interaction via the Z⁰ boson), with the atomic electrons of the Gran Sasso detector. The minimum energy required for this interaction is > 11 GeV [6]. The energy distribution of the neutrinos over the range of the experiment (average ~17 GeV) had no effect on their speed, since the speed at higher energy (average ~43 GeV) was the same as at lower energy (average ~14 GeV) [7]. The constant speed of the neutrinos, together with their positive-energy mass, places the neutrino in the category of particles that are not tachyons [7]. At CERN and at Gran Sasso two identical timing systems are installed to measure UTC, each consisting of a GPS receiver, a Septentrio PolaRx2e [8], and an atomic clock, a Symmetricom CS4000 [9]. The clocks are synchronized by GPS with an error of 2 ns [10]. This experiment, with several modifications, was performed in 2008, 2009, 2010 and 2011 and has supplied high-accuracy statistics for calculating the speed of the muon neutrino [1]. In the paper of September 22 and in its revision of November 17, 2011, submitted to the Journal of High Energy Physics and also deposited in the arXiv digital database, the superluminal speed of the muon neutrino was confirmed with very high certainty; it was presented by Antonio Ereditato, OPERA spokesman, on behalf of 179 scientists, mainly from Europe and Asia, belonging to 48 scientific institutions in Germany, Belgium, North and South Korea, Croatia, Russia, France, Greece, Italy, Israel, Japan, Turkey and Switzerland [1]. In the November experiment a new beam was used, with particle bunches of about 3 nanoseconds in duration separated by up to 524 nanoseconds. Compared to September's these are narrower and shorter, the earlier ones being 10 nanoseconds long, a duration considered a possible source of error. The measurement of the speed of the muon neutrino is thereby more accurate; in addition, accuracy was improved by a lower beam intensity: "only 20 neutrino events have been collected by OPERA in this new test, compared to 15,000 analyzed in the former" [11]. The neutrino, which exists in the electron, muon and tau states, interchangeable during its oscillation, was postulated by Wolfgang Pauli in 1930 and first observed in 1956. Neutrino oscillation was proposed in 1950 and observed in 1998. The neutrino is a lepton, an elementary particle without charge which, together with the quarks, is a constituent of matter; it experiences only the weak interaction and gravity, and is generated, for example, in beta decay. The neutrino has mass which, according to the four-momentum vector, is equivalent in energy to (0.24 eV, < 15.5 MeV) [12]. The muon neutrino resulting from the oscillation of the electron neutrino has a mass < 170 keV (in the OPERA experiment, the maximum value is 2 eV [1]). The first consequence of the OPERA experiment is that particles with mass can exceed c; therefore the postulate of Special Relativity asserting the impossibility of this is false.
The second consequence concerns my alternative theory of the existence in nature of superluminal speeds, such as the speed of the graviton: the model must explain the speed difference between particles of similar energies, as in the case of the muon neutrino, whose energy falls between the gamma-ray and near-ultraviolet (NUV) classes [13], and the photon. A possible determinant of the difference is the type of interaction each undergoes: the neutrino's speed > c may arise because, while the photon is subject to the electromagnetic and gravitational interactions, the neutrino is subject to the weak and gravitational interactions; as a result, the neutrino travels through the vacuum with little interaction while the photon travels with greater interaction, since the weak interaction is limited while the static electromagnetic interaction abounds. Another crucial consequence is that the neutrino escapes from black holes, since it meets the physical condition of traveling above c. 2. Black holes Building on Newton's corpuscular theory of light and his equations of motion, gravity and escape velocity, John Michell in 1783 postulated the existence of stars of very great mass that would be invisible because light could not escape their gravity. In 1915, Albert Einstein showed that light is indeed subject to gravity, and Karl Schwarzschild, solving the equations of General Relativity for a spherical body, confirmed that for a star of given mass there is a radius, called the event horizon, within which gravity traps light. In 1930, Subrahmanyan Chandrasekhar set the critical mass at 1.5 times that of the Sun, and in 1939 Robert Oppenheimer found that a larger mass can produce the gravitational collapse of the star. In 1967, Stephen Hawking and Roger Penrose proved that any solution of the equations of General Relativity for a collapsed star generates a singularity. In 1969, John Wheeler named the singularity a black hole [14, 15]. According to the equations of General Relativity, the black hole is physically defined by three qualities: mass, angular momentum and electric charge (the uniqueness, or "no-hair", theorem of Carter-Robinson). To determine it quantitatively requires 11 parameters: 1 mass, 1 electric charge, 3 of linear momentum, 3 of angular momentum and 3 of position [16]. Black holes, according to astronomical observations, are classified by their mass as [17]: - Supermassive, with several million times the mass of the Sun. This black hole sits at the center of galaxies of spherical, ellipsoidal or spiral form and sucks in matter in such great quantities that not all of it falls into the hole; the excess collects in a large accretion disk (the formation of one body from others) whose very high temperature turns the system into a quasar, whose core is the black hole, which emits an enormous amount of radiation and a strong magnetic field and produces two relativistic jets (at a speed close to c; perhaps reaching a superluminal speed, which has not yet been tested) above and below the disk. - Stellar, of more than 1.5 times the mass of the Sun. This type abounds in galaxies. Also, according to astronomical observations, there are binary black holes, each spinning around the other.
For example, the binary black hole in 3C 75 is composed of two supermassive black holes 25 thousand light-years apart, the cores of two merging galaxies located in Abell 400 (a cluster of galaxies catalogued by George Abell in the early 1950s); the system lies nearly 300 million light-years away [18]. The exact solutions of the equations of General Relativity give four possible theoretical types of black hole [14, 15]: - Schwarzschild: non-rotating, without electric charge. - Kerr: rotating, without electric charge. - Reissner-Nordström: non-rotating, with electric charge (with a low probability of existence). - Kerr-Newman: rotating, with electric charge (with a low probability of existence). According to the equations of General Relativity, the structure of a black hole comprises [14]: - The singularity, the point of collapse, without volume, hence of zero spacetime extent and infinite curvature, where all the mass of the black hole is concentrated at infinite density. In charged black holes the singularity has the shape of a ring. - The event horizon, the boundary surface whose radius depends on the mass. For a non-rotating black hole it is spherical; for a rotating one it is spheroidal. This boundary defines the interior, from which nothing, neither matter nor energy, including electromagnetic waves, can escape; there is therefore no communication between the inside and the outside of the black hole. This property of the event horizon arises because the escape velocity from the interior is greater than c, the speed limit according to Special Relativity. Additionally, the rotating black hole has [14]: - The ergosphere, of ellipsoidal shape, consisting of the space distorted by frame dragging due to rotation (the inner gravitomagnetic field) around the event horizon. - The static limit, between the ergosphere and normal space. Additionally, the charged black hole has [19]: - External to the event horizon, an electric field whose intensity at a great distance r is that of any point charge, Q/(4πε₀r²), and which manifests only in front of other electrically charged stars. - Internal to the event horizon, the Cauchy horizon, where the fall of particles into the singularity is suspended and which even has a region of stable orbits. In the spacetime outside the black hole there are these large zones of orbits [20]: - Stable, beyond 3 Schwarzschild radii (rs = 2GM/c²). The orbits are circular and persist over time; this is a safe zone. - Unstable, above 1.5 Schwarzschild radii: circular orbits which, under any disturbance, either escape tangentially to spacetime infinity or fall into the black hole. - The unstable photon sphere, in which electromagnetic waves rotate in circular orbits at exactly 1.5 Schwarzschild radii; under any disturbance the photons are launched tangentially to spacetime infinity or fall into the black hole. - Falling ellipses, below 1.5 Schwarzschild radii: elliptical orbits that fall ever closer to the event horizon until they cross it.
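To give these zones a sense of scale (my sketch, using only the formulas quoted above; the two masses are illustrative, taken from the text's stellar threshold and its "several million solar masses" for supermassive holes):

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0    # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

for name, M in [("stellar (1.5 M_sun)", 1.5 * M_sun),
                ("supermassive (4e6 M_sun)", 4e6 * M_sun)]:
    r_s = 2 * G * M / c**2   # Schwarzschild radius, rs = 2GM/c^2
    print(f"{name}: r_s = {r_s:.3e} m, "
          f"photon sphere = {1.5 * r_s:.3e} m, "
          f"stable orbits beyond {3 * r_s:.3e} m")

# stellar case: r_s ~ 4.4 km; supermassive case: r_s ~ 1.2e10 m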
3. Evaporation of black holes According to quantum mechanics, real and virtual particles escape from the interior of black holes to outer space, producing the global phenomenon of evaporation, in the manner of black-body radiation at finite temperature. This process is initially very slow but steady, so that with time, as the black hole loses mass, the evaporation accelerates until the hole disappears in a burst. The currently recognized cases are the following: - Hawking radiation consists of real photons and massive particles such as the neutrino, among others. It was anticipated in 1973 by the Russian physicists Yakov Zeldovich and Alexei Starobinsky, who demonstrated that a rotating black hole, due to the uncertainty principle, creates and radiates particles. In 1975, Stephen Hawking calculated, using the area theorem, that every type of black hole radiates, exactly like thermal black-body radiation. He built on the work of Jacob Bekenstein, who deduced that the area of the event horizon can be considered a measure of the entropy of the black hole, since this area increases with the matter absorbed; the black hole therefore has an entropy, the amount of disorder associated with the motion of all the particles within the horizon, equal to the area of the horizon divided by four times the square of the Planck length [21]. The temperature of the black hole is directly proportional to its surface gravity and inversely proportional to its mass, so the black hole must radiate to reach thermodynamic equilibrium between the spacetimes inside and outside the event horizon [22]. This radiation was observed in the laboratory in September 2010 by a team of scientists led by Franco Belgiorno of the University of Milan [23], and in the first half of 2011 it was treated by various astronomers for a Schwarzschild hole [24]. At every point of spacetime, inside and outside the event horizon, during the period of uncertainty, the oscillation of the zero-point energy associated with the vacuum constantly creates pairs of virtual particles, each pair composed of a particle and an antiparticle, which annihilate each other within that interval. In Hawking's original model the radiation does not really exist, since he assumed that nothing can escape the black hole: the pair of virtual particles originates in the vacuum outside the event horizon; the virtual particle (of positive energy) may fall in or escape, becoming a real particle in the external vacuum, and when it escapes it creates the illusion that it was radiated; the other particle (of negative energy) can fall in without annihilating a complementary virtual particle, because it too becomes a real particle, of positive energy, within the event horizon. With increasing black-hole mass these virtual-real changes, driven by the intense gravitational field, occur only at random. Within this process of apparent radiation the black hole loses mass, owing to the influx from outside of virtual antiparticles of negative energy that do not become real particles, an event more likely than the opposite, compensating the apparently emitted particles; the reduction of mass shrinks the event horizon and raises the apparent temperature, and consequently the apparent radiation grows [25, 26, 27]. According to Hawking, the lifetime of a black hole is of order 10⁷¹ (M/M☉)³ seconds [25]. Although the theory of Hawking's apparent radiation is based, in strictly mathematical terms, on the equations of General Relativity [26], the author prefers the alternative model under which the black hole really radiates.
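For reference (my addition; these are the standard formulas behind the proportionalities just described), the Bekenstein-Hawking entropy and Hawking temperature of a Schwarzschild hole are:

$$S = \frac{k_B A}{4\,\ell_P^2}, \qquad T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad A = 4\pi r_s^2 = \frac{16\pi G^2 M^2}{c^4},$$

so the entropy grows as $M^2$ while the temperature falls as $1/M$, which is why the evaporation accelerates as the hole shrinks.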
Although the author does not reject Hawking's original radiation, which does exist, it is not the main cause of this type of radiation: as a process of interaction of the black hole with the external vacuum, from the thermodynamic point of view the external vacuum does not "warm" the black hole but the opposite, i.e., the black hole "warms" the external vacuum, so the flow of radiation from the external vacuum toward the black hole is marginal compared with the flow from the black hole toward the external vacuum. The model of real radiation by the black hole is based on the theory of quantum tunneling, in which a virtual particle of velocity ≤ c, from a pair created by the oscillation of the vacuum behind the event horizon, crosses the gravitational potential barrier of the horizon despite a kinetic energy lower than the barrier's edge and emerges outside, violating the principle of Newtonian mechanics that a particle must have kinetic energy exceeding the barrier energy in order to cross it. The tunnel effect was, strictly speaking, formulated for electric potential barriers (impedance), but it is also applicable to the gravitational potential barrier. In quantum mechanics, due to wave-particle duality, one uses the Schrödinger equation, which assigns a certain probability that a particle, without surmounting the barrier, uses an energetic tunnel to cross it and pass to the other side. The author, however, proposes a new version that changes the mechanism of the real radiation: he suppresses the tunnel effect, which is not required, and explains that the virtual photon escapes due to its speed > c, since in that case the kinetic energy is greater than the gravitational potential energy of the barrier; the same occurs in front of an electric potential barrier. In the author's view, Nimtz achieves speeds > c in his experiments because he produced virtual photons by the technique of dielectric photonic barriers, which are of two types: the first barrier is the central part of a waveguide, a section sufficiently narrow, less than half the wavelength in both directions perpendicular to the propagation, which passes only the shorter wavelengths; the other barrier is the double prism, in which microwaves suffer total reflection inside the entry prism and the refracted residue crosses an air gap to the exit prism [28]. Nimtz said that the evanescent waves he produces are made up of virtual photons with superluminal speed, to which the author adds that such virtual photons surpass the electric potential not by tunneling but by their speed. Petar Anastasovski has found, as a result of his outstanding research in nuclear physics, a better understanding of nuclear phenomena if they are explained with speeds greater than c, and, the author underlines, in general a better understanding of quantum phenomena. Moreover, Anastasovski solves the mathematical problem of the Lorentz transformation for v > c while keeping c a constant of nature for all inertial observers, in his theory of Superluminal Relativity [28]. The scene where the radiation process occurs is spacetime taken as the union of three subspaces: the stationary spacetime interior to the event horizon + the non-stationary spacetime interior to the black hole + the stationary spacetime exterior to the event horizon.
The internal spacetime is associated with the internal vacuum and the external spacetime with the external vacuum [26]. A stationary spacetime "is a spacetime in which one can find a natural coordinate system where none of the components of the metric tensor depends on the time coordinate" [29]. The real radiation mechanism, according to the author, is that on the boundary of the event horizon, within the internal vacuum, the uncertainty principle creates a pair of virtual particles. The virtual antiparticle (negative energy), due to the extreme brevity of its existence, decays rapidly into a real particle; but, in terms of the stochastic process, with probability close to the maximum (p = 1) when the pair is created close to the limit of the horizon, it may escape, provided it passes the horizon while still a virtual particle. Meanwhile the virtual particle (positive energy) passes into the external vacuum, thanks to its longer span of existence; but, again stochastically, with a probability proportional to the distance of its point of creation in the internal vacuum from the limit of the event horizon, up to the point of the singularity (p ∈ (0, 1)), it may instead become a real particle in the internal vacuum. The escape of particles from black holes is always due to the superluminal speed of the virtual particles, and not to the quantum tunnel effect. The superluminal speed of the virtual photon was tested in the experiments with evanescent waves performed since 1992 in Cologne, Germany, by Professor Günter Nimtz [28, 30, 31] and confirmed in the 1998 experiments of William Walker on the pre-formation of the electromagnetic wave in the near field [28, 32]. The black hole loses energy-mass through the emission, which causes the contraction of the event horizon. The virtual-real change of a particle, according to the author, occurs inside or outside the event horizon due to the high density of gravitational potential energy resulting from the stream, on a fabulous scale, of virtual gravitons from the external vacuum, when very close to the edge of the event horizon, or from the internal vacuum (in orthodox language: due to the immense gravity). This virtual-real change is of course not a deterministic process; as a quantum process it is always stochastic, so the virtual particle that escapes may also remain in the virtual state within the external vacuum and act at a distance as a Lorentz force. - The escape of the static electromagnetic field (uncoupled electric and magnetic fields), or Carlip-Wiener radiation, is produced by virtual photons escaping from a charged black hole, always in front of another charged star, which may be another black hole, especially in a black hole binary, producing repulsion. This radiation was identified in 1996 by Steve Carlip and Matthew P. Wiener, under the standard assumptions that c is the cosmic speed limit and that the "event horizon of black holes is where normal matter (and forces) must exceed the speed of light to escape, and therefore they are trapped". However, Carlip and Wiener implicitly distinguish between the forces whose transmitters are virtual particles with velocity below c and the static electromagnetic force transmitted by the virtual photon with speed > c. Of course: "The horizon does not make sense for a virtual particle with enough speed.
In particular, a charged black hole is a source of virtual photons" [33]. Although they do not demonstrate or provide evidence of the superluminal speed of the virtual photon, to me it is clear that they rely on and acknowledge the experimental discovery of Günter Nimtz in 1992, since there is no other antecedent known within the scientific community to support this declaration. In the FAQ, prior to this statement of Carlip and Wiener, Matt McIrvin [34] had asserted in 1994, in the same tone, i.e. without evidence, that the speed of all virtual particles could not exceed c. - The gravitational escape, or Van Flandern radiation, is produced by virtual gravitons escaping from every type of black hole; at a distance it behaves as the gravitational field of a spherical star of mass equal to that of the radiating black hole. This radiation was identified in 1998 by Tom Van Flandern, who argued that the virtual graviton travels at not less than 2 × 10¹⁰ c (twenty billion times the speed of light) and therefore escapes the event horizon of the black hole. Van Flandern first asks how the black hole can have gravity at all, given that its source of gravity lies behind the event horizon, precisely at the singularity, where all its substance has gone. "If nothing can escape the event horizon because nothing can propagate faster than light, how does a gravity field exist outside a black hole? The answer is always that the gravity field around a black hole froze into the surrounding spacetime before the collapse of the star behind the event horizon, and has remained in that state ever since." Van Flandern rejects this response for its lack of causal agents, and because it cannot account for binary black holes, whose orbital connection requires such agents: "Suppose we have a black hole binary, the two collapsed stars in an elliptical orbit around each other. Then each field must be continuously updated by the changing orbital field contribution of the other. How does each field know what to do if it is no longer communicating with its mass, its gravity source, hidden behind the event horizon?" "If the mass of each source, when it interacts with the other, obliges the two black holes to accelerate, why does each point of the field with a certain curvature suffer exactly the same acceleration as the source of gravity, along the entire (infinite?) field?" "Without communication, how can the system remain intact and coherent?" Van Flandern concludes that the external gravitational field is continually regenerated and is therefore connected with the singularity by a causal link. Thus the propagation speed of these causal entities, the virtual gravitons, "largely exceeds the speed of light". This reflection on the binary black hole, along with others of an astronomical character, led him to justify a model of quantum gravity regardless of whether an adequate theory exists; under such circumstances he found a formula to measure the speed of virtual gravitons and, in fact, proved the superluminal speed of the virtual graviton [35]. As the event horizon does not exist for the virtual graviton, it escapes into space, producing the black hole's gravitational field.
4. The neutrinos do escape from black holes According to General Relativity, inside the event horizon all geodesics carry particles to the singularity until they fall into it, except in the charged black hole, where the geodesics inside the Cauchy horizon [19] curve back toward the event horizon, so that the particles do not end up swallowed by the singularity; but this type of hole is very unlikely to exist. The absence of geodesics leading outward therefore makes it impossible for particles traveling along spacelike or lightlike paths to escape. Both the real particle with mass, according to the four-momentum vector, and the real photon are swallowed by the singularity. Likewise, virtual particles with speed ≤ c fall into the singularity, although they first become real particles. Relativity was formulated in the absence of the distinction between real and virtual particles; in it, all particles in nature have speed ≤ c. Even when quantum mechanics introduced the division between real and virtual particles, following Heisenberg's uncertainty principle, and when the virtual particle was discovered by Paul Dirac in 1929, this restriction was maintained. For this reason, black hole evaporation is presented in Hawking's theory as only apparent, and in the alternative theory as due to the quantum tunneling effect; in no case because the virtual particles have a velocity > c. Among scientists recognized by the scientific community, only Nimtz, Carlip-Wiener and Walker have said that the virtual photon has a velocity > c, and only Van Flandern and Walker that the virtual graviton has a superluminal speed. But Carlip, after what he said on behalf of the FAQ (where the contrary position was first formulated by Matt McIrvin in 1994), assumed in his later papers the defense of orthodox relativistic thinking; and Van Flandern, vexed in life, is after his death simply ignored. As theoretical solutions of the equations we have the following. In General Relativity, if the speed of a particle is > c, the particle travels timelike, precisely along a geodesic in the past light cone, which under the intense gravitational field of a black hole is a geodesic along a closed timelike curve, found by Kurt Gödel in 1949. And in Special Relativity there is the tachyon, always with speed > c, found independently by Arnold Sommerfeld, George Sudarshan, Olexa-Myron Bilaniuk, Vijay Deshpande and Gerald Feinberg in the 1960s. A particle on a closed timelike curve violates Novikov's consistency principle, which postulates that if an event exists and causes a paradox, or any change to the past that would cause the paradox, then the probability of that event is zero; it also violates Hawking's chronology protection conjecture and, in general, the law of causality. And the tachyon of quantum field theory, due to its imaginary mass, is too unstable to be considered real; additionally, it violates the law of causality. The only consistent solution of the equations of General Relativity involving a superluminal speed is the one Hawking presents for wormholes (2010), which only allows time travel into the future, through a shortcut in spacetime (How to build a time machine, published by Mail Online). As in the case of Hawking's radiation, this superluminal speed is an apparent effect, since locally (inside the wormhole) only subluminal speeds are possible.
Now, with the experimental discovery of the superluminal speed of the neutrino, and because Gödel's solution also applies to the spacetime inside the event horizon, the neutrinos coming from outside that are trapped by the black hole's gravity, and the neutrinos produced inside the black hole as its mass falls into the singularity, travel in the past light cone, inside a closed geodesic, into the past. The neutrino, a real particle with mass according to the four-momentum vector, is a totally new phenomenon: a particle with a superluminal speed. What does it mean physically for the neutrino to travel into the past? Can the neutrino violate Novikov's principle? Can the neutrino break the law of causality? In reality there is no physically acceptable solution in General Relativity: a speed > c does not, by itself, cause a particle to escape the event horizon, so escape is not really possible in General Relativity. And Anastasovski's solution, in Superluminal Relativity, is not known within the scientific community, much less accepted. According to Newtonian mechanics, the escape speed is the speed necessary to break free of the gravity of a body; it is exactly the speed at which the kinetic energy exceeds the gravitational potential energy. For a spherically symmetric body such as a black hole, the escape speed is v_esc = √(2GM/r), where G is the gravitational constant (G = 6.67 × 10⁻¹¹ m³ kg⁻¹ s⁻²), M is the black hole's mass, and r is the distance from the singularity out to the event horizon. This escape speed is > c. (A short numerical sketch of this formula appears after the reference list.) The neutrino, like the virtual photon and the virtual graviton, meets this condition; for the neutrino, then, the event horizon does not exist, and the neutrino escapes beyond the event horizon into the exterior spacetime.

5. Conclusions

The experimental discovery of the neutrino's speed > c has no place in Relativity. This brings us back to Newtonian mechanics: if we apply the concept of escape speed to a black hole, we find that the neutrino will escape the event horizon. But because the speed c is a constant of nature, only the solution of Superluminal Relativity remains possible. On the other hand, Relativity limits the causal link between events to actions that can communicate at a velocity ≤ c; the OPERA experiment replaces this limit with the superluminal speed of the neutrino.

References

[1] OPERA. Measurement of the neutrino velocity with the OPERA detector in the CNGS beam. [2] Oscillation Project with Emulsion-tRacking Apparatus. [3] Stipcevic, Mario. Superluminal anomaly in OPERA experiment. Croatia, 2011. [4] Ereditato, Antonio. Measurement of the neutrino velocity with the OPERA detector in the CNGS beam. Switzerland, 2011. [5] Gornushkin, Yu. Search for νμ → ντ oscillations in appearance mode in the OPERA experiment. Russian Federation, 2011. [6] Thomson, Mark. Particle Physics. England, 2009. [7] Francis, E. Why the OPERA neutrinos cannot be tachyons. 2011. [8] Septentrio PolaRx2e. [9] Symmetricom Cs4000. [10] Sánchez, Renata. The theory of relativity, in question? [11] Muy Interesante. The OPERA experiment confirms the measurement of neutrinos faster than light. November 2011. [12] Peltoniemi, Juha; Sarkamo, Juho. Laboratory measurements and limits for neutrino properties. 2005. [13] Wikipedia. Electromagnetic spectrum. [14] Wikipedia. Black hole. [15] 't Hooft, Gerard.
Introduction to the Theory of Black Holes. Netherlands, 2009. [16] Wikipedia. Rotating black hole. [17] Cain, Fraser. Universe Today. Blazars. 2009. [18] NASA. Binary black hole in 3C 75. 2006. [19] Dokuchaev, V.I. Is there life inside black holes? Russia, 2011. [20] Hamilton, Andrew. Journey into a Schwarzschild black hole. USA, 2012. [21] Maldacena, Juan. Black holes and the structure of spacetime. USA. [22] Fargueta, Salvador. La bella teoria: Black hole (Hawking) radiation. Spain, 2008. [23] Palazzesi, Ariel. Hawking radiation detected. Spain, 2010; Shiga, David. Hawking radiation glimpsed in artificial black hole. 2010. [24] Barbado, L.C.; Barceló, C.; Garay, L.J. Hawking radiation as perceived by different observers. 2011. [25] Schmelzer, Ilja; Baez, John. Hawking Radiation. FAQ, 1997. [26] Fernández, J. Ma. Hawking radiation. Spain, 2010. [27] Hawking, Stephen. A Brief History of Time. Colombia, 1989. [28] Guillén, Alfonso. The speed of gravity. Colombia, 2005. [29] Dictionary of Relativity Theory. Definition of stationary spacetime. 2009. [30] Nimtz, Günter; Haibel, A. Basics of Superluminal Signals. Germany, 2001. [31] Vetter, R.-M.; Haibel, A.; Nimtz, Günter. Negative phase time for scattering at quantum wells: A microwave analogy experiment. Germany, 2000. [32] Walker, William. Experimental evidence of near-field superluminally propagating electromagnetic fields. Sweden, 1999. [33] Wiener, Matthew P.; Carlip, Steve. How can gravity escape from a black hole? Usenet Physics FAQ, Subject D.00 Astrophysics. USA, 1996. [34] McIrvin, Matt. Some Frequently Asked Questions About Virtual Particles. 1994. [35] Van Flandern, Tom. The speed of gravity: What the experiments say. USA, 1998.
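As a purely numerical illustration of the Newtonian escape-speed formula invoked in Section 5 (and only of that formula, not of the paper's superluminal claims), the following minimal Haskell sketch evaluates v_esc = √(2GM/r) at the Schwarzschild radius r_s = 2GM/c², where it comes out to exactly c. The constant values and all names are illustrative assumptions, not taken from the paper.

-- Newtonian escape speed, v_esc = sqrt (2 G M / r), evaluated at the
-- Schwarzschild radius r_s = 2 G M / c^2, where it equals c by construction.
gravG, cLight, mSun :: Double
gravG  = 6.674e-11   -- gravitational constant, m^3 kg^-1 s^-2
cLight = 2.998e8     -- speed of light, m s^-1
mSun   = 1.989e30    -- one solar mass, kg

escapeSpeed :: Double -> Double -> Double
escapeSpeed m r = sqrt (2 * gravG * m / r)

schwarzschildRadius :: Double -> Double
schwarzschildRadius m = 2 * gravG * m / cLight ^ 2

main :: IO ()
main = do
  let rs = schwarzschildRadius mSun
  putStrLn ("r_s for one solar mass (m): " ++ show rs)
  -- prints 1.0 up to rounding: at the horizon the escape speed is exactly c
  putStrLn ("v_esc at r_s, in units of c: " ++ show (escapeSpeed mSun rs / cLight))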
Modern Wavefunction Methods in Electronic Structure Theory 3 - 8 October 2016, Gelsenkirchen, Germany In the recent past, wavefunction based ab initio methods have received revived interest in the theoretical chemistry community. It is generally recognized that these methods provide a systematic, accurate and transparent route towards solving the molecular Schrödinger equation to high precision. While for a long time the high computational cost that is characteristic of these methods has precluded their large-scale application in chemistry, modern algorithms, modern hardware and reduced scaling approaches have drastically changed this situation. However, the physical basis of wavefunction based methods involves an elaborate apparatus of advanced mathematical and physical concepts that is frequently beyond a typical university curriculum. Hence, in order to be able to do research in this field, it is necessary that young researchers become familiar with these concepts. The MWM16 school is designed to fill this gap by providing lectures and tutorials that introduce the students to the advanced concepts of ab initio electronic structure theory. Furthermore, the school will provide ample opportunity for discussion between the participants, teachers and tutors in which specialized and research oriented questions can be addressed. Venue: Wissenschaftspark Gelsenkirchen, Munscheidstr. 14, 45886 Gelsenkirchen, Germany
Equations of motion

In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time.[1] More specifically, the equations of motion describe the behaviour of a physical system as a set of mathematical functions in terms of dynamic variables: normally spatial coordinates and time are used, but others are also possible, such as momentum components and time. The most general choice are generalized coordinates, which can be any convenient variables characteristic of the physical system.[2] The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations are the solutions to the differential equations describing the motion. There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations. However, kinematics is simpler, as it concerns only variables derived from the positions of objects, and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the SUVAT equations, arising from the definitions of kinematic quantities: displacement (s), initial velocity (u), final velocity (v), acceleration (a), and time (t). Equations of motion can therefore be grouped under these main classifiers of motion. In all cases, the main types of motion are translations, rotations, oscillations, or any combinations of these. A differential equation of motion, usually identified as some physical law and applying definitions of physical quantities, is used to set up an equation for the problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a family of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants. To state this formally, in general an equation of motion M is a function of the position r of the object, its velocity (the first time derivative of r, v = dr/dt), its acceleration (the second derivative of r, a = d²r/dt²), and time t. Euclidean vectors in 3D are denoted throughout in bold. This is equivalent to saying an equation of motion in r is a second order ordinary differential equation (ODE) in r,

M[r(t), ṙ(t), r̈(t), t] = 0,

where t is time, and each overdot denotes one time derivative. The initial conditions are given by the constant values at t = 0: r(0) and ṙ(0). The solution r(t) to the equation of motion, with specified initial values, describes the system for all times t after t = 0. Other dynamical variables like the momentum p of the object, or quantities derived from r and p like angular momentum, can be used in place of r as the quantity to solve for from some equation of motion, although the position of the object at time t is by far the most sought-after quantity. Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly, so a variety of approximations must be used.
The solutions to nonlinear equations may show chaotic behavior depending on how sensitive the system is to the initial conditions. Historically, equations of motion first appeared in classical mechanics to describe the motion of massive objects. A notable application was to celestial mechanics, to predict the motion of the planets as if they orbited like clockwork (this was how Neptune was predicted before its discovery), and also to investigate the stability of the solar system. It is important to observe that the huge body of work involving kinematics, dynamics and the mathematical models of the universe developed in baby steps – faltering, getting up and correcting itself – over three millennia, and included contributions of both known names and others who have since faded from the annals of history. In antiquity, notwithstanding the success of priests, astrologers and astronomers in predicting solar and lunar eclipses, the solstices and the equinoxes of the Sun and the period of the Moon, there was nothing other than a set of algorithms to help them. Despite the great strides made in the development of geometry by the Ancient Greeks and in surveying in Rome, we were to wait for another thousand years before the first equations of motion arrived. Europe's exposure to the works of the Greeks, the Indians and the Islamic scholars as collected by the Muslims (such as Euclid's Elements, the works of Archimedes, and Al-Khwārizmī's treatises [3]) began in Spain, and scholars from all over Europe went to Spain, read, copied and translated the learning into Latin. Exposure to Arabic numerals and their ease in computation encouraged first the scholars and then the merchants to learn them, and invigorated the spread of knowledge throughout Europe. By the 13th century the universities of Oxford and Paris had emerged, and the scholars were now studying mathematics and philosophy with lesser worries about the mundane chores of life; the fields were not as clearly demarcated as they are in modern times. Of these, compendia and redactions, such as those of Johannes Campanus, of Euclid and Aristotle, confronted scholars with ideas about infinity and the ratio theory of elements as a means of expressing relations between various quantities involved with moving bodies. These studies led to a new body of knowledge that is now known as physics. Of these institutes, Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, similar in stature to the intellectuals at the University of Paris. Thomas Bradwardine, one of those scholars, extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved that the quantity of motion of a body undergoing a uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion. For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation.
Only Domingo de Soto, a Spanish theologian, in his commentary on Aristotle's Physics published in 1545, after defining "uniform difform" motion (which is uniformly accelerated motion) – the word velocity wasn't used – as proportional to time, declared correctly that this kind of motion was identifiable with freely falling bodies and projectiles, without proving these propositions or suggesting a formula relating time, velocity and distance. De Soto's comments are shockingly correct regarding the definition of acceleration (acceleration was a rate of change of motion (velocity) in time) and the observation that during the violent motion of ascent acceleration would be negative. Discourses such as these spread throughout Europe, definitely influenced Galileo and others, and helped in laying the foundation of kinematics.[4] Galileo deduced the equation s = (1/2)gt² in his work geometrically,[5] using the Merton rule, now known as a special case of one of the equations of kinematics. He couldn't use the now-familiar mathematical reasoning; the relationships between speed, distance, time and acceleration were not known at the time. Galileo was the first to show that the path of a projectile is a parabola. Galileo had an understanding of centrifugal force and gave a correct definition of momentum. This emphasis on momentum as a fundamental quantity in dynamics is of prime importance. He measured momentum by the product of velocity and weight; mass is a later concept, developed by Huygens and Newton. In the swinging of a simple pendulum, Galileo says in Discourses[6] that "every momentum acquired in the descent along an arc is equal to that which causes the same moving body to ascend through the same arc." His analysis of projectiles indicates that Galileo had grasped the first law and the second law of motion. He did not generalize and make them applicable to bodies not subject to the earth's gravitation. That step was Newton's contribution. The term "inertia" was used by Kepler, who applied it to bodies at rest. (The first law of motion is now often called the law of inertia.) Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope. Galileo was also interested in the laws of the pendulum, his first observations of which were as a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was arrested by the motion of the great lamp, lighted and left swinging, and he used his own pulse for timekeeping. To him the period appeared the same, even after the motion had greatly diminished; he thus discovered the isochronism of the pendulum. More careful experiments carried out by him later, and described in his Discourses, revealed that the period of oscillation varies with the square root of length but is independent of the mass of the pendulum. Thus we arrive at René Descartes, Isaac Newton, Gottfried Leibniz, et al.; and the evolved forms of the equations of motion that begin to be recognized as the modern ones. Later the equations of motion also appeared in electrodynamics, when describing the motion of charged particles in electric and magnetic fields; the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and magnetic field.
With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light, and curvature of spacetime. In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations.[7] However, the equations of quantum mechanics can also be considered "equations of motion", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields.

Kinematic equations for one particle

Kinematic quantities

From the instantaneous position r = r(t), instantaneous meaning at an instant value of time t, the instantaneous velocity v = v(t) and acceleration a = a(t) have the general, coordinate-independent definitions:[8]

v = dr/dt,  a = dv/dt = d²r/dt².

Notice that velocity always points in the direction of motion; in other words, for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature. The rotational analogues are the "angular vector" (angle the particle rotates about some axis) θ = θ(t), angular velocity ω = ω(t), and angular acceleration α = α(t):

θ = θ n̂,  ω = dθ/dt,  α = dω/dt,

where n̂ is a unit vector in the direction of the axis of rotation, and θ is the angle the object turns through about the axis. The following relation holds for a point-like particle, orbiting about some axis with angular velocity ω:[9]

v = ω × r,

where r is the position vector of the particle (radial from the rotation axis) and v the tangential velocity of the particle. For a rotating continuum rigid body, these relations hold for each point in the rigid body.

Uniform acceleration

The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below.

Constant translational acceleration in a straight line

These equations apply to a particle moving linearly, in three dimensions in a straight line with constant acceleration.[10] Since the position, velocity, and acceleration are collinear (parallel, and lie on the same line), only the magnitudes of these vectors are necessary, and because the motion is along a straight line, the problem effectively reduces from three dimensions to one:

v = at + v0   [1]
r = r0 + v0 t + (1/2)a t²   [2]
r = r0 + (1/2)(v + v0) t   [3]

Equations [1] and [2] are from integrating the definitions of velocity and acceleration, subject to the initial conditions r(t0) = r0 and v(t0) = v0.[10] In magnitudes, equation [3] involves the average velocity (v + v0)/2. Intuitively, the velocity increases linearly, so the average velocity multiplied by time is the distance traveled while increasing the velocity from v0 to v, as can be illustrated graphically by plotting velocity against time as a straight line graph.
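Before deriving the remaining relations, a quick numerical sanity check of equations [1]-[3] may help; this is a minimal Haskell sketch, and the sample values and names are arbitrary assumptions:

-- Check of the constant-acceleration relations:
-- [1] v = u + a t,  [2] s = u t + (1/2) a t^2,  [3] s = (1/2) (u + v) t,
-- with u, a, t arbitrary sample values in SI units.
suvat :: Double -> Double -> Double -> (Double, Double)
suvat u a t = (u + a * t, u * t + 0.5 * a * t ^ 2)

main :: IO ()
main = do
  let (u, a, t) = (3.0, 9.8, 2.0)
      (v, s)    = suvat u a t
  print (v, s)
  -- [3] says s equals the average velocity times t; the difference
  -- printed below should be zero up to floating-point rounding:
  print (s - 0.5 * (u + v) * t)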
Algebraically, it follows from solving [1] for a = (v − v0)/t and substituting into [2], then simplifying, to get

r = r0 + (1/2)(v + v0) t   [3]

From [3], substituting for t in [1]:

v² = v0² + 2a(r − r0)   [4]

From [3], substituting into [2]:

r = r0 + v t − (1/2)a t²   [5]

Usually only the first 4 are needed, the fifth is optional. Here a is constant acceleration, or in the case of bodies moving under the influence of gravity, the standard gravity g is used. Note that each of the equations contains four of the five variables, so in this situation it is sufficient to know three out of the five variables to calculate the remaining two. In elementary physics the same formulae are frequently written in different notation as:

v = u + at   [1]
s = ut + (1/2)at²   [2]
s = (1/2)(u + v)t   [3]
v² = u² + 2as   [4]
s = vt − (1/2)at²   [5]

where u has replaced v0, s replaces r, and s0 = 0. They are often referred to as the SUVAT equations, where "SUVAT" is an acronym from the variables: s = displacement (s0 = initial displacement), u = initial velocity, v = final velocity, a = acceleration, t = time.[11][12]

Constant linear acceleration in any direction

[Figure: Trajectory of a particle with initial position vector r0 and velocity v0, subject to constant acceleration a, all three quantities in any direction, and the position r(t) and velocity v(t) after time t.]

The initial position, initial velocity, and acceleration vectors need not be collinear, and the equations of motion take an almost identical form. The only difference is that the square magnitudes of the velocities require the dot product. The derivations are essentially the same as in the collinear case, although the Torricelli equation [4] can be derived using the distributive property of the dot product as follows:

v² = v·v = (v0 + at)·(v0 + at) = v0² + 2t(a·v0) + a²t²
a·(r − r0) = a·(v0 t + (1/2)a t²) = t(a·v0) + (1/2)a²t²
⇒ v² = v0² + 2a·(r − r0)

Elementary and frequent examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given initial speed u, one can calculate how high the ball will travel before it begins to fall. The acceleration is the local acceleration of gravity g. At this point one must remember that while these quantities appear to be scalars, the direction of displacement, speed and acceleration is important. They could in fact be considered as unidirectional vectors. Choosing s to measure up from the ground, the acceleration a must be in fact −g, since the force of gravity acts downwards and therefore also the acceleration on the ball due to it. At the highest point, the ball will be at rest: therefore v = 0. Using equation [4] in the set above, we have:

0 = u² − 2gs

Substituting and cancelling minus signs gives:

s = u² / (2g)

Constant circular acceleration

The analogues of the above equations can be written for rotation. Again these axial vectors must all be parallel to the axis of rotation, so only the magnitudes of the vectors are necessary:

ω = ω0 + αt
θ = θ0 + ω0 t + (1/2)αt²
θ = θ0 + (1/2)(ω0 + ω)t
ω² = ω0² + 2α(θ − θ0)
θ = θ0 + ωt − (1/2)αt²

where α is the constant angular acceleration, ω is the angular velocity, ω0 is the initial angular velocity, θ is the angle turned through (angular displacement), θ0 is the initial angle, and t is the time taken to rotate from the initial state to the final state.

General planar motion

[Figure: Kinematic vectors in plane polar coordinates. The position vector r always points radially from the origin. The velocity vector v is always tangent to the path of motion. The acceleration vector a is not parallel to the radial motion but offset by the angular and Coriolis accelerations, nor tangent to the path but offset by the centripetal and radial accelerations. The setup is not restricted to 2D space, but a plane in any higher dimension.]
These are the kinematic equations for a particle traversing a path in a plane, described by position r = r(t).[13] They are simply the time derivatives of the position vector in plane polar coordinates, using the definitions of physical quantities above for angular velocity ω and angular acceleration α. These are instantaneous quantities which change with time. The position of the particle is

r = r êr,

where êr and êθ are the polar unit vectors. Differentiating with respect to time gives the velocity

v = ṙ êr + rω êθ,

with radial component ṙ = dr/dt and an additional component rω due to the rotation. Differentiating with respect to time again obtains the acceleration

a = (r̈ − rω²) êr + (rα + 2ωṙ) êθ,

which breaks into the radial acceleration r̈ = d²r/dt², centripetal acceleration −rω², Coriolis acceleration 2ω(dr/dt), and angular acceleration rα. Special cases of motion described by these equations are summarized qualitatively in the table below. Two have already been discussed above, in the cases that either the radial components or the angular components are zero, and the non-zero component of motion describes uniform acceleration.

State of motion | Constant r | r linear in t | r quadratic in t | r non-linear in t
Constant θ | Stationary | Uniform translation (constant translational velocity) | Uniform translational acceleration | Non-uniform translation
θ linear in t | Uniform angular motion in a circle (constant angular velocity) | Uniform angular motion in a spiral, constant radial velocity | Angular motion in a spiral, constant radial acceleration | Angular motion in a spiral, varying radial acceleration
θ quadratic in t | Uniform angular acceleration in a circle | Uniform angular acceleration in a spiral, constant radial velocity | Uniform angular acceleration in a spiral, constant radial acceleration | Uniform angular acceleration in a spiral, varying radial acceleration
θ non-linear in t | Non-uniform angular acceleration in a circle | Non-uniform angular acceleration in a spiral, constant radial velocity | Non-uniform angular acceleration in a spiral, constant radial acceleration | Non-uniform angular acceleration in a spiral, varying radial acceleration

General 3D motion

In 3D space, with spherical coordinates (r, θ, φ) and corresponding unit vectors êr, êθ and êφ, the position, velocity, and acceleration generalize respectively to

r = r êr
v = ṙ êr + rθ̇ êθ + r sinθ φ̇ êφ
a = (r̈ − rθ̇² − r sin²θ φ̇²) êr + (rθ̈ + 2ṙθ̇ − r sinθ cosθ φ̇²) êθ + (r sinθ φ̈ + 2ṙ sinθ φ̇ + 2r cosθ θ̇φ̇) êφ

In the case of a constant φ this reduces to the planar equations above.

Dynamic equations of motion

Newtonian mechanics

The first general equation of motion developed was Newton's second law of motion. In its most general form it states that the rate of change of momentum p = p(t) = mv(t) of an object equals the force F = F(x(t), v(t), t) acting on it:[14]

F = dp/dt.

The force in the equation is not the force the object exerts. Replacing momentum by mass times velocity, the law is also written more famously as

F = ma,

since m is a constant in Newtonian mechanics. Newton's second law applies to point-like particles, and to all points in a rigid body. They also apply to each point in a mass continuum, like deformable solids or fluids, but the motion of the system must be accounted for; see material derivative. In the case the mass is not constant, it is not sufficient to use the product rule for the time derivative on the mass and velocity, and Newton's second law requires some modification consistent with conservation of momentum; see variable-mass system. It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy.
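Since closed-form solutions are rare, equations of motion like F = ma are usually integrated numerically. The following Haskell sketch steps a 2D projectile under gravity with a simple linear drag term; the model, step size, and all names are illustrative assumptions for this example, not anything prescribed by the article:

-- Euler integration of m dv/dt = -m g ĵ - k v for a 2D projectile.
type Vec2 = (Double, Double)

step :: Double -> Double -> Double -> (Vec2, Vec2) -> (Vec2, Vec2)
step m k dt ((x, y), (vx, vy)) = ((x', y'), (vx', vy'))
  where
    g   = 9.81
    ax  = -(k / m) * vx       -- horizontal: drag only
    ay  = -g - (k / m) * vy   -- vertical: gravity plus drag
    x'  = x + vx * dt
    y'  = y + vy * dt
    vx' = vx + ax * dt
    vy' = vy + ay * dt

main :: IO ()
main = mapM_ print (take 5 (iterate (step 1.0 0.1 0.01) ((0, 0), (10, 10))))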
Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system. In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems they can become dramatically complex. The momentum form is preferable since it is readily generalized to more complex systems, and generalizes to special and general relativity (see four-momentum).[14] It can also be used with momentum conservation. However, Newton's laws are not more fundamental than momentum conservation, because Newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. Momentum conservation is always true for an isolated system not subject to resultant forces. For a number of particles (see many body problem), the equation of motion for one particle i influenced by other particles is[8][15]

dpi/dt = FE + Σ(j ≠ i) Fij,

where pi is the momentum of particle i, Fij is the force on particle i by particle j, and FE is the resultant external force due to any agent not part of the system. Particle i does not exert a force on itself. Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton–Euler equations combine the forces and torques acting on a rigid body into a single equation. Newton's second law for rotation takes a similar form to the translational case,[16]

τ = dL/dt,

by equating the torque acting on the body to the rate of change of its angular momentum L. Analogous to mass times acceleration, the moment of inertia tensor I depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity:

τ = I·α.

Again, these equations apply to point-like particles, or at each point of a rigid body. Likewise, for a number of particles, the equation of motion for one particle i is[17]

dLi/dt = τE + Σ(j ≠ i) τij,

where Li is the angular momentum of particle i, τij the torque on particle i by particle j, and τE is the resultant external torque (due to any agent not part of the system). Particle i does not exert a torque on itself. Some examples[18] of Newton's law include describing the motion of a simple pendulum,

m l θ̈ = −mg sinθ,

and a damped, sinusoidally driven harmonic oscillator,

m ẍ + c ẋ + k x = F0 sin(ωt).

For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For example, for a ball of mass m thrown in the air, in air currents (such as wind) described by a vector field of resistive forces R = R(r, t),

d²r/dt² = −(GM/|r|²) êr + A(r, t),

where G is the gravitational constant, M the mass of the Earth, and A = R/m is the acceleration of the projectile due to the air currents at position r and time t. The classical N-body problem for N particles each interacting with each other due to gravity is a set of N nonlinear coupled second order ODEs,

d²ri/dt² = G Σ(j ≠ i) mj (rj − ri)/|rj − ri|³,

where i = 1, 2, …, N labels the quantities (mass, position, etc.) associated with each particle.

Analytical mechanics

[Figure: As the system evolves, q traces a path through configuration space (only some are shown). The path taken by the system (red) has a stationary action (δS = 0) under small changes in the configuration of the system (δq).[19]]

Using all three coordinates of 3D space is unnecessary if there are constraints on the system. If the system has N degrees of freedom, then one can use a set of N generalized coordinates q(t) = [q1(t), q2(t), …, qN(t)] to define the configuration of the system.
They can be in the form of arc lengths or angles. They are a considerable simplification to describe motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. The time derivatives of the generalized coordinates are the generalized velocities

q̇ = dq/dt.

The Euler–Lagrange equations are[2][20]

d/dt (∂L/∂q̇j) = ∂L/∂qj,

where the Lagrangian is a function of the configuration q and its time rate of change dq/dt (and possibly time t),

L = L(q, dq/dt, t).

Setting up the Lagrangian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled N second order ODEs in the coordinates are obtained. Hamilton's equations are[2][20]

dp/dt = −∂H/∂q,  dq/dt = ∂H/∂p,

where the Hamiltonian

H = H(q, p, t)

is a function of the configuration q and conjugate "generalized" momenta

p = ∂L/∂q̇,

in which ∂/∂q = (∂/∂q1, ∂/∂q2, …, ∂/∂qN) is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation), and possibly time t. Setting up the Hamiltonian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled 2N first order ODEs in the coordinates qi and momenta pi are obtained. (A small numerical sketch of Hamilton's equations appears after the reference list below.) The Hamilton–Jacobi equation is[2]

−∂S/∂t = H(q, ∂S/∂q, t),

where S = S(q, t) is Hamilton's principal function, also called the classical action, a functional of L. In this case, the momenta are given by p = ∂S/∂q. Although the equation has a simple general form, for a given Hamiltonian it is actually a single first order non-linear PDE, in N + 1 variables. The action S allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether. All classical equations of motion can be derived from the variational principle known as Hamilton's principle of least action, stating that the path the system takes through the configuration space is the one with the least action S.

Electrodynamics

[Figure: Lorentz force f on a charged particle (of charge q) in motion (instantaneous velocity v). The E field and B field vary in space and time.]

In electrodynamics, the force on a charged particle of charge q is the Lorentz force:[21]

F = q(E + v × B).

Combining with Newton's second law gives a second order differential equation of motion in terms of the position of the particle,

m d²r/dt² = q(E + (dr/dt) × B),

or a first order one in terms of its momentum. The same equation can be obtained using the Lagrangian (and applying Lagrange's equations above) for a charged particle of mass m and charge q:[22]

L = (1/2)m ṙ·ṙ + q A·ṙ − qϕ,

where A and ϕ are the electromagnetic vector and scalar potential fields. The Lagrangian indicates an additional detail: the canonical momentum in Lagrangian mechanics is given by

P = ∂L/∂ṙ = mṙ + qA,

instead of just mv, implying the motion of a charged particle is fundamentally determined by the mass and charge of the particle. The Lagrangian expression was first used to derive the force equation. Alternatively the Hamiltonian (and substituting into the equations)[20]

H = (P − qA)²/(2m) + qϕ

can derive the Lorentz force equation.

General relativity

Geodesic equation of motion

[Figure: Geodesics on a sphere are arcs of great circles (yellow curve). On a 2D manifold (such as the sphere shown), the direction of the accelerating geodesic is uniquely fixed if the separation vector ξ is orthogonal to the "fiducial geodesic" (green curve).
As the separation vector ξ0 changes to ξ after a distance s, the geodesics are not parallel (geodesic deviation).[23]]

The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated since there is no straight line; this is generalized and replaced by a geodesic of the curved spacetime (the shortest length of curve between two points). For curved manifolds with a metric tensor g, the metric provides the notion of arc length (see line element for details). The differential arc length is given by[24]

ds = √(gαβ dxα dxβ),

and the geodesic equation is a second-order differential equation in the coordinates; the general solution is a family of geodesics,[25]

d²xμ/ds² + Γμαβ (dxα/ds)(dxβ/ds) = 0,

where Γμαβ is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system). Given the mass-energy distribution provided by the stress–energy tensor Tαβ, the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply the curvature of spacetime is equivalent to a gravitational field (see equivalence principle). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field, because gravity is a fictitious force. The relative acceleration of one geodesic to another in curved spacetime is given by the geodesic deviation equation,

D²ξα/ds² = −Rαβγδ (dxβ/ds) ξγ (dxδ/ds),

where ξα = x2α − x1α is the separation vector between two geodesics, D/ds (not just d/ds) is the covariant derivative, and Rαβγδ is the Riemann curvature tensor, containing the Christoffel symbols. In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field.[26] For flat spacetime, the metric is a constant tensor so the Christoffel symbols vanish, and the geodesic equation has the solutions of straight lines. This is also the limiting case when masses move according to Newton's law of gravity.

Spinning objects

In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enter the equations of motion under covariant derivatives with respect to proper time. The Mathisson–Papapetrou–Dixon equations describe the motion of spinning objects moving in a gravitational field.

Analogues for waves and fields

Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified. Sometimes in the following contexts, the wave or field equations are also called "equations of motion".

Field equations

Equations that describe the spatial dependence and time evolution of fields are called field equations. These include, for example, Maxwell's equations for the electromagnetic field and the Einstein field equations for the metric of spacetime. This terminology is not universal: for example although the Navier–Stokes equations govern the velocity field of a fluid, they are not usually called "field equations", since in this context they represent the momentum of the fluid and are called the "momentum equations" instead.

Wave equations

Equations of wave motion are called wave equations. The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude.
Boundary conditions determine if the solutions describe traveling waves or standing waves. From classical equations of motion and field equations, mechanical, gravitational wave, and electromagnetic wave equations can be derived. The general linear wave equation in 3D is

(1/v²) ∂²X/∂t² = ∇²X,

where X = X(r, t) is any mechanical or electromagnetic field amplitude[27] and v is the phase velocity. Nonlinear equations model the dependence of phase velocity on amplitude, replacing v by v(X). There are other linear and nonlinear wave equations for very specific applications; see for example the Korteweg–de Vries equation.

Quantum theory

In quantum theory, the wave and field concepts both appear. In quantum mechanics, in which particles also have wave-like properties according to wave–particle duality, the analogue of the classical equations of motion (Newton's law, Euler–Lagrange equation, Hamilton–Jacobi equation, etc.) is the Schrödinger equation in its most general form:

iħ ∂Ψ/∂t = ĤΨ,

where Ψ is the wavefunction of the system, Ĥ is the quantum Hamiltonian operator, rather than a function as in classical mechanics, and ħ is the Planck constant divided by 2π. Setting up the Hamiltonian and inserting it into the equation results in a wave equation; the solution is the wavefunction as a function of space and time. The Schrödinger equation itself reduces to the Hamilton–Jacobi equation when one considers the correspondence principle, in the limit that ħ becomes zero. Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schrödinger equation that govern the time evolution and behavior of a quantum system, for instance the Heisenberg equation of motion, the phase-space formulation, and the path integral formulation.

References

1. Encyclopaedia of Physics (second edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, (VHC Inc.) 0-89573-752-3. 3. See History of Mathematics. 4. The Britannica Guide to History of Mathematics, ed. Erik Gregersen. 5. Discourses, Galileo. 6. Dialogues Concerning Two New Sciences, by Galileo Galilei; translated by Henry Crew, Alfonso De Salvio. 7. Halliday, David; Resnick, Robert; Walker, Jearl (2004). Fundamentals of Physics (7th sub ed.). Wiley. ISBN 0-471-23231-9. 9. M.R. Spiegel; S. Lipschutz; D. Spellman (2009). Vector Analysis. Schaum's Outlines (2nd ed.). McGraw Hill. p. 33. ISBN 978-0-07-161545-7. 10. Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, second edition, 1978, John Murray, ISBN 0-7195-3382-1. 11. Hanrahan, Val; Porkess, R (2003). Additional Mathematics for OCR. London: Hodder & Stoughton. p. 219. ISBN 0-340-86960-7. 12. Keith Johnson (2001). Physics for You: revised national curriculum edition for GCSE (4th ed.). Nelson Thornes. p. 135. ISBN 978-0-7487-6236-1. "The 5 symbols are remembered by 'suvat'. Given any three, the other two can be found." 13. 3000 Solved Problems in Physics, Schaum Series, A. Halpern, McGraw Hill, 1988, ISBN 978-0-07-025734-4. 14. An Introduction to Mechanics, D. Kleppner, R.J. Kolenkow, Cambridge University Press, 2010, p. 112, ISBN 978-0-521-19821-9. 15. Encyclopaedia of Physics (second edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (VHC Inc.) 0-89573-752-3. 16. "Mechanics, D. Kleppner 2010". 17. "Relativity, J.R. Forshaw 2009". 18. The Physics of Vibrations and Waves (3rd edition), H.J. Pain, John Wiley & Sons, 1983, ISBN 0-471-90182-2. 19. R. Penrose (2007). The Road to Reality. Vintage Books. p. 474. ISBN 0-679-77631-1.
20. Classical Mechanics (second edition), T.W.B. Kibble, European Physics Series, 1973, ISBN 0-07-084018-0. 21. Electromagnetism (second edition), I.S. Grant, W.R. Phillips, Manchester Physics Series, 2008, ISBN 0-471-92712-0. 22. Classical Mechanics (second edition), T.W.B. Kibble, European Physics Series, McGraw Hill (UK), 1973, ISBN 0-07-084018-0. 23. Misner, Thorne, Wheeler, Gravitation. 24. C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (second ed.). p. 1199. ISBN 0-07-051400-3. 25. C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (second ed.). p. 1200. ISBN 0-07-051400-3. 26. J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 34–35. ISBN 0-7167-0344-0. 27. H.D. Young; R.A. Freedman (2008). University Physics (12th ed.). Addison-Wesley (Pearson International). ISBN 0-321-50130-6.
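As promised above, here is a minimal numerical sketch of Hamilton's equations for the one-dimensional harmonic oscillator H = p²/(2m) + (k/2)q², using the symplectic Euler method; the mass, spring constant, step size, and names are arbitrary illustrative choices, not anything prescribed by the article:

-- dq/dt = ∂H/∂p = p/m,  dp/dt = -∂H/∂q = -k q
hamiltonStep :: Double -> Double -> Double -> (Double, Double) -> (Double, Double)
hamiltonStep m k dt (q, p) = (q', p')
  where
    p' = p - k * q * dt     -- update the momentum first (symplectic Euler)
    q' = q + (p' / m) * dt  -- then the position, using the new momentum

main :: IO ()
main = mapM_ print (take 5 (iterate (hamiltonStep 1.0 1.0 0.01) (1.0, 0.0)))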
Category: arrow of time No, Thermodynamics Does Not Explain Our Perceived Arrow Of Time “As far as we can tell, the second law of thermodynamics is true: entropy never decreases for any closed system in the Universe, including for the entirety of the observable Universe itself. It’s also true that time always runs in one direction only, forward, for all observers. What many don’t appreciate is that these two types of arrows — the thermodynamic arrow of entropy and the perceptive arrow of time — are not interchangeable. During inflation, where the entropy remains low and constant, time still runs forward. When the last star has burned out and the last black hole has decayed and the Universe is dominated by dark energy, time will still run forward. And everywhere in between, regardless of what’s happening in the Universe or with its entropy, time still runs forward at exactly that same, universal rate for all observers. If you want to know why yesterday is in the immutable past, tomorrow will arrive in a day, and the present is what you’re experiencing right now, you’re in good company. But thermodynamics, interesting though it may be, won’t give you the answer. As of 2019, it’s still an unsolved mystery.” No matter who you are, where you are, or what you’re doing, you’ll always perceive time running forward, from your frame of reference, at exactly the same rate: one second-per-second. The fact that this is true has led many to speculate as to what the cause of time’s arrow might be, and many, having noticed that entropy never decreases in our Universe, place the blame squarely on thermodynamics as the root of our arrow of time. But that’s almost certainly not the case, and we can demonstrate that fact in a number of ways, including by decreasing entropy in a region and noting that time still moves forwards. The perceived arrow of time is still a mystery. We Still Don’t Understand Why Time Only Flows Forward Why does time flow forwards and not backwards, in 100% of cases, if the laws of physics are completely time-symmetric? From Newton’s laws to Einstein’s relativity, from Maxwell’s equations to the Schrödinger equation, the laws of physics don’t have a preferred direction. Except, that is, for one: the second law of thermodynamics. Any closed system that we look at sees its entropy only increase, never decrease. Could this thermodynamic arrow of time be responsible for what we perceive as the forward motion of time? Interestingly enough, there’s an experiment we can perform: isolate a system and perform enough external work on it to force the entropy inside to *decrease*, an “unnatural” progression of entropy. What happens to time, then? Does it still run forward? Find out the answer, and learn whether thermodynamics has anything to do with the passage of time or not!
Tuesday, March 06, 2007

Monads, Vector Spaces and Quantum Mechanics pt. II

Back from wordpress.com: I had originally intended to write some code to simulate quantum computers and implement some quantum algorithms. I'll probably eventually do that but today I just want to look at quantum mechanics in its own right as a kind of generalisation of probability. This is probably going to be the most incomprehensible post I've written in this blog. On the other hand, even though I eventually talk about the philosophy of quantum mechanics, there's some Haskell code to play with at every stage, and the code gives the same results as appear in physics papers, so maybe that will help give a handle on what I'm saying. First get some Haskell fluff out of the way:

> import Prelude hiding (repeat)
> import Data.Map (toList,fromListWith)
> import Complex
> infixl 7 .*

Now define certain types of vector spaces. The idea is that a W b a is a vector in a space whose basis elements are labelled by objects of type a and where the coefficients are of type b.

> data W b a = W { runW :: [(a,b)] } deriving (Eq,Show,Ord)

This is very similar to standard probability monads except that I've allowed the probabilities to be types other than Float. Now we need a couple of ways to operate on these vectors. mapW allows the application of a function transforming the probabilities...

> mapW f (W l) = W $ map (\(a,b) -> (a,f b)) l

and fmap applies a function to the basis element labels.

> instance Functor (W b) where
>     fmap f (W a) = W $ map (\(a,p) -> (f a,p)) a

We want our vectors to support addition, multiplication, and actually form a monad. The definition of >>= is similar to that for other probability monads. Note how vector addition just concatenates our lists of probabilities. The problem with this is that if we have a vector like a+2a we'd like it to be reduced to 3a, but in order to do that we need to be able to spot that the two terms a and 2a both contain multiples of the same vector, and to do that we need the fact that the labels are instances of Eq. Unfortunately we can't do this conveniently in Haskell because of the lack of restricted datatypes, and so to collect similar terms we need to use a separate collect function:

> instance Num b => Monad (W b) where
>     return x = W [(x,1)]
>     l >>= f = W $ concatMap (\(W d,p) -> map (\(x,q)->(x,p*q)) d) (runW $ fmap f l)

> a .* b = mapW (a*) b

> instance (Eq a,Show a,Num b) => Num (W b a) where
>     W a + W b = W $ (a ++ b)
>     a - b = a + (-1) .* b
>     _ * _ = error "Num is annoying"
>     abs _ = error "Num is annoying"
>     signum _ = error "Num is annoying"
>     fromInteger a = if a==0 then W [] else error "fromInteger can only take zero argument"

> collect :: (Ord a,Num b) => W b a -> W b a
> collect = W . toList . fromListWith (+) . runW

Now we can specialise to the two monads that interest us:

> type P a = W Float a
> type Q a = W (Complex Float) a

P is the (hopefully familiar if you've read Eric's recent posts) probability monad. But Q allows complex probabilities. This is because quantum mechanics is a lot like probability theory with complex numbers and many of the rules of probability theory carry over. Suppose we have a (non-quantum macroscopic) coin that we toss. Its state might be described by:

> data Coin = Heads | Tails deriving (Eq,Show,Ord)
> coin1 = 0.5 .* return Heads + 0.5 .* return Tails :: P Coin

Suppose that if Albert sees a coin that is heads up he has a 50% chance of turning it over and if he sees a coin that is tails up he has a 25% chance of turning it over.
We can describe Albert like this:

> albert Heads = 0.5 .* return Heads + 0.5 .* return Tails
> albert Tails = 0.25 .* return Heads + 0.75 .* return Tails

We can now ask what happens if Albert sees a coin originally turned up heads n times in a row:

> repeat 0 f = id
> repeat n f = repeat (n-1) f . f

> (->-) :: a -> (a -> b) -> b
> g ->- f = f g

> (-><) :: Q a -> (a -> Q b) -> Q b
> g ->< f = g >>= f

> albert1 n = return Heads ->- repeat n (->< albert) ->- collect

Let me explain those new operators. ->- is just function application written from left to right. The > in the middle is intended to suggest the direction of data flow. ->< is just >>= but I've written it this way with the final < intended to suggest the way a function a -> M b 'fans out'. Anyway, apropos of nothing else, notice how Albert approaches a steady state as n gets larger. Quantum mechanics works similarly but with the following twist. When we come to observe the state of a quantum system it undergoes the following radical change:

> observe :: Ord a => Q a -> P a
> observe = W . map (\(a,w) -> (a,magnitude (w*w))) . runW . collect

I.e. the quantum state becomes an ordinary probabilistic one. This is called wavefunction collapse. Before collapse, the complex weights are called 'amplitudes' rather than probabilities. The business of physicists is largely about determining what these amplitudes are. For example, the well known Schrödinger equation is a lot like a kind of probabilistic diffusion, like a random walk, except with complex amplitudes instead of probabilities. (That's why so many physicists have been hired into finance firms in recent years - stocks follow a random walk which has formal similarities to quantum physics.) The rules of quantum mechanics are a bit like those of probability theory. In probability theory the sum of the probabilities must add to one. In addition, any process (like albert) must act in such a way that if the input sum of probabilities is one, then so is the output. This means that probabilistic processes are stochastic. In quantum mechanics the sum of the squares of the magnitudes of the amplitudes must be one. Such a state is called 'normalised'. All processes must be such that normalised inputs go to normalised outputs. Such processes are called unitary ones. There's a curious subtlety present in quantum mechanics. In classical probability theory you need to have the sum of the probabilities of your different events sum to one. But it's no good having events like "die turns up 1", "die turns up 2", "die turns up even" at the same time. "Die turns up even" includes "die turns up 2". So you always need to work with a mutually exclusive set of events. In quantum mechanics it can be pretty tricky to figure out what the mutually exclusive events are. For example, when considering the spin of an electron, there are no more mutually exclusive events beyond "spin up" and "spin down". You might think "what about spin left?". That's just a mixture of spin up and spin down - and that fact is highly non-trivial and non-obvious. But I don't want to discuss that now and it won't affect the kinds of things I'm considering below. So here's an example of a quantum process a bit like albert above. For any angle θ, rotate turns a boolean state into a mixture of boolean states. For θ = 0 it just leaves the state unchanged and for θ = π it inverts the state so it corresponds to the function Not.
But for θ = π/2 it does something really neat: it is a kind of square root of Not. Let's see it in action:

> rotate :: Float -> Bool -> Q Bool
> rotate theta True = let theta' = theta :+ 0
>                     in cos (theta'/2) .* return True - sin (theta'/2) .* return False
> rotate theta False = let theta' = theta :+ 0
>                      in cos (theta'/2) .* return False + sin (theta'/2) .* return True

> snot = rotate (pi/2)

> repeatM n f = repeat n (>>= f)

> snot1 n = return True ->- repeatM n snot ->- observe

We can test it by running snot1 2 to see that two applications of snot invert the state, just like Not, but that snot1 1 gives you a 50/50 chance of finding True or False. Nothing like this is possible with classical probability theory and it can only happen because complex numbers can 'cancel each other out'. This is what is known as 'destructive interference'. In classical probability theory you only get constructive interference because probabilities are always positive real numbers. (Note that repeatM is just a monadic version of repeat - we could have used it to simplify albert1 above so there's nothing specifically quantum about it.) Now for two more combinators:

> (=>=) :: P a -> (a -> b) -> P b
> g =>= f = fmap f g

> (=><) :: P (Q a) -> (a -> Q b) -> P (Q b)
> g =>< f = fmap (>>= f) g

The first just uses fmap to apply the function. I'm using the = sign as a convention that the function is to be applied not at the top level but one level down within the datastructure. The second is simply a monadic version of the first. The reason we need the latter is that we're going to have systems that have both kinds of uncertainty - classical probabilistic uncertainty as well as quantum uncertainty. We'll also want to use the fact that P is a monad to convert doubly uncertain events to singly uncertain ones. That's what join does:

> join :: P (P a) -> P a
> join = (>>= id)

OK, that's enough ground work. Let's investigate a physical process that can be studied in the lab: the Quantum Zeno effect, otherwise known as the fact that a watched pot never boils. First an example related to snot1:

> zeno1 n = return True ->- repeatM n (rotate (pi/fromInteger n)) ->- collect ->- observe

The idea is that we 'rotate' our system through an angle π/n but we do so in n stages. The fact that we do it in n stages makes no difference; we get the same result as doing it in one go. The slight complication is this: suppose we start with a probabilistic state of type P a. If we let it evolve quantum mechanically it'll turn into something of type P (Q a). On observation we get something of type P (P a). We need join to get a single probability distribution of type P a. The join is nothing mysterious, it just combines the outcome of two successive probabilistic processes into one using the usual laws of probability. But here's a variation on that theme. Now we carry out n stages, but after each one we observe the system, causing wavefunction collapse:

> zeno2 n = return True ->- repeat n (
>     \x -> x =>= return =>< rotate (pi/fromInteger n) =>= observe ->- join
>     ) ->- collect

Notice what happens. In the former case we flipped the polarity of the input. In this case it remains closer to the original state. The higher we make n, the closer it stays to its original state. (Not too high, start with small n. The code suffers from combinatorial explosion.) Here's a paper describing the actual experiment. Who needs all that messing about with sensitive equipment when you have a computer?
:-) A state of the form P (Q a) is called a mixed state. Mixed states can get a bit hairy to deal with as you have this double level of uncertainty. It can get even trickier because you can sometimes observe just part of a quantum system rather than the whole system like observe does. This inevitably leads to mixed states. von Neumann came up with the notion of a density matrix to deal with this, although a P (Q a) works fine too. I also have a hunch there is an elegant way to handle them through an object of type P (Q (Q a)) that will eliminate the whole magnitude squared thing. However, I want to look at the quantum Zeno effect in another way that ultimately allows you to deal with mixed states in another way. Unfortunately I don't have time to explain this elimination today, but we can look at the general approach. In this version I'm going to consider a quantum system that consists of the logical state in the Zeno examples, but also include the state of the observer. Now standard dogma says you can't form quantum states out of observers. In other words, you can't form Q Observer where Observer is the state of the observer. It says you can only form P Observer. Whatever. I'm going to represent an experimenter using a list representing the sequence of measurements they have made. Represent the complete system by a pair of type ([Bool],Bool). The first element of the pair is the experimenter's memory and the second element is the state of the boolean variable being studied. When our experimenter makes a measurement of the boolean variable, its value is simply prepended to his or her memory:

> zeno3 n = return ([],True) ->- repeatM n (
>     \(m,s) -> do
>         s' <- rotate (pi/fromInteger n) s
>         return (s:m,s')
>     ) ->- observe =>= snd ->- collect

Note how we now delay the final observation until the end, when we observe both the experimenter and the poor boolean being experimented on. We want to know the probabilities for the final boolean state, so we apply snd so as to discard the state of the observer's memory. Note how we get the same result as zeno2. (Note no mixed state, just an expanded quantum state that collapses to a classical probabilistic state.) There's an interesting philosophical implication in this. If we model the environment (in this case the experimenter is part of that environment) as part of a quantum system, we don't need all the intermediate wavefunction collapses, just the final one at the end. So are the intermediate collapses real or not? The interaction with the environment is known as decoherence and some hope that wavefunction collapse can be explained away in terms of it. Anyway, time for you to go and do something down-to-earth like gardening. Me, I'm washing the kitchen floor... I must mention an important cheat I made above. When I model the experimenter's memory as a list I'm copying the state of the measured experiment into a list. But you can't simply copy data into a quantum register. One way to see this is that unitary processes are always invertible. Copying data into a register destroys the value that was there before and hence is not invertible. So instead, imagine that we really have an array that starts out zeroed and that each time something is added to the list, the new result is xored into the next slot in the array. The list is just a convenient, non-unitary-looking shorthand for this unitary process.
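If you save the post as a literate Haskell file, a small driver along these lines should exercise the examples above (this is a minimal sketch, assuming all the definitions are in scope and relying on the derived Show instances for output):

> main :: IO ()
> main = do
>     print (albert1 10) -- classical: approaches the steady state
>     print (snot1 1)    -- quantum: a 50/50 mixture after one application of snot
>     print (zeno2 4)    -- watched system: stays close to its initial state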
Anonymous Alex said...

This reminds me of Youssef's theory that Bayesian probabilities should really be complex numbers, and that quantum physics makes more sense if they are. I've never quite got my head round it, though. You can find it via Youssef's web page:

Tuesday, 06 March, 2007

Blogger sigfpe said...

I've met Youssef's stuff before though I've not read it properly. But I do agree very strongly with his opening sentence "If it weren't for the weight of history, it would seem natural to take quantum mechanical phenomena as an indication that something has gone wrong with probability theory and to attempt to explain such phenomena by modifying probability theory itself, rather than by invoking quantum mechanics." Except that it seems to me that quantum mechanics is a modified probability theory. Skimming ahead I did notice that, like me, he eliminates mixed states. I did it more out of computational convenience than anything - but on thinking further I suspect I'm doing the same thing as Youssef. To be honest, I wasn't trying to do anything non-standard. I just wanted to do some textbook standard QM and show that it formally looks just like probability theory, to the point where you can share code between a probability monad and a quantum monad.

Tuesday, 06 March, 2007

Anonymous kebab said...

Quantum mechanics can already be seen as a theory of projection-valued measures (i.e. a theory where probability measures take "non-commuting values"). This point of view has been well established since von Neumann. Answers to questions like "is my spin up" or "is the particle within the Borel set A" are given by a closed subspace (equivalently, by the projection onto this closed space), such that the familiar laws of measure theory hold: $latex P(\bigcup_i A_i) = \sum_i P(A_i)$ for disjoint $latex A_i$, P(never) = the zero projector, P(always) = the identity. Such projection-valued measures naturally yield self-adjoint operators (and conversely, by the Spectral Theorem), which are the "observables" that we know about. Heck, one could even speak of toposes here, with the projections of some fixed Hilbert space as the subobject classifier and Hilbert tensor products as products (objects, i.e. state spaces, are Hilbert spaces), particular subspaces as equalizers, C as an initial object, etc. Projection-valued measures form some sort of monad: a measure is a device which integrates functions, so its type is something like (a -> r) -> r (r fixed, here to a vector space of projectors), and the "monad of measures" is therefore a kind of Cont monad... This should perhaps make sense in Haskell. Thanks again for this blog.

Wednesday, 26 October, 2011
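(A sketch added here, not part of the comment thread: kebab's observation that a measure has the continuation-like type (a -> r) -> r is easy to make concrete. Up to the choice of result type r, it is the Cont monad:

> newtype Measure r a = Measure { integrate :: (a -> r) -> r }
>
> instance Functor (Measure r) where
>   fmap f (Measure m) = Measure (\k -> m (k . f))
>
> instance Applicative (Measure r) where
>   pure a = Measure ($ a)  -- the Dirac measure concentrated at a
>   Measure mf <*> Measure ma = Measure (\k -> mf (\f -> ma (k . f)))
>
> instance Monad (Measure r) where
>   Measure m >>= f = Measure (\k -> m (\a -> integrate (f a) k))

Taking r to be a space of projection operators, as kebab suggests, would make integrate assign a projector-valued weight to each function it integrates.)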
[Content Warnings: Psychedelic Depersonalization, Fear of the Multiverse, Personal Identity Doubts, Discussion about Quantum Consciousness, DMT entities, Science]

The brain is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.

– Emily Dickinson

Is it for real?

A sizable percentage of people who try a high dose of DMT end up convinced that the spaces they visit during the trip exist in some objective sense; they either suspect, intuit or conclude that their psychonautic experience reflects something more than simply the contents of their minds. Most scientists would argue that those experiences are just the result of exotic brain states; the worlds one travels to are bizarre (often useless) simulations made by our brain in a chaotic state. This latter explanation space forgoes alternate realities for the sake of simplicity, whereas the former envisions psychedelics as a multiverse portal technology of some sort. Some exotic states, such as DMT breakthrough experiences, do typically create feelings of glimpsing foundational information about the depth and structure of the universe. Entity contact is frequent, and these seemingly autonomous DMT entities are often reported to have the ability to communicate with you. Achieving verifiable contact with entities from another dimension would revolutionize our conception of the universe. Nothing would be quite as revolutionary, really. But how to do so? One could test the external reality of these entities by asking them to provide information that cannot be obtained unless they themselves held an objective existence. In this spirit, some have proposed to ask these entities complex mathematical questions that would be impossible for a human to solve within the time provided by the trip. This particular test is really cool, but it has the flaw that DMT experiences may themselves trigger computationally useful synesthesia of the sort that Daniel Tammet experiences. Thus even if DMT entities appeared to solve extraordinary mathematical problems, it would still stand to reason that it is oneself who did it and that one is merely projecting the results onto the entities. The mathematical ability would be the result of being lucky in the kind of synesthesia DMT triggered in you. A common overarching description of the effects of psychedelics is that they "raise the frequency of one's consciousness." Now, this is a description we should take seriously whether or not we believe that psychedelics are inter-dimensional portals. After all, promising models of psychedelic action involve fast-paced control interruption, where each psychedelic would have its characteristic control-interrupt frequency. And within a quantum paradigm, Stuart Hameroff has argued that psychedelic compounds work by raising the quantum resonance frequency of the water inside our neurons' microtubules (perhaps going from megahertz to gigahertz), which he claims increases the non-locality of our consciousness. In the context of psychedelics as inter-dimensional portals, this increase in the main frequency of one's consciousness may be the key that allows us to interact with other realities. Users describe a sort of tuning of one's consciousness, as if the interface between one's self and the universe underwent some sudden re-adjustment in an upward direction. In the same vein, psychedelicists (e.g.
Rick Strassman) frequently describe the brain as a two-way radio, and then go on to claim that psychedelics expand the range of channels we can be attuned to. One could postulate that the interface between oneself and the universe that psychonauts describe has a real existence of its own. It would provide the bridge between us as (quantum) monads and the universe around us; and the particular structure of this interface would determine the selection pressures responsible for the part of the multiverse that we interact with. By modifying the spectral properties of this interface (e.g. by drastically raising the main frequency of its vibration with, say, DMT), one effectively "relocates" (cf. alien travel) to other areas of reality. Assuming this interface exists and that it works by tuning into particular realities, what sorts of questions can we ask about its properties? What experiments could we conduct to verify its existence? And what applications might it have?

The Psychedelic State of Input Superposition

Once in a while I learn about a psychedelic effect that captures my attention precisely because it points to simple experiments that could distinguish between the two rough explanation spaces discussed above (i.e. "it's all in your head" vs. "real inter-dimensional travel"). This article will discuss a very odd phenomenon whose interpretations do indeed have different empirical predictions. We are talking about the experience of sensing what appears to be a superposition of inputs from multiple adjacent realities. We will call this effect the Psychedelic State of Input Superposition (PSIS for short). There is no known way to induce PSIS on purpose. Unlike the reliable DMT hyper-dimensional journeys to distant dimensions, PSIS is a rare closer-to-home effect and it manifests only on high doses of LSD (and maybe other psychedelics). Rather than feeling like one is tuning into another dimension in the higher frequency spectrum, it feels as if one just accidentally altered (perhaps even broke) the interface between the self and the universe in a way that multiplies the number of realities you are interacting with. After the event, the interface seems to tune into multiple similar universes at once; one sees multiple possibilities unfold simultaneously. After a while, one somehow "collapses" into only one of these realities, and while coming down, one is thankful to have settled somewhere specific rather than remaining in that weird in-between. Let's take a look at a couple of trip reports that feature this effect:

[Trip report of taking a high dose of LSD on an airplane]: So I had what you call "sonder", a moment of clarity where I realized that I wasn't the center of the universe, that everyone is just as important as me, everyone has loved ones, stories of lost love etc, they're the main character in their own movies. That's when shit went quantum. All these stories begun sinking in to me. It was as if I was beginning to experience their stories simultaneously. And not just their stories, I began seeing the story of everyone I had ever met in my entire life flash before my eyes. And in this quantum experience, there was a voice that said something about Karma. The voice told me that the plane will crash and that I will be reborn again until the quota of my Karma is at -+0. So, for every ill deed I have done, I would have an ill deed committed to me. For every cheap T-shirt I purchased in my previous life, I would live the life of the poor Asian sweatshop worker sewing that T-shirt.
For every hooker I fucked, I would live the life of a fucked hooker. And it was as if thousands of versions of me was experiencing this moment. It is hard to explain, but in every situation where something could happen, both things happened and I experienced both timelines simultaneously. As I opened my eyes, I noticed how smoke was coming out of the top cabins in the plane. Luggage was falling out. I experienced the airplane crashing a thousand times, and I died and accepted death a thousand times, apologizing to the Karma God for my sins. There was a flash of the brightest white light imagineable and the thousand realities in which I died began fading off. Remaining was only one reality in which the crash didn’t happen. Where I was still sitting in the plane. I could still see the smoke coming out of the plane and as a air stewardess came walking by I asked her if everything was alright. She said “Yes, is everything alright with YOU?”. — Reddit user I_DID_LSD_ON_A_PLANE, in r/BitcoinMarkets (why there? who knows). Further down on the same thread, written by someone else: [A couple hours after taking two strong hits of LSD]: Fast-forward to when I’m peaking hours later and I find myself removed from the timeline I’m in and am watching alternate timelines branch off every time someone does something specific. I see all of these parallel universes being created in real time, people’s actions or interactions marking a split where both realities exist. Dozens of timelines, at least, all happening at once. It was fucking wild to witness. Then I realize that I don’t remember which timeline I originally came out of and I start to worry a bit. I start focusing, trying to remember where I stepped out of my particular universe, but I couldn’t figure it out. So, with the knowledge that I was probably wrong, I just picked one to go back into and stuck with it. It’s not like I would know what changed anyway, and I wasn’t going to just hang out here in the whatever-this-place-is outside of all of them. Today I still sometimes feel like I left a life behind and jumped into a new timeline. I like it, I feel like I left a lot of baggage behind and there are a lot of regrets and insecurities I had before that trip that I don’t have anymore. It was in a different life, a different reality, so in this case the answer I found was that it’s okay to start over when you’re not happy with where you are in life. — GatorAutomator Let us summarize: Person X takes a lot of LSD. At some point during the trip (usually after feeling that “this trip is way too intense for me now”) X starts experiencing sensory input from what appear to be different branches of the multiverse. For example, imagine that person X can see a friend Y sitting on a couch in the corner. Suppose that Y is indecisive, and that as a result he makes different choices in different branches of the multiverse. If Y is deciding whether to stand up or not, X will suddenly see a shadowy figure of Y standing up while another shadowy figure of Y remains sitting. Let’s call them Y-sitting and Y-standing. If Y-standing then turns indecisive about whether to drink some water or go to the bathroom, X may see one shadowy figure of Y-standing getting water and a shadowy figure of Y-standing walking towards the bathroom, all the while Y-sitting is still on the couch. And so it goes. 
The number of times per second that Y splits and the duration of the perceived superposition of these splits may be a function of X's state of consciousness, the substance and dose consumed, and the degree of indecision present in Y's mind. The two quotes provided are examples of this effect, and one can find a number of additional reports online with stark similarities. There are two issues at hand here. First, what is going on? And second, can we test it? We will discuss three hypotheses to explain what goes on during PSIS, propose an experiment to test the third one (the Quantum Hypothesis), and provide the results of such an experiment. Hard-nosed scientists may want to skip to the "Experiment" section, since the following contains a fair amount of speculation (you have been warned).

Three Hypotheses for PSIS: Cognitive, Spiritual, Quantum

In order to arrive at an accurate model of the world, one needs to take into account both the prior probability of each hypothesis and the likelihood it assigns to the available evidence. Even if one of your priors is extremely strong (e.g. a strong belief in materialism), it is still rational to update one's probability estimates of alternative hypotheses when new relevant evidence is provided. The difficulty often comes from finding experiments where the various hypotheses generate very different likelihoods for one's observations. As we will see, the Quantum Hypothesis has this characteristic: it is the only one that would actually predict a positive result for the experiment.

The Cognitive Hypothesis

The first (and perhaps least surreal) hypothesis is that PSIS is "only in one's mind". When person X sees person Y both standing up and staying put, what may be happening is that X is receiving photons only from Y-standing and that Y-sitting is just a hallucination that X's inner simulation of her environment failed to erase. Psychedelics intensify one's experience, and this is thought to be the result of control interruption. This means that inhibition of mental content by cortical feedback is attenuated. In the psychedelic state, sensory impressions, automatic reactions, feelings, thoughts and all other mental contents are more intense and longer-lived. This includes the predictions that you make about how your environment will evolve. Not only is one's sensory input perceived as more intense, one's imagined hypotheticals are also perceived more intensely. Under normal circumstances, cortical inhibition makes our failed predictions quickly disappear. Psychedelic states of consciousness may be poor at inhibiting these predictions. In this account, X may be experiencing her brain's past predictions of what Y could have done overlaid on top of the current input that she is receiving from her physical environment. In a sense, she may be experiencing all of the possible "next steps" that she simply intuited. While these simulations typically remain below the threshold of awareness (or just above it), in a psychedelic state they may reinforce themselves in unpredictable ways. X's mind never traveled anywhere and there is nothing really weird going on. X is just experiencing the aftermath of a specific failure of information processing concerning the inhibition of past predictions. Alternatively, the very intense emotions experienced during ego-dissolving psychedelic experiences may distort one's perception so much that one begins to suspect that one is perhaps dead or in another dimension.
We can posit that the belief that one is not properly connected to one's brain (or that one is dying) can trigger even stronger emotions and unleash a cascade of further distortions. This positive feedback loop may create episodes of intense confusion and overlapping pieces of information, which later might be interpreted as "seeing splitting universes".

The Spiritual Hypothesis

Many spiritual traditions postulate the existence of alternate dimensions, additional layers of reality, and hidden spirit pathways that connect all of reality. These traditions often provide rough maps of these realities and may claim that some people are able to travel to such far-out regions with mental training and consciousness technologies. For illustration, let's consider Buddhist cosmology, which describes 31 planes of existence. Interestingly, one of the core ideas of this cosmology is that the major characteristic that distinguishes the planes of existence is the states of consciousness typical of their inhabitants. These states of consciousness are correlated with moral conditions such as the ethical quality of their past deeds (karma), their relationship with desire (e.g. whether it is compulsive, sustainable or indifferent) and their existential beliefs. In turn, a feature of this cosmology is that it allows inter-dimensional travel by changing one's state of consciousness. The part of the universe one interacts with is a function of one's karma, affinities and beliefs. So by changing these variables with meditation (or psychedelic medicine) one can also change which world we exist in. An example of a very interesting location worth trying to travel to is the mythical city of Shambhala, the location of the Kalachakra Tantra. This city has allegedly turned into a pure land thanks to the fact that its king converted to Buddhism after meeting the Buddha. Pure lands are abodes populated by enlightened and quasi-enlightened beings whose purpose is to provide an optimal teaching environment for Buddhism. One can go to Shambhala either by reincarnating there (with good karma and the help of some pointers and directions at the time of death) or by traveling there directly during meditation. In order to do the latter, one needs to kindle one's subtle energies so that they converge on one's heart, while one is embracing the Bodhisattva ethic (focusing on reducing others' suffering as a moral imperative). Shambhala may not be in a physical location accessible to humans. Rather, Buddhist accounts would seem to depict it as a collective reality built by people which manifests on another plane of existence (specifically somewhere between the 23rd and 27th layer). In order to create a place like that one needs to bring together many individuals in a state of consciousness that exhibits bliss, enlightenment and benevolence. A pure land has no reality of its own; its existence is the result of the states of consciousness of its inhabitants. Thus, the very reason why Shambhala can even exist as a place somewhere outside of us is because it is already a potential place that exists within us. Similar accounts of a wider cosmological reality can be found elsewhere (such as Hinduism, Zoroastrianism, Theosophy, etc.). These accounts may be consistent with the sort of experiences having to do with astral travel and entity contact that people have while on DMT and other psychedelics in high doses. However, it seems a lot harder to explain PSIS with an ontology of this sort.
While reality is indeed portrayed as immensely vaster than what science has shown so far, we do not really encounter claims of parallel realities that are identical to ours except that your friend decided to go to the bathroom rather than drink some water just now. In other words, while many spiritual ontologies are capable of accommodating DMT hyper-dimensional travel, I am not aware of any spiritual worldview that also claims that whenever two things can happen, they both do in alternate realities (or, more specifically, that this leads to reality splitting). The only spiritual-sounding interpretation of PSIS I can think of is the idea that these experiences are the result of high-level entities such as guardians, angels or trickster djinns who used your LSD state to teach you a lesson in an unconventional way. The first quote (the one written by Reddit user I_DID_LSD_ON_A_PLANE) seems to point in this direction, where the so-called Karma God is apparently inducing a PSIS experience and using it to illustrate the idea that we are all one (i.e. Open Individualism). Furthermore, the experience viscerally portrays the way that this knowledge should impact our feelings of self-importance (by creating a profound feeling of sonder). This way, the tripper may develop a lasting need to work towards peace, wisdom and enlightenment for the benefit of all sentient beings. Life as a learning experience is a common trope among spiritual worldviews. It is likely that the spiritual interpretations that emerge in a state of psychedelic depersonalization and derealization will depend on one's pre-existing ideas of what is possible. The atonement of one's sins, becoming aware of one's karma, feeling our past lives, realizing emptiness, hearing a dire mystical warning, etc. are all ideas that already exist in human culture. In an attempt to make sense - any sense - of the kind of qualia experienced on high doses of psychedelics, our minds may be forced to instantiate grandiose delusions drawn from one's reservoir of far-out ideas. During a super intense psychedelic experience in which one's self-models fail dramatically and one experiences fear of ego dissolution, interpreting what is happening as the result of the Karma God judging you and then giving you another chance at life can viscerally seem to make a lot of sense at the time.

The Quantum Hypothesis

For the sake of transparency I must say that we currently do not have a derivation of PSIS from first principles. In other words, we have not yet found a way to use the postulates of quantum mechanics to account for PSIS (that is, assuming that the cognitive and spiritual hypotheses are not the case). That said, while a theory is missing, we can at least talk about what a quantum mechanical account of PSIS would have to look like - i.e. we can make sense of some of the features that the theory would need to have in order to predict that people on LSD would be able to see the superposition of macroscopic branches of the multiverse. Why would being on acid allow you to receive input from macroscopic environments that have already decohered? How could taking LSD possibly prevent the so-called collapse of the wavefunction? You might think: "well, why even think about it?
It’s simply impossible because the collapse of the wavefunction is an axiom of quantum mechanics and we know it is true because some of the predictions made by quantum mechanics (such as QED) are in agreement with experimental data up to the 12th decimal point.” Before jumping to this conclusion, though, let us remember that there are several formulations of quantum mechanics. Both the Born rule (which determines the probability of seeing different outcomes from a given quantum measurement) and the collapse of the wavefunction (i.e. that any quantum state other than the one that was measured disappears) are indeed axiomatic for some formulations. But other formulations actually derive these features and don’t consider them fundamental. Here is Sean Carroll explaining the usual postulates that are used to teach quantum mechanics to undergraduate audiences: The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this: 1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space. 2. Wave functions evolve in time according to the Schrödinger equation. 3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured. 4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue. 5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities). In contrast, here is what you need to specify for the Everett (Multiple Worlds) formulation of quantum mechanics: And that’s it. As you can see this formulation does not employ any collapse of the wavefunction, and neither does it consider the Born rule as a fundamental law. Instead, the wavefunction is thought to merely seem to collapse upon measurement (which is achieved by nearly diagonalizing its components along the basis of the measurement; strictly speaking, neighboring branches never truly stop interacting, but the relevance of their interaction approaches zero very quickly). Here the Born rule is derived from first principles rather than conceived as an axiom. How exactly one can derive the Born rule is a matter of controversy, however. Currently, two very promising theoretical approaches to do so are Quantum Darwinism and the so-called Epistemic Separability Principle (ESP for short, a technical physics term not to be confused with Extra Sensory Perception). Although these approaches to deriving the Born rule are considered serious contenders for a final explanation (and they are not mutually exclusive), they have been criticized for being somewhat circular. The physics community is far from having a consensus on whether these approaches truly succeed. Is there any alternative to either axiomatizing or deriving the apparent collapse and the Born rule? Yes, there is an alternative: we can think of them as regularities contingent upon certain conditions that are always (or almost always) met in our sphere of experience, but that are not a universal fact about quantum mechanics. Macroscopic decoherence and Born rule probability assignments work very well in our everyday lives, but they may not hold universally. 
In particular - and this is a natural idea to have under any view that links consciousness and quantum mechanics - one could postulate that one's state of consciousness influences the mind-body interaction in such a way that information from one's quantum environment seeps into one's mind differently. Don't get me wrong; I am aware that the Born rule has been experimentally verified with extreme precision. I only ask that you bear in mind that many scientific breakthroughs share a simple form: they question the constancy of certain physical properties. For example, Einstein's theory of special relativity worked out the implications of the fact that the speed of light is observer-independent. In turn this makes the passage of time of external systems observer-dependent. Scientists had a hard time believing Einstein when he arrived at the conclusion that accelerating our frame of reference to extremely high velocities could dilate time. What was thought to be a constant (the passage of time throughout the universe) turned out to be an artifact of the fact that we rarely travel fast enough to notice any deviation from Newton's laws of motion. In other words, our previous understanding was flawed because it assumed that certain observations did not break down in extreme conditions. Likewise, maybe we have been accidentally ignoring a whole set of physically relevant extreme conditions: altered states of consciousness. The apparent wavefunction collapse and the Born rule may be perfectly constant in our everyday frame of reference, and yet variable across the state-space of possible conscious experiences. If this were the case, we'd finally understand why it seems so hard to derive the Born rule from first principles: it's impossible. Succinctly, the Quantum Hypothesis is that psychedelic experiences modify the way one's mind interacts with its quantum environment in such a way that the world does not appear to decohere any longer from one's point of view. Our ignorance about the non-universality of the apparent collapse of the wavefunction is just a side effect of the fact that physicists do not usually perform experiments during intense life-changing entheogenic mind journeys. But for science, today we will.

Deriving PSIS with Quantum Mechanics

Here we present a rough (incomplete) sketch of what a possible derivation of PSIS from quantum mechanics might look like. To do so we need three background assumptions. First, conscious experiences must be macroscopic quantum coherent objects (i.e. ontologically unitary subsets of the universal wavefunction, akin to superfluid helium or Bose–Einstein condensates, except at room temperature). Second, people's decision-making process must somehow amplify low-level quantum randomness into macroscopic history bifurcations. And third, the properties of our quantum environment* are in part the result of the quantum state of our mind, which psychedelics can help modify. This third assumption brings into play the idea that if our mind is more coherent (e.g. is in a super-symmetrical state) it will select for wavefunctions in its environment that themselves are more coherent. In turn, the apparent lifespan of superpositions may be extended long enough that the quantum environment of one's mind receives records from both Y-sitting and Y-standing while they overlap. Now, how credible are these three assumptions?
That events of experience are macroscopic quantum coherent objects is an explanation space usually perceived as pseudo-scientific, though a sizable number of extremely bright scientists and philosophers do entertain the idea very seriously. Contrary to popular belief, there are legitimate reasons to connect quantum computing and consciousness. The reasons for making this connection include the possibility of explaining the causal efficacy of consciousness, finding an answer to the palette problem with quantum fields, and solving the phenomenal binding problem with quantum coherence and panpsychism. The second assumption claims that people around you work as quantum Random Number Generators. That human decision-making amplifies low-level quantum randomness is thought to be likely by at least some scientists, though the time-scale on which this happens is still up for debate. The brain's decision-making is chaotic, and over the span of seconds it may amplify quantum fluctuations into macroscopic differences. Thus, people around you making decisions may result in splitting universes (e.g. "[I] am watching alternate timelines branch off every time someone does something specific." – GatorAutomator's quote above). Presumably, this assumption would also imply that during PSIS not only people but also physics experiments would lead to apparent macroscopic superposition. With regard to the third assumption: widespread microscopic decoherence is not, apparently, a necessary consequence of the postulates of quantum mechanics. Rather, it is a very specific outcome of (a) our universe's Hamiltonian and (b) the starting conditions of our universe, i.e. Pre-Inflation/Eternal Inflation/Big Bang (Ney & Albert, 2013). In principle, psychedelics may influence the part of the Hamiltonian that matters for the evolution of our mind's wavefunction and its local interactions. In turn, this may modify the decoherence patterns of our consciousness with its local environment and - perhaps - ultimately the surrounding macroscopic world. Of course we do not know if this is possible, and I would have to agree that it is extremely far-fetched. The overall picture that would emerge from these three assumptions would take the following form: both the mental content and the raw phenomenal character of our states of consciousness are the result of the quantum micro-structure of our brains. By modifying this micro-structure, one is not only altering the selection pressures that give rise to fully formed experiences (i.e. Quantum Darwinism applied to the compositionality of quantum fields) but also altering the selection pressures that determine which parts of the universal wavefunction we are entangled with (i.e. Quantum Darwinism applied to the interactions between coherent objects). Thus psychedelics may not only influence how our experience is shaped within, but also how it interacts with the quantum environment that surrounds it. Some mild psychedelic states (e.g. MDMA) may influence mostly the inner degrees of freedom of one's mind, while other more intense states (e.g. DMT) may be the result of severe changes to the entanglement selection pressures and thus result in the apparent disconnection between one's mind and one's local environment. Here PSIS would be the result of decreasing the rate at which our mind decoheres (possibly by increasing the degree to which our mind is in a state of quantum confinement).
In turn, by boosting one’s own inner degree of quantum superposition one may also broaden the degree of superposition acceptable at the interface with one’s quantum environment. One could now readily take in packets of information that have a wider degree of superposition. In the right circumstances, this may result in one’s mind experiencing information seemingly coming from alternate branches of the multiverse. In other words, the trick to PSIS both in the Quantum and the Spiritual Hypothesis is the same (though for different reasons): travel to other dimensions by being the change that you wish to see in the world. You need to increase your own degree of quantum coherence so that you become able of interacting with a more coherent quantum environment. If this were the case it would call for a conceptual revolution. We would stop thinking of “our universe” as the “place we inhabit” and instead think of it as “the quantum environment our consciousness can interface with”. Perhaps life as we know it exists in a quantum niche that can give rise to embodied biological beings, but the universal wavefunction also contains alien state-spaces in which the patterns of entanglement work in different ways and lead to different evolutionary histories. Our minds evolved to interact with the particular environment that leads to the sustenance of our everyday macroscopic world. Our consciousness is shaped to fit in here, and it has a symbiotic relationship with the body that hosts it. Now, how do we go about testing the quantum hypothesis? Following previous work on psychophysics for psychedelic research, the main idea will be to show that people on psychedelics can achieve something that sober individuals can’t. In this case, the task is designed to be impossible if one is only receiving input from one world (as predicted by the cognitive hypothesis). Only if PSIS is the result of simultaneously experiencing input from the several nearby branches that correspond to different outcomes of quantum events can the task be successfully performed. The Experiment The task in question involves three white little squares on a black background. At each time step (~16 milliseconds) the squares move one pixel down, and some number of pixels horizontally. While the downward movement is constant, the change in the horizontal direction is chosen randomly. The horizontal movement of one of the squares is being randomized on the fly with a Quantum Random Number Generator (qRNG**). We call this the quantum square. The horizontal movement of the other two squares follows a path that was determined at the beginning of each trial by a Classical Random Number Generator (cRNG***). We will refer to these two squares as the classical squares. The task for the participant is to identify which of the squares is the “odd one out” (i.e. the quantum square). Here is an infographic that explains the experimental setup: In order to make the horizontal movement seem very chaotic and, at times even dramatic, it is randomized by choosing a horizontal change of either -4, -1, 1 or 4 pixels. By using a jump of 4 pixels (rather than e.g. 2) the squares sometimes seem to advance very quickly in one direction. The rationale for doing this was to make the predicted effect (assuming the Quantum Hypothesis) more noticeable – we want low probability paths that are extremely different than the typical path and in turn make it easy to distinguish between single paths and the superposition of many of the possible paths at once. 
Critically, the starting position of the quantum square (whether left, center or right) for each of the trials during an experiment is obtained with a cRNG at the very beginning of the experiment. Why not use a qRNG? If we randomized the position of the quantum square using a qRNG we might be sabotaging ourselves: participants might see a superposition of classical and quantum squares in each of the three positions and thus not see any square behave differently from the other two. Thus, for the experiment to work (under the assumption that PSIS is explained by the Quantum Hypothesis) the only aspect that should be randomized using a qRNG is the horizontal displacement of the quantum square. Since the numbers coming from the qRNG need to be requested over the internet, it is also crucial that we try to obtain fresh quantum random numbers as frequently as possible without messing up the flow of the experiment. Thus, a request for new quantum random numbers is made as often as possible (usually every 1 to 2 seconds). Since internet connections, server response times and network reliability vary from moment to moment, it is inevitable that once in a while we will have to stop the downward movement of all three squares as we wait for the next batch of random numbers. In principle this should have no negative effect at all on the ability to choose the odd-one-out. This problem merely interrupts the flow of the experiment, but it should not invalidate the results. We kept track of the number of trials that were interrupted for this reason (which you will find recorded in the "Network" column of the table of results down below). Participants were instructed to:

1. Clone the Github code for the experiment, and open it up in a Chrome browser.
2. Make sure that they have a reliable high-speed internet connection.
3. Record the pertinent facts (time of experiment, substance, dose and time of intake).
4. Use a sensorially-minimized room for the experiment (such as a closed bathroom), turn off the lights, ask anyone who is in the house to be quiet and mind their own business, and wear earplugs.
5. Meditate for a couple of minutes before conducting the experiment and try to induce PSIS voluntarily.
6. Choose the odd-one-out (i.e. the quantum square) on each trial, for a total of at least 30 trials.

Under this setup, what should the participants expect to see if the Quantum Hypothesis is correct?

Predicted Psychedelic Perception

The pictures above illustrate how the experiment looks to three different observers. On the left we see what a sober participant would see at a given time during a trial. The squares are moving downwards, but since we are taking a snapshot in time they seem to be static. The image in the center shows what we would expect out of a generic psychedelic experience. In brief, the typical control-interrupt action of psychedelics (i.e. tracers) effectively allows you to see more clearly the path that the squares have traced over time, but it would not allow you to notice any difference between the classical and quantum squares. The image on the right shows what we would expect to see if the Quantum Hypothesis is correct and PSIS is actually about perceiving nearby branches of the Everett multiverse. Notice how the center square is visibly different from the other two: it consists of the superposition of many alternative paths the square took in slightly different branches.
Implications of a Positive Result: Quantum Mind, Everett Rescue Missions and Psychedelic Cryptography

It is worth noting that if one can indeed reliably distinguish between the quantum and the classical squares, then this would have far-reaching implications. It would indeed confirm that our minds are macroscopic quantum coherent objects and that psychedelics influence their pattern of interactions with their surrounding quantum environment. It would also provide strong evidence in favor of the Everett interpretation of quantum mechanics (in which all possibilities are realized). Moreover, we would not only have a new perspective on the fundamental nature of the universe and the mind, but the discovery would also suggest some concrete applications. Looking far ahead, a positive result would encourage research on possible ways to achieve inter-dimensional travel, and in turn on instantiating pan-Everettian rescue missions to reduce suffering elsewhere in the multiverse. The despair of confirming that the quantum multiverse is real might be evened out by the hope of finally being able to help sentient beings trapped in Darwinian environments in other branches of the universal wavefunction. Looking much closer to home, a positive result would lead to a breakthrough in psychedelic cryptography (PsyCrypto for short), where spies high on LSD would obtain the ability to read information that is secretly encoded in public light displays. Moreover, this particular kind of PsyCrypto would be impervious to discovery after the fact. Even if given an arbitrary amount of time and resources to analyze a video recording of the event, it would not be possible to determine which of the squares was being guided by quantum randomness. Unlike other PsyCrypto techniques, this one cannot be decoded by applying psychedelic replication software to video recordings of the transmission.

Three persons participated in the experiments: S (self), A, and B. [A and B are anonymous volunteers; for more information read the legal disclaimer at the end of this article]. Participant S (me) tried the experiment both sober and after drinking 2 beers. Participant A tried the experiment sober, on LSD, on 2C-B, and on a combination of the two. And participant B tried the experiment both sober and on DMT. The total number of trials recorded for each of the conditions is: 90 for the sober state, 275 for 2C-B, 60 for DMT, 120 for LSD and 130 for the LSD/2C-B combo. The overall summary of the results is: chance-level performance for all conditions. You can find the breakdown of results for all experiments in the table shown below, and you can download the raw csv file from the Github repository. Columns from left to right: Date, State (of consciousness), Dose(s), T (time), #Trials (number of trials), Correct (number of trials in which the participant made the correct choice), Percent correct (100*Correct/Trials), Participants (S=Self, A/B=anonymous volunteers), Requests / Second (server requests per second), Network (the number of times that a trial was temporarily paused while the browser was waiting for the next batch of quantum random numbers), Notes (by default the squares left a dim trail behind them, and this was removed in two trials; by default the squares were 10×10 pixels in size, but a smaller size was used in some trials). I thought about visualizing the results in a cool graph at first, but after I received them I realized that it would be pointless.
Not a single experiment reached a statistically significant deviation from chance level; who is interested in seeing a bunch of bars representing chance-level outcomes? Null results are always boring to visualize.**** In addition to the overall performance in the task, I also wanted to hear the following qualitative assessment from the participants: did they notice any difference between the three squares? Was there any feeling that one of them was behaving differently from the other two? This is what they responded when I asked them: "I could never see any difference between the squares, so it felt like I was making random choices" (from A) and "DMT made the screen look like a hyper-dimensional tunnel and I felt like strange entities were watching over me as I was doing the experiment, and even though the color of the squares would fluctuate randomly, I never noticed a single square behaving differently than the other two. All three seemed unique. I did feel that the squares were being controlled by some entity, as if with an agency of their own, but I figured that was made up by my mind." When I asked them if they noticed anything similar to the image labeled Psychedelic view as predicted by the Quantum Hypothesis (as shown above) they both said "no". It is noteworthy that neither participant reported an experience of PSIS during the experiments. Even without an explicit and noticeable input superposition, PSIS may turn out to be a continuum rather than a discrete either-or phenomenon. If so, we might still expect to see some deviations from chance. This may be analogous to how in blindsight people report not being able to see anything and yet perform better than chance in visual recognition tasks. That said, the effect sizes of blindsight and other psychological effects in which information is processed unbeknownst to the participant tend to be very small. Thus, in order to confirm that quantum PSIS is happening below the threshold of awareness we may require a much larger number of samples (though still a lot smaller than what we would need if we were aiming to use the experiment to conduct Psi research with or without psychedelics, again due to the extremely small effect sizes). Why did the experiment fail? The first possibility is that the Quantum Hypothesis is simply wrong (possibly because it requires false assumptions to work). Second, perhaps we were simply unlucky in that PSIS was not triggered during the experiments; perhaps the set, setting, and dosages used simply failed to produce the desired effect (even if the state does indeed exist out there). And third, the experiment itself may be flawed: the roughly second-long delay between the quantum measurement at the qRNG server and its effect on the screen may be too large to produce the effect. In the current implementation (and taking into account network delays), the average delay between the moment the quantum measurement was conducted and the moment it appeared on the computer screen as horizontal movement was .9 seconds (usually in the range of .4 to 1.4 seconds, given an average of 1/2 second lag due to the number buffering and 400 milliseconds in network time). This problem would be easily sidestepped if we used an on-site qRNG obtained from hardware directly connected to the computer (as is common in psi research). To minimize the delay even further, the outcomes of the quantum measurements could be delivered directly to your brain via neuroimplants.
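As a quick check on the significance thresholds quoted in the footnotes, the one-tailed binomial tail can be computed in a few lines (a sketch added here for reference; it assumes, as the footnotes state, blocks of 30 trials with chance level 1/3):

-- P(X >= k) for X ~ Binomial(n, 1/3): the probability of getting at least
-- k correct answers by guessing among three squares.
choose :: Integer -> Integer -> Integer
choose n k = product [n-k+1..n] `div` product [1..k]

pAtLeast :: Integer -> Integer -> Double
pAtLeast n k = sum [ fromInteger (choose n j) * (1/3)^j * (2/3)^(n-j) | j <- [k..n] ]

-- pAtLeast 30 15 ≈ 0.04   (just under the p < 0.05 threshold)
-- pAtLeast 30 19 ≈ 0.0005 (just under the p < 0.001 threshold)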
If psychedelic experiences do make you interact with other realities, I would like to know about it with a high degree of certainty. The present study was admittedly a very long shot. But in my judgement, it was totally worth it. As Bayesians, we reasoned that since the Quantum Hypothesis can lead to a positive result for the experiment but the Cognitive Hypothesis can't, a positive result should make us update our probability of the Quantum Hypothesis a great deal. A negative result should make us update our probabilities in the opposite direction. That said, the probability should still not go to zero, since the negative result could still be accounted for by the fact that participants failed to experience PSIS, and/or by the delay between the quantum measurement and the moment it influences the movement of the square on the screen being too large. Future studies should try to minimize these two possible sources of failure: first, by researching methods to reliably induce PSIS, and second, by minimizing the delay between branching and sensory input. In the meantime, we can at least tentatively conclude that something along the lines of the Cognitive Hypothesis is the most likely case. In this light, PSIS turns out to be the result of a failure to inhibit predictions. Despite losing their status as suspected inter-dimensional portal technology, psychedelics still remain a crucial tool for qualia research. They can help us map out the state-space of possible experiences, allow us to identify the computational properties of consciousness, and maybe even allow us to reverse engineer the fundamental nature of valence.

[Legal Disclaimer]: Both participants A and B contacted me some time ago, soon after the Qualia Computing article How to Secretly Communicate with People on LSD made it to the front page of Hacker News and was linked by SlateStarCodex. They are both experienced users of psychedelics who take them about once a month. They expressed their interest in performing the psychophysics experiments I designed, and in doing so while under the influence of psychedelic drugs. I do not know these individuals personally (nor do I know their real names, locations or even their genders). I have never encouraged these individuals to take psychedelic substances and I never gave them any compensation for their participation in the experiment. They told me that they take psychedelics regularly no matter what, and that my experiments would not be the primary reason for taking them. I never asked them to take any particular substance, either. They just said "I will take substance X on day Y, can I have some experiment for that?" I have no way of knowing (1) if the substances they claim they take are actually what they think they are, (2) whether the dosages are accurately measured, and (3) whether the data they provided is accurate and isn't manipulated. That said, they did explain that they have tested their materials with chemical reagents, and are experienced enough to tell the difference between similar substances. Since there is no way to verify these claims without compromising their anonymity, please take the data with a grain of salt.

* In this case, the immediate environment would actually refer to the quantum degrees of freedom surrounding our consciousness within our brain, not the macroscopic exterior vicinity such as the chair we are sitting on or the friends we are hanging out with. In this picture, our interaction with that vicinity is actually mediated by many layers of indirection.
** The experiment used the Australian National University Quantum Random Numbers Server. By calling their API every 1 to 2 seconds we obtain truly random numbers that feed the x-displacement of the quantum square. This is an inexpensive and readily-available way to magnify decoherence events into macroscopic splitting histories in the comfort of your own home.

*** In this case, Javascript's Math.random() function. Unfortunately the RNG algorithm varies from browser to browser. It may be worthwhile to go for a browser-independent implementation in the future to guarantee a uniformly high-quality source of classical randomness.

**** As calculated with a one-tailed binomial test with null probability equal to 1/3. The threshold of statistical significance at the p < 0.05 level is found at 15/30, and for p < 0.001 we need at least 19/30 correct responses. The best score that any participant managed to obtain was 14/30.

Bjørn · July 13, 2017

What if you're unable to collect statistically significant results because the participants themselves branch off into their own universes where they by themselves DO get significant results, but where you never get to meet these versions of them? 🙂

joel (@Joelisawake) · November 15, 2016

Really cool. I've only found this page because I was researching quantum measurements and a connection to the state you describe as PSIS. I had a feeling that they were connected, being a computer programmer and just now starting to have an understanding of how quantum computing works etc. Then connecting my own experiences of this: I've seen this state several times. I feel as if I'm very susceptible to it. Without sounding uneducated, I feel like these are the dots which I was trying to connect. Somehow the human brain ends up utilizing a superposition and boom. Keep up the good work. I wish I could help, but I be only a lowly coder who has fun on the weekends. if you have any questions.

Kory Noble · November 1, 2016

Greetings and salutations! I'm an independent researcher in the LSD experience and have attempted and succeeded in producing a holographic display system utilising much of the methodology you described above. I work with the following methodology: Holographic Epistemology: Native Common Sense, Manulani Aluli Meyer, Te Wānanga o Aotearoa (Māori University of New Zealand), New Zealand. As well as my own discovery via Shamanism that I've consolidated into a method of creating a singularity called Holographic Shamanism.
The Spirit of the method is based upon the projection of thought into literal holographic manifestations through the ego being displayed a platform called a Holographic Memory Capture device – or "Acid Glass Novelty tables" – I'm still working on the name. Essentially, it's a way to display the syntax-enabled iridology perspective of the human psyche, and I'm not sure if this is some government project that I happened to 'bump into' or some advanced alien spacecraft navigation system, but it worked to heal me of PTSD, depression, bi-polar disorder, tobacco cessation, and enabled me to have active engagement with The Logos, and met someone possessed by The Spirit of God. I'm a Veteran who spent 3 years in Iraq, and I'm fully disabled by a rare neurological disease called Behcet's Disease (Silk Road Virus). I'm currently looking for interested parties in researching the capacity of this device, as so far the applications are endless… Literally, I've used the device in a room where people didn't speak the same language, and after the use of LSD and 30 minutes, instantly, the full concept of memory through glass layered manifestations of tachyon energy fields was witnessed by everyone in the room. There are so many ways to 'fear' this type of technology, but I assure you, that it was more harmful for me to assume that I could make a profit off of this 'invention' when in all reality – I could be in serious trouble with this 'device.' I read that the Air Force put a lot of money into a Holographic Projection system in the 90's, developed to project an image of allah over the city – but who actually knows what 'allah' looks like? eh – anyways, if this really is the 'cure' for what we consider post traumatic stress disorder – how much do you think that'd be worth? I mean, it made me almost lose my mind considering the possibilities – however, thinking in lame man's terms, sharing the idea is the most important part… an interest?

algekalipso · November 2, 2016

Hello Kory! Thanks for sharing. Is the video you linked to a video of the device? I must admit that I am confused about what you are trying to say, but given the strong positive effects that you describe I'm interested in hearing more. One possibility is that the device helps you connect certain parts of your visual field to each other in a way that creates euphoric patterns of qualia. There are indeed several devices I'm aware of whose effects are to harmonize one's neural activity, which in turn is subjectively experienced as relief and pleasure (which may be all that is needed to work through PTSD-triggering memories, as may be happening under MDMA therapy). In other words, the "null hypothesis" could be that the device stimulates one's brain in a helpful way. But I could be missing something. I'm happy to chat more via email or Skype. Feel free to contact me personally (see my contact info in the contact section of the site).

Forest · October 31, 2016

Just the idea of this makes it true. You do not need to investigate any further than - a thought that has been formulated in your mind, because non locality (two objects at two location are inseparable in space-time). If we have thought of it, it must be real somewhere else in life. You may think this to be egotistical or irrational thinking, but if there are multiverses (which science claims there are) then each universe in the multiverse is broken down into what we call string theory, where one action leads to a consequence, and leads to another action.
We seem to think that string theory only exists for the human mind, but we wouldn’t be able to postulate this idea without it existing somewhere else in the material world, hence the cosmos. Furthermore, string theory also pertains to the laws of nature. In one universe, in a multiverse, string theory suggests that gravity might be different on Earth, or that the Earth’s tilt is slightly different, because the laws of action exist everywhere in the universe. Point being: the claim above is true, because we have already seen this in cosmology; therefore it affects us through non-locality, and changes how we perceive and understand the structure of consciousness.
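A minimal sketch for reproducing the footnote’s significance thresholds. This is not the experiment’s own code (which ran in the browser in Javascript); it is a Python spot-check assuming SciPy is available:

```python
# Sketch: recompute the footnote's thresholds for 30 trials with
# chance probability 1/3, using a one-tailed binomial test.
from scipy.stats import binom

n, p0 = 30, 1 / 3
for k in range(13, 21):
    p_value = binom.sf(k - 1, n, p0)  # P(X >= k) under the null
    print(f"{k}/30 correct -> one-tailed p = {p_value:.5f}")

# The printout shows p dipping below 0.05 first at 15/30 and below
# 0.001 first at 19/30, matching the footnote; the best observed
# score, 14/30, falls just short of significance.
```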
The Winding Road to Quantum Supremacy

Greetings from QIP’2019 in Boulder, Colorado! Obvious highlights of the conference include Urmila Mahadev’s opening plenary talk on her verification protocol for quantum computation (which I blogged about here), and Avishay Tal’s upcoming plenary on his and Ran Raz’s oracle separation between BQP and PH (which I blogged about here). If you care, here are the slides for the talk I just gave, on the paper “Online Learning of Quantum States” by me, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak. Feel free to ask in the comments about what else is going on.

I returned a few days ago from my whirlwind Australia tour, which included Melbourne and Sydney; a Persian wedding that happened to be held next to a pirate ship (the Steve Irwin, used to harass whalers and adorned with a huge Jolly Roger); meetings and lectures graciously arranged by friends at UTS; a quantum computing lab tour personally conducted by 2018 “Australian of the Year” Michelle Simmons; three meetups with readers of this blog (or more often, readers of the other Scott A’s blog who graciously settled for the discount Scott A); and an excursion to Grampians National Park to see wild kangaroos, wallabies, koalas, and emus.

But the thing that happened in Australia that provided the actual occasion for this post is this: I was interviewed by Adam Ford in Carlton Gardens in Melbourne, about quantum supremacy, AI risk, Integrated Information Theory, whether the universe is discrete or continuous, and to be honest I don’t remember what else. You can watch the first segment, the one about the prospects for quantum supremacy, here on YouTube. My only complaint is that Adam’s video camera somehow made me look like an out-of-shape slob who needs to hit the gym or something.

Update (Jan. 16): Adam has now posted a second video on YouTube, wherein I talk about my “Ghost in the Quantum Turing Machine” paper, my critique of Integrated Information Theory, and more. And now Adam has posted yet a third segment, in which I talk about small, lighthearted things like existential threats to civilization and the prospects for superintelligent AI. And a fourth, in which I talk about whether reality is discrete or continuous.

Related to the “free will / consciousness” segment of the interview: the biologist Jerry Coyne, whose blog “Why Evolution Is True” I’ve intermittently enjoyed over the years, yesterday announced my existence to his readers, with a post that mostly criticizes my views about free will and predictability, as I expressed them years ago in a clip that’s on YouTube (at the time, Coyne hadn’t seen GIQTM or my other writings on the subject). Coyne also took the opportunity to poke fun at this weird character he just came across whose “life is devoted to computing” and who even mistakes tips for change at airport smoothie stands. Some friends here at QIP had a good laugh over the fact that, for the world beyond theoretical computer science and quantum information, this is what 23 years of research, teaching, and writing apparently boil down to: an 8.5-minute video clip where I spouted about free will, and also my having been arrested once in a comic mix-up at Philadelphia airport. Anyway, since then I’ve had a very pleasant email exchange with Coyne—someone with whom I find myself in agreement much more often than not, and who I’d love to have an extended conversation with sometime despite the odd way our interaction started.

74 Responses to “The Winding Road to Quantum Supremacy”
1. Andrew G Says:
Hi Scott, on the QIP schedule the speaker for your talk was listed as your co-author Xinyi Chen. While it was certainly a pleasant surprise to get a Scott Aaronson talk at this year’s QIP, I am curious why the organizers didn’t make an announcement about the change. Perhaps if that were the case, the audience would not have been able to fit in the ballroom? Also out of curiosity: at how many QIPs have you given talks thus far, and what’s your longest consecutive streak?

2. Scott Says:
Andrew #1: The schedule was finalized more than a month ago, I think. Then Xinyi decided she couldn’t come because of a paper deadline, so I was deputized to give the talk instead. I don’t think the information ever made it to the QIP organizers. It happens. Looking at my talks page, I see that I’ve spoken at the following QIPs: 2002, 2003, 2004, 2005, 2007, 2008, 2009, 2010. So two consecutive runs of four each. (Note that this includes rump session talks and a still-notorious after-dinner talk. Also, since QIP straddles the December/January boundary, it’s not always actually held in the listed year.)

3. Tamás V Says:
It seems the director has cut out the part about whether the universe is discrete or continuous; what a pity. Is there an uncut version? Another question: how much faith do you have in PsiQ’s 1-million-qubit quantum computer in 5 years?

4. Scott Says:
Tamas #3: Adam seems to be posting one clip per day (in discrete chunks, you might say…). I’m sure he’ll get to the discrete vs. continuous bit soon. I have no faith that PsiQ or anyone else will be able to build a million-qubit QC in 5 years. I hope that they or others will someday be able to do such things, whether in 5 years, 10, 20, or however long it takes.

5. Adam Ford Says:
Scott: Many thanks for the interview, it was fascinating to see you in person! I will have transcripts sometime soon. Tamás V: my internet is slow, as is the pace of my editing – I am staggering the release of the interview segments. I just released a segment on AI and XRisk: https://www.youtube.com/watch?v=gi67h6v-6fc

6. JimV Says:
A few years ago, I brought up one of your comments (credited to you) on the subject in a thread at Dr. Coyne’s website on free will, and Dr. Coyne responded with a lecture on how Bell’s Theorem ruled out your comment (!) and accused me of rudeness for prefacing my views with “I think” or “In my opinion”. Bad memories… anyway, that may be why he seemed to be predisposed against you. (I think that was the first time anyone mentioned you on his website.) As it is his website and his rules, I respected his prerogatives and have never commented there since.

7. Richard Gaylord Says:
“My only complaint is that Adam’s video camera somehow made me look like an out-of-shape slob who needs to hit the gym or something.” Well, at least it didn’t make your mind seem out of shape.

8. Jules-Pierre Mao Says:
@Jerry Coyne:
>If you [read the GIQTM paper], weigh in
That sounds like an amusing task. Imho the first important step to understanding SA on these matters is that he’s taking determinism much more seriously than you are. No kidding! What you care about is whether determinism is true. What he cares about is what *kind* of determinism is true and whether we can prove it. With insights from complexity theory he invites us to consider three serious candidates. The first candidate is the P world. Basically that’s the good old Laplace world. The second candidate is the BPP world – basically the former plus randomness.
The third and last is the BQP world – our world if it obeys quantum mechanics. [From time to time he can also talk about PSPACE and CTCs, aka what happens if we’re more serious about general relativity than about quantum mechanics, but let’s leave that for another day.] Of these three worlds, SA finds BPP kind of boring (because it’s probably impossible to separate candidate BPP from candidate P using the outcome of an experiment), predicts P will be excluded some day (all you need is to construct a quantum computer, plus prove some math widely believed to be true – piece of cake), so that leaves us with BQP. Now the second step is, SA doesn’t like that state of affairs. Specifically, he doesn’t like that one could predict what he’ll do or think before he does or thinks it himself. Fortunately, and although this conclusion is impossible to escape if we’re in a P world, there is a subtle trick that can do the job if we are in a BQP world. Namely: don’t touch the laws of physics themselves; just specify that the initial state (all the way down to the big bang, or before if there’s some before) includes some special qubits that the human brain (or any well-designed brain, artificial or not) can later use to make its dynamics unpredictable. Putting flesh on the bones of this idea is all that GIQTM is about. My two cents: it’s far from complete (how are these freebits supposed to impact brain activity in any meaningful manner?) and one can question his notion of freedom, but at least he respects the laws of physics as we know them. In the long run I’d bet it’s false (because if old photons traveling from deep space were to have any non-trivial effects on our brains, why don’t we notice anything each time we mess with the quantum states of the brain using an MRI scanner?), but even if false his attempt would still teach us something non-trivial: one can respect both the laws of physics and some interesting definition of freedom.

9. ppnl Says:
I have participated in several of the free will debates at WEIT. His argument seems to be that since our brains are deterministic, we can’t be held accountable for bad acts like murder. True enough, given determinism. But then he argues that we should, therefore, structure our justice system around this fact. He rejects compatibilism, yet his argument here is a compatibilist argument. If determinism holds, then society is no more liable for its bad acts, such as the election of Trump, than the individual is for murder. If determinism holds, then what we do, what society does, and what the human species does was set by the initial conditions of the big bang. The universe may as well be a movie. There is no “ought”; there cannot be any “should”. There are only the consequences of the initial state of the big bang. He is a compatibilist, but a compatibilist on the societal level. This highlights the ironic fact that we have little choice but to act as if we have free will. And the wild card is the fact that we experience the movie. Who ordered that? As for quantum mechanics, I think it is clear that some observables of the universe simply cannot be in a particular state until observed. Thus causality dissolves at that level. Given Bell’s inequality, how can you preserve causality?

10. Vadim Kosoy Says:
Hi Scott! In your interview with Adam Ford, you say that you don’t work on AI safety because you wouldn’t know how to test whether you are making progress. However, it seems like the same can be said about quantum computers?
Neither quantum computers nor superintelligent AI exist at present, and yet nothing impedes you from working on the former. Of course, quantum computers do not have the issue of “having to make it work on the first try”, like you said in the interview, but for current theoretical research that seems irrelevant since they don’t exist anyway. Also, you said that humanity has never succeeded before at doing something on the first try, except for examples like the Apollo program where the individual components could be tested up front. However, it seems plausible that for AGI you will also have some ability to test individual components up front. Clearly, not every code segment in a putative AGI will be an AGI unto itself.

11. Scott Says:
Vadim #10: I’d say that quantum computing is different because there, unlike with AI safety, for 25+ years we’ve had a very precise mathematical theory that delineates what can and can’t be done, and in some sense our job as theorists is “merely” to figure out the interesting logical consequences of that theory. I don’t personally work on the experimental side of QC (except indirectly, as a “theoretical consultant” to experimental groups)—but there as well, there are extremely clear metrics to tell you how much progress you’re making, for example the gate fidelities, the coherence times of the qubits, and the number of qubits you’ve successfully integrated. And we also know theoretical bounds on what gate fidelities and coherence times would suffice for universal, fault-tolerant QC, and we can see that in systems with many integrated qubits, the experimentalists are still far from those bounds but are steadily progressing toward them. In that respect, QC is more similar to (say) fusion power than to superhuman AI. It’s not an a-priori unbounded and undefined engineering target, which is the property that makes the AI problem seem so vertiginously terrifying. Yes, I agree that one could imagine an AI where the individual components could be tested prior to being integrated—and in such a case, of course that could, should, and hopefully would be done! More concerning is the so-called “foom” scenario, where you’d have something analogous to a deep network that could simply be trained and run as a whole. In that case, there might indeed be substructure, but if so it would be emergent substructure, rather than anything the designers had engineered by hand or necessarily understood. It’s in this latter case that I don’t even really know how to begin to think about safety, though I respect the efforts now underway to try to figure out how to begin to think about it.

12. I Says:
I genuinely think this is the best you’ve looked for a while. Interpret that for what you will. Anyway, you mention the noise problem. Everyone keeps saying “100+ noisy qubits = 1 logical qubit”. Where do those numbers come from? Nowhere could I find a reference for this, besides “some scientists in Quanta Magazine said so”. My master’s project was about modelling noise in adiabatic computation, and whether we can mitigate it. Or even make it beneficial. For a single qubit with linear evolution it looks like we might be able to. Can we do something similar with gate- or measurement-based systems? Is it scalable?

13. Scott Says:
I #12: The best reference I can think of offhand is this paper by Fowler et al.—does anyone else know a better one?
Briefly, though, the numbers of physical qubits per logical qubit that you see quoted come from simply looking at what that number would be in the best existing fault-tolerance schemes, assuming that you’re executing a computation with such-and-such numbers of qubits and gates and you have such-and-such an error rate! (A back-of-envelope version of this calculation appears below, after comment #17.)

14. mjgeddes Says:
Good critique of IIT, yet another theory of consciousness I think we have to consign to the scrap-heap. Such a huge volume of wrong theories about consciousness piling up! Although, if the person proposing the consciousness theory is smart, it’s at least *interesting* nonsense. No offence Scott, but I fear your own wacky ideas in GIQTM may fall into the same category 😉 Consciousness, I’m now reasonably confident, is a symbolic language for reasoning about cause-and-effect via imagining counterfactuals; its purpose is communication, planning and reflection. And it works via some sort of extension of modal logic, which is in the form of a tree data-structure. I call my developing theory ‘TPTA: Temporal Perception & Temporal Action’. Coming to AI, and excellent comments from Scott there. Yes, I think CS=AI (Computer Science is in some sense equivalent to Artificial Intelligence). My tentative theory is that you have 2 main components to AI: (1) reasoning under uncertainty with limited resources, which is about what computer science is all about, and the *extension* of that, (2) harnessing reasoning to achieve goals, which yields the algorithms comprising AI. So AI is simply a natural extension of computer science, I think. Coming to the very difficult question of AI values, I favor an approach to multi-level modeling based on an extension of the ideas of Robin Hanson; Hanson proposed two main cognitive modes: (far), which is top-down abstract reasoning, and (near), which is bottom-up detail-oriented reasoning. I suggested adding a third mode: (procedural), which is the actual algorithms for doing things, and I proposed that procedural mode is based on the integration or *balance* between (far) and (near) modes. This yields a recursive procedure for knowledge representation that can be applied to all cognition. So for values, we see that we can make a division into *near mode*, which refers to our emotions and instincts (cognitive psychology), and *far mode*, which refers to our ideals and ethical/aesthetic philosophies (axiology). Then *decision theory* would refer to *procedural mode*, the ‘middle-out’ between axiology (top) and cognitive psychology (bottom). So the role of decision theory is to integrate our emotions and instincts, which come from the ‘bottom-up’, with our ideals, coming from the ‘top-down’. Thus, we see a single unified ‘stack’ of knowledge domains:
Axiology
Decision Theory
Cognitive Psychology

15. James B Says:
Who decided this interview should be filmed outside in very bright sunlight?

16. Joshua Zelinsky Says:
Scott #13: Marginally on topic, but do we have non-trivial lower bounds on how many error-correcting qubits are needed at a given error rate?

17. Scott Says:
mjgeddes #14: If, instead of writing a one-off and very tentative essay to explore the ideas in GIQTM, I had fixated on some specific numerical measure for how many freebits there are (even in the teeth of counterexamples showing that the measure produced absurd-seeming results); and if I’d pursued a decades-long research program around the measure with students, postdocs, etc… then maybe it would become reasonable to compare the two things.
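To make the overhead arithmetic from comment #13 concrete, here is a back-of-envelope sketch. The scaling law and constants are commonly quoted surface-code heuristics — a logical error rate of roughly 0.1·(p/p_th)^((d+1)/2) at code distance d, about 2d² physical qubits per logical qubit, and a threshold p_th near 1% — and should be read as assumptions rather than figures taken from the Fowler et al. paper:

```python
# Back-of-envelope surface-code overhead. Constants are commonly quoted
# heuristics (treat them as assumptions), not exact numbers from the paper.

p_phys = 2e-3    # assumed physical error rate per operation
p_th = 1e-2      # assumed surface-code threshold (~1%)
target = 1e-15   # desired logical error rate for a long computation

# Heuristic: logical error rate ~ 0.1 * (p_phys/p_th)**((d+1)/2) for a
# distance-d surface code, which uses roughly 2*d**2 physical qubits.
d = 3  # distances are conventionally odd
while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > target:
    d += 2

print(f"code distance: d = {d}")              # -> d = 41
print(f"physical per logical: ~{2 * d * d}")  # -> ~3362
# A few thousand physical qubits per logical qubit at these error rates;
# lower physical error rates shrink d (and the overhead) quickly, which is
# where the "hundreds-to-thousands to one" estimates come from.
```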
18. Scott Says:
James #15: Adam and I discussed where to film and jointly decided on the park. I actually thought it worked kind of well…

19. Tamás V Says:
Can it be that intelligence has nothing to do with consciousness, in that the latter uses the former as a mere tool? That would be great news for the prospects of true AI, but also bad news, suggesting that researching intelligence will not get us any closer to the secrets of consciousness.

20. I Says:
Scott #13: Thanks for the reply. I should have thought to check there. As an aside, someone here linked to a lecture Susskind gave a little while ago, but I can’t seem to find it. It was something about how black hole volumes evolve such that their complexity increases at the greatest rate possible. Does anyone have a link to that? Honestly, I think I lack the imagination to think of something like that, so I’m probably not imagining it.

21. Vadim Kosoy Says:
Scott #11: I agree that, as opposed to quantum computing, in AI safety there is still no consensus on the mathematical formalism within which the questions should be studied. In this sense, the field is, at this stage, more similar to physics than to mathematics. On the other hand, theoretical computer science does have an impressive track record of coming up with mathematical models for initially informal concepts (“algorithm”, “complexity”, “randomness”, “proof system”…). Regarding “a deep network that could simply be trained and run as a whole”, I don’t think it’s *that* different from science and engineering as we know it. We certainly can test the network’s training algorithm (in the sense of searching for bugs in the implementation) without running the AI “live” or feeding it real data. The problem is producing theoretical guarantees for what the network will do. As you mentioned in the interview, this is something we don’t know how to do very well right now, although IMO there were some promising results recently (arxiv.org/abs/1811.04918, proceedings.mlr.press/v54/zhang17a.html).

22. fred Says:
Scott, great to see that you got in shape! About the progress of AI: the fact is that you don’t need generalized AI or some form of singularity for a group of humans to start dominating the rest of the planet by using breakthroughs in AI. Creating AIs that can beat humans at any game would be a huge win, and there’s been a natural progression in the classes of problems AI researchers have been tackling. It started by applying self-learning algos to simple 2D platformer video games (with no opposing intelligence). Then AlphaGo used inputs from human knowledge to beat the best human player, but AlphaGo Zero was able to learn Go from scratch on its own, just by playing itself with zero human intervention, in a few hours (and with smaller resources)… then the same algo worked on any board game (beating the best custom-made programs that play chess, for example). AI has also basically “solved” poker (introducing partial knowledge and psychological bluffing), beating teams of the best players. The most recent transition has been to apply AI to more free-form strategy video games, and it was able to beat the best human players 1-on-1. The next step will be to add human language to those games so that machines can learn psychological manipulation – most humans aren’t particularly good at detecting that they’re being manipulated, and at the same time the advertisement/video game/social networking industries have developed clear methods to manipulate the human mind.
Another interesting trend is the research trying to understand why deep learning works so well.

23. fred Says:
You’re confusing consciousness with the content of consciousness (thoughts). No scientific theory can explain consciousness (as a truly emergent phenomenon), because none of the atom/symbol-manipulation processes you can come up with will ever require consciousness – as in, the subjective experience of being something, our ground-level truth (the only thing I can’t doubt/deny is that I’m conscious). There’s no such thing as a truly emergent phenomenon in science, because it’s all a bottom-up approach, and what we take as emergent processes are all based on the fact that we’re conscious; we’re the ones projecting them onto nature.

24. fred Says:
I think many people confuse “free will” (an illusion) with “degrees of freedom”. Take two round marbles. Drop marble A into a smooth inclined pipe: it will describe a very nice and predictable sine path along it. Drop marble B on an inclined hill full of rocks of all sizes: it will describe a very chaotic and unpredictable path. Both systems are deterministic, but somehow marble B has “free will” while marble A hasn’t?

25. Andrew Krause Says:
I #20: I believe the lecture you were thinking of was recorded in this arXiv paper, and I imagine you can find it online if the video version is more useful to you than the transcript. https://arxiv.org/abs/1810.11563

26. Neil Says:
I think your remarks about predictability and free will raise some interesting questions. Suppose a machine could be built that could predict someone’s future actions. That would seem to drive a stake through the heart of free will. But would it? First, the prediction would need to be withheld from the subject; otherwise he or she could just negate the prediction by taking another action. Note how this differs from predicting other events, like predicting an eclipse. Any prediction about the actions of an intelligent being could affect how it acts if the prediction is known. But what if the prediction is not known by the subject before it takes an action? Might the subject’s action now depend on whether the subject knows of the existence of the prediction machine? Perhaps it will try to outsmart the prediction machine even without knowing its prediction. After all, the subject should know itself as well as the machine does. Finally, what about prediction paradoxes, like Newcomb’s paradox?

27. Tamás V Says:
Let’s assume there is a computer with AI software running on it, and an operator that gives it incentives (rewards to pursue) from time to time. Whenever the operator gives an incentive, the AI software will do something that is useful for him/her. Now, replace “AI software” with intelligence, and “operator” with consciousness. In this sense, intelligence is really just a tool (i.e. like an arm), and searching for consciousness within it is doomed to failure. (OK, it may all be commonplace; I’m not following AI and consciousness research at all, I have to admit.) I think this goes along well with what Einstein said: “Most people say that it is the intellect which makes a great scientist. They are wrong: it is character.” Although I’m not sure he had the exact same analogy in mind 🙂

28. Jules-Pierre Mao Says:
Tamás V #19: the FiLM network may convince you that intelligence does not require consciousness. I doubt it’s bad news.

29. I Says:
Andrew #25: Thanks, that’s exactly what I was looking for.
By the way, since you’re a mathematician studying non-linear systems: why do strange attractors of a system twist continuously when some parameter of the system is altered? That is, whilst the system is chaotic. I’ve encountered this behaviour a couple of times in simple systems, but I don’t know why it occurs. Sorry that this is so off-topic, Scott.

30. Tamás V Says:
Jules-Pierre Mao #28: Thanks for the link; here is another one that’s even more convincing for me, although not a scientific paper:

31. Tamás V Says:
Hi Scott, when you say “most fundamental description of nature”, what structure do you have in mind? Is it like a theory that has a finite number of axioms (e.g. like Newton’s laws of motion or the constancy of the speed of light in special relativity)? If yes, then my concern is that the theory could not explain its own axioms, and that would imply that other “most fundamental” descriptions would also exist, starting from different axioms about possibly very different concepts. So there would exist equally correct “most fundamental” descriptions: one where reality is discrete, one where it’s continuous, one where it’s hybrid, one where there is space, one where there is no space, etc. Or do you believe that one and only one “most fundamental” description of nature exists?

32. Scott Says:
Tamas #31: Sorry, no questions about the interview that would require me to re-watch it for the context. In general, though, yes, I agree that it’s possible to have two different equally correct and equally fundamental mathematical descriptions of the same physics, as long as they agree in their predictions for observed phenomena. We even have some examples where that happens (e.g., the Heisenberg, Schrödinger, and Dirac-Feynman pictures of quantum mechanics). It’s even conceivable that continuous degrees of freedom would exist in one description but not in another—although if continuous quantities were easy to eliminate with a simple change of description, many of us might prefer to do that (by contrast, any theory will involve discrete choices, like how many dimensions of space or how many particles or whatever). My discussion in the interview was not a-priori, but was framed around the best theories we actually have, namely quantum mechanics, quantum field theory, and quantum gravity (the latter as partially realized by, e.g., the Bekenstein-Hawking entropy calculation and AdS/CFT). As far as I know, my comments would apply regardless of how you mathematically formulate those theories.

33. Nicholas Teague Says:
Scott #13: Another paper that comes to mind from around the same time period, which was perhaps a little more mainstream, was the Nature article on D-Wave, reprinted in Scientific American: “D-Wave’s Quantum Computer Courts Controversy”, which included the quote “Like a rocket that requires tons of fuel to hoist a tiny payload, a gate-model quantum computer might need billions of error-correcting qubits just to get 1,000 functional qubits to do something productive.” – although now that I look at it, perhaps this estimate was a little less optimistic.

34. Tamás V Says:
Scott #32: Sure, thanks for the answer; I did not mean for you to re-watch the interview, I thought you use that term every day anyway 🙂 To me a more interesting question is whether a “theory that explains everything” could exist, and if yes, what form it would take. Because naively, if it has an axiom, it would not be able to explain that very axiom, which would mean it’s not a theory that explains everything.
(I assume this is one reason why we have to be careful and talk about a theory fully “describing” nature, as opposed to fully “explaining” it.) Because of this, I became interested in philosophies that say “nonsense” like: things exist and don’t exist at the same time… and the people who developed those thoughts did not even know about quantum mechanics (thousands of years ago). How interesting.

35. Andrew Krause Says:
I #29: There’s a lot of complexity there that I am not really an expert in (regarding strange attractors). A good PRL paper discussing some aspects of these things is “Classification of strange attractors by integers,” which discusses (only in the context of a specific subclass of attractors) how control parameters can leave the topology of a strange attractor fixed, but change geometric things, likely leading to the twisting you describe. I don’t know any reason offhand why one would expect twisting of the orbits, though presumably playing with Smale horseshoes etc. can give you some intuition for why such attractors themselves should always contain orbits which wind around one another. This is related to the density of unstable periodic orbits and the topological-template approach which is discussed in that article. While this is a bit off-topic, there are of course nice connections between chaotic dynamics, predictability, free will, and computational complexity. As always, there are many things we don’t understand yet, and much work to do.

36. mjgeddes Says:
fred #23: “You’re confusing consciousness and the content of consciousness (thoughts).” It’s an open question whether science can explain consciousness. Let’s see how far the scientific method can take us before jumping to conclusions. I’d point to the proposed equivalence Scott mentioned, ‘CS=AI’ (Computer Science = Artificial Intelligence). Let’s take that as the starting point and see what it would imply. If CS=AI, then the methods of comp-sci can’t be separated from cognition itself. This admittedly does lead to some strange, counter-intuitive conclusions. It implies that each ‘element of cognition’ is equivalent to a modeling method from comp-sci. And consciousness itself would be no exception. In this picture, you can’t separate consciousness from the contents of consciousness (there would be no such thing as ‘content-free’ consciousness). So if we assume that CS=AI, we can put a modeling method from comp-sci in equivalence with each element of cognition. For example, I could propose:
Comp-Sci Model = Element of Cognition
Stochastic Model (Probability Theory) = Perception?
Cellular Automaton (Information Theory) = Optimal Action Selection?
Grammar Model (Modal Logic) = Planning (Reflection & Consciousness)?
If CS=AI, then a table like the above implies that there’s no difference between the entries on the left (comp-sci) and the entries on the right (cognition). So if I’m right, there’s no difference between the correct grammar model (modal logic) and consciousness itself. If CS=AI, there’s simply no separation between modal logic (the contents of consciousness) and consciousness itself.

37. Jalex Stark Says:
Fred #24: Actually, in the specific case of “marbles on hills”, the situation is worse than it appears. There are easy-to-describe hills such that Newton’s equations of motion for a marble on the hill are non-deterministic!

38. fred Says:
Jalex Stark: Ah, yes, I read about that a while back.
My point was more about Scott’s idea that the brain is special in that it can’t be duplicated, practically. But we also can’t build two double pendulums that would behave identically for any extended amount of time (because of chaotic behavior), so non-duplication is a trivial fact of the physical world, not something special to brains. As I was saying before, I think the misunderstanding here is that many tend to conflate the content of consciousness with consciousness itself, because the self/ego gets in the way. Consciousness isn’t the author of its own content; that comes from the brain. By content, I mean thoughts, intentions, volition, desires, feelings, emotions, sounds, images, pain, pleasure, the sense of self… Consciousness is the space where those appear. We think that we are riding in our heads, as an observer, and that this observer is the author of thoughts and some sort of permanent self. But this sense of self can’t hold up to scrutiny, because everything that makes up that sense of self is observable (just like any other content of consciousness). That’s not to say either that there is consciousness on one side and its content on the other. There’s no duality here; consciousness is “the knowing”, “the noticing” of whatever appears in the present moment. I personally think that consciousness is linked to the formation of memories. When we perceive something, it’s because the brain is actually committing it to memory. That’s why I think it’s a more universal property that maybe runs all the way down to atoms – all things are as conscious as we are, just that the quality/quantity of content that is perceived varies wildly.

39. Craig Says:
Are you a quantum supremacist? If so, how do you reconcile this with your left-leaning world-views?

40. Scott Says:
Craig #39: It’s not the term I would’ve chosen, but sure, I’m someone who actively works on the theoretical foundations of quantum supremacy experiments and who’s excited about them. And I suppose I reconcile that with my vaguely center-left worldview, more-or-less the same way I reconcile my support for the protection of right whales with my being left-handed. 😉

41. fred Says:
Scott, you often say that QM can be viewed as just doing another type of probability theory (at least as far as building a QC goes). But is there a point in this approach where the quantum aspect of QM (as far as all physical properties being discrete) comes in, besides the fact that the state vectors are finite/discrete? I guess for QC theory all you need is something that has two states, without ever mentioning the more general puzzling aspects of quantization in physics, or the Heisenberg uncertainty principle. So, would those aspects of QM never bring any extra computational power? Can’t the quantum-tunneling effect be used to help solve optimization problems? (It wouldn’t be a programmable digital machine, but more like some specialized “analog” machine, I guess.)

42. Scott Says:
fred #41: Popular accounts have failed you. Everything you mention—all of it, quantization and the Heisenberg uncertainty principle and all the rest—are just different logical consequences of the more fundamental axioms of QM, or else relate to the details of how those axioms get implemented in our universe. Those axioms are the principle of superposition, the principle of unitary evolution (i.e. the Schrödinger equation), the Born rule, and I guess the tensor-product rule if you want to be pedantic. 🙂
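For reference, the four axioms listed in #42 can be written out compactly; these are the standard textbook statements, not formulas taken from the comment itself:

```latex
% Superposition: a state is a unit vector in a complex Hilbert space.
\[ |\psi\rangle = \sum_i \alpha_i |i\rangle, \qquad \sum_i |\alpha_i|^2 = 1 \]
% Unitary evolution (the Schrodinger equation and its formal solution):
\[ i\hbar\,\partial_t |\psi(t)\rangle = H|\psi(t)\rangle
   \;\Longrightarrow\; |\psi(t)\rangle = e^{-iHt/\hbar}|\psi(0)\rangle \]
% Born rule: measuring in the basis {|i>} yields outcome i with probability
\[ \Pr(i) = |\langle i|\psi\rangle|^2 \]
% Tensor-product rule: composite systems live in the product space
\[ \mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B \]
```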
43. mjgeddes Says:
fred #38: Consciousness is closely connected to time and/or our perception of time, in my view. Scott also postulated a connection to the flow of time. I’m just not so sure we need to bring in any wacky quantum stuff… Consciousness is surely also closely linked to language, communication and knowledge representation. If you combine the two links (time and language), and stick to ordinary computer science as much as possible (assuming consciousness is just computation), avoiding wacky theories, you’re led naturally to modal logic, specifically some sort of temporal logic. My wagons circle around something like computation tree logic (CTL). This sort of logic is how you represent the flow of time.

44. Gerard Says:
mjgeddes #43: If you believe that the execution of a (classical) computation can create consciousness (i.e., just so we’re clear on definitions, by consciousness I mean subjective experience), what if that computation is carried out by a person in a room with pencil and paper? If this process creates subjective experiences, what is having those experiences? Surely not the human “computer”, since the content of his experiences is clearly very different from that implied by the computation itself. It seems to me that there are only two possible answers to this question; either:
A) Computation alone cannot produce conscious experience.
B) Making marks on a piece of paper can somehow lead to the creation of a new, completely unobservable, conscious entity.
To me, believing (B) seems very similar to believing in unseen spirits or gods.

45. fred Says:
Scott #42: Somehow the only class I got on QM, back in 1990 at engineering school (in Brussels), was pretty weird and old-school. The professor took a historical approach (he must have learned it in the 40’s), starting with the black-body radiation conundrum, a ton of stuff about “wave packet” math, and then focusing almost exclusively on solving the hydrogen-atom model using the Schrödinger equation, skipping all sorts of steps because we didn’t have the right math knowledge yet (only the engineers who specialized in physics, to work in nuke plants, later learned the math tools necessary to “work” the Schrödinger equation). A few years later I read the Feynman book on QM and was pretty shocked at how different it was.

46. Scott Says:
fred #45: Yup. 🙂

47. Adam Ford Says:
The last section of the interview, ‘On Suffering, Utopia, Radical Uncertainty & Free Will’, is up – I hope to churn out transcriptions by the end of the week.

48. Yoni Says:
Hi Scott, I liked the clip on the discrete vs. continuous universe (can’t pretend I followed all of it). My question is: you describe the amplitude functions (and, if I understand correctly, also the probability function of the observable) as continuous. Is there good evidence that this is the case, or could it just as easily be that the functions are actually discrete, and the continuous functions you use are just approximations to the actual underlying discrete reality? If that is correct, would it then imply that some events that have a very low calculated probability are actually impossible?

49. Yoni Says:
On the free will issue (and admittedly I haven’t read your paper): how could such a machine – even in theory – be possible? Surely as time moves on, we are all affected by our environment – and at the quantum level this would include quantum fluctuations both from within our brain structure and from without.
I thought there were rules against copying quantum states (or even reading them precisely). You may say that the quantum fluctuations don’t have an effect on decision making, and that may be true over short time periods, but surely over time they will have an impact. If so, your hypothetical machine may be able to determine my actions with high accuracy over a given period, but as you extend the period the accuracy will drop off, no matter how accurate the initial inputs. The two potential responses I can think of are:
1 – There is some sort of error-correcting mechanism (as with computers) that filters out quantum randomness. However, to this I will respond: a) is there any reason to think this is happening, and b) that would only work with high probability, which again tails off over time.
2 – You get to keep updating the machine with new information – but then the machine isn’t really doing the predicting.

50. Jules-Pierre Mao Says:
Gerard #44: This so-called paradox is famous and was considered solved by many almost since it first appeared. Searle discusses that solution under the name “the system reply”, and briefly explains why he’s not convinced. Notice that his counterargument *postulates* that “[the whole system] has no way to understand the meanings of the Chinese symbols from the operations of the system”. One counter-counter-answer is to *postulate* that his counterargument is made of cheese, then dismiss it because it’s then made of cheese.

51. Yoni Says:
Jules-Pierre Mao #8: “why don’t we notice anything each time we mess with the quantum states of the brain using an MRI scanner” – How would you expect to notice them? What sort of non-trivial effects are you talking about? Surely, given enough time passing, the brain’s state is significantly different after these effects than it would have been had they never occurred.

52. Yoni Says:
Jules-Pierre Mao #50: Thanks for the link. It seems to me that the problem with the thought experiment is basically one of timescale. If you slow the brain down to the speed where you can recognise each individual calculation, then – at that timescale – the brain doesn’t “understand” anything; it just looks like a machine. It is only at the sped-up timescale (the one we perceive in real life) that we can have these feelings.

53. Gerard Says:
Jules-Pierre Mao #50: The problem I have with Searle’s Chinese Room experiment is that it includes too many ill-defined anthropomorphic concepts such as “cognition”, “to think”, “to understand”, etc. This makes it difficult to understand exactly what Searle is claiming (hence the misinterpretations section in the article you cite). I think my argument is much clearer because it addresses only a very specific question: whether or not a computational process is a sufficient condition for the existence of subjective experience. Can a computer “think”? It all depends on how you understand what is meant by “to think”. By this we usually mean those mental processes of which we are conscious (as opposed to all of the stuff our neural nets are doing below the level of consciousness), and we usually think of these as something that can be expressed in human language. A computer could certainly represent such “thoughts” and process them to produce other “thoughts” based on some form of deductive or probabilistic reasoning. However, I don’t believe that a computer could “experience” such thoughts the way we do, any more than it could experience pain.
However, when we use the word “to think” we are often thinking not only of the process of thinking but also of the fact that we experience our own thinking. When Descartes said “I think, therefore I am”, I think what he really meant was “I experience myself thinking, therefore I am.” In my view, intelligence and conscious experience are two distinct things, and neither one implies the other. Unfortunately, the words we use to discuss cognition tend to mix these two concepts and lead to an anthropomorphization of intelligence, due to the fact that our language developed millennia before we could conceive of any form of artificial intelligence.

54. Jules-Pierre Mao Says:
Yoni #51:
>How would you expect to notice them?
Each time we put someone in a scanner, we mess with the brain’s EM field. The only known effect is that (if you move too fast) it can activate your photoreceptors. No other effect has been attributed to MRI, even though we know we deposit some energy (usually such that the temperature increase is less than 0.1 K). So, if you say that freebits interact with brains, then there must be some principled reason why they are unaffected by MRI. For example, you could suppose that any effect comes from forces other than electromagnetism (Penrose played with this idea in some – not all – of his books on the topic), or that it is based on a frequency channel too low to be noticed in an MRI scan.
>What sort of non-trivial effects are you talking about?
Any distortion in the results one can obtain from within versus from outside a scanner. Specifically, if a theory suggests that freebits from space help free decisions, then I would expect some predictions on, say, results in the Iowa gambling task. Either that or a principled reason why there should be no effect. Indeed, but the difference should also be significant in the sense of *neither trivial nor random*. I think that’s why SA disregarded “usual” randomness and turned to “Knightian uncertainty” (but I’m not sure I fully understand the way he uses this notion).

55. Jules-Pierre Mao Says:
Yoni #52: That’s a very interesting thought, thank you.
Gerard #53: Indeed. Yes, your question is much better, imho, too. But I regard it as solved, because cognitive science (see Dehaene below for examples of results that support this conclusion) agrees with AI that at least some cognitive processes can be done without consciousness (it’s impressive that the time course for reaching this conclusion was somewhat similar in both fields). So, if there’s an algorithm or a computation that produces or detects consciousness, it’s not because we rely on it for cognitive purposes. However, I don’t see how this conclusion turns into “a computer cannot experience pain”.

56. Gerard Says:
Jules-Pierre Mao #55: I’m not quite sure what you’re saying here. What exactly do you regard as solved? Because I think it’s safe to say that it’s not generally accepted that the nature of consciousness, or the causal factors necessary for its existence, is “solved”. That some cognitive processes do not involve consciousness seems evident, but I don’t see much of a connection between that and my claim that no computational process is a sufficient causal factor for the existence of subjective experience. Again, my claim is that no computational process alone can generate subjective experiences, because the contrary belief leads to what is, to me, the absurd conclusion that making marks on paper can create a new consciousness. Feeling pain is a subjective experience.
If computers cannot have subjective experiences, then they cannot feel pain.

57. Jon K. Says:
Hi Scott, you look great in the videos, especially considering the harsh lighting. I thought your classical-computing (history, hardware, and software) analogies were very helpful in talking about where QM has been, where it is now, and your hopes for where it will be at some point in the future. With regard to IIT’s idea of asking whether or not the system under consideration can be decomposed without harming its functionality/nature/phi metric, do you see any connection to the QM distinction between quantum states that can and can’t be decomposed? Are classical states only cloneable (most of the time, unless your computer has an issue) because the state you are interested in is actually some high-level average state, made stable through error correction or some sort of fault-tolerant redundancy engineered in at a lower level? I hope these questions make sense 🙂

58. Bennett Standeven Says:
@Gerard #56: I think a similar argument shows that no cognitive process is a sufficient causal factor for consciousness either, since otherwise an imaginary cognitive process should still generate a real consciousness. (For example, imagining Sherlock Holmes making a deduction would mean that a real Sherlock Holmes is experiencing the deduction.)
@Jon K. #57: Yeah, classical states can be cloned and deleted because they are composites of many quantum states. Although their fault tolerance doesn’t need to be engineered; it’s pretty much automatic that macroscopic states will behave this way.

59. Yoni Says:
Jules-Pierre Mao #54: “the difference should also be significant in the sense of *neither trivial nor random*”. I may be misunderstanding the point you were trying to make, but for the purposes of SA’s point about the machine being able to predict the outcome, surely even a random effect would be problematic?

60. fred Says:
Let’s say that in 5 or 10 years (or during the lifetime of your active career) there are enough breakthroughs in CS that millions of qubits suddenly become available (and quantum supremacy is verified). What would you be focusing on? Designing new quantum algorithms? For example, to probe the space between P and NP (like Shor’s)?

61. Scott Says:
Jon #57: There’s a clear mathematical analogy between quantum entanglement and the properties that tend to produce large values of Φ, which I even talked about in my blog posts on the subject. But entanglement is a real phenomenon that’s not only experimentally measurable but connected to other things one might care about, whereas my own view is that the supposed link between large Φ and consciousness is just a pure, 100% confusion and mistake—one that arose by obsessing about one particular example (the brain) where those two properties happen to go together, while ignoring the many examples where they don’t.

62. Scott Says:
fred #60: Well, I’d probably no longer focus so much on the foundations of quantum supremacy experiments. 🙂 But apart from that, my research interests would probably be surprisingly unaffected by the actual availability of the devices. It’s like, there are dozens of fundamental problems that are still open about the ultimate capabilities and limits of quantum algorithms.
And while those problems will surely acquire more practical importance if someone actually builds a million-qubit QC, the latter achievement isn’t going to magically solve those problems—any more than the classical personal-computer revolution of the past 40 years magically solved P vs. NP or the other fundamental open problems about classical algorithms. Some of those problems were solved within that time—but not primarily because classical computers were widely available—while many others are still open.

63. Tomislav Ostojich Says:
Computers will never be “smarter” than humans because computers can’t think. A human brain has thinking as its final cause. So the fact that it thinks is something intrinsic and observer-independent. A computer is a collection of switches that has an externally imposed meaning. The meaning of a computation is not observer-independent, unlike the human brain’s. So the computation of a computer is more like the bits of metal on a watch face. The watch’s bits of metal don’t have any inherent purpose to tell time, because their function as time-keeping devices is externally imposed on the metal. And if computers can think, that sounds like a great excuse for not holding programmers accountable. “You see, your honor, it’s not that I made a flaw in the program that calculated the chemotherapy dosage; the computer had a mind of its own!” “Bailiff, take him away!”

64. Gerard Says:
@Tomislav Ostojich #63: “Computers will never be smarter than humans because computers can’t think.” That seems a bit like saying “Cars will never be faster than humans because cars can’t run.”

65. Scott Says:
Tomislav #63: And how exactly do you decide what something’s “final cause” is? Suppose an actual human brain, or something physically indistinguishable from one, were assembled in a factory—would its “final cause” then be to think, or merely to manipulate symbols? I sometimes ask myself how this sort of obliviousness to the enormities of a question comes coupled with such beatific confidence about the answer, but then I reflect that that question answers itself. 🙂
Everyone: I’ve been having problems with Akismet (my moderation queue overrun with spam), and even more problems with the start of the semester and zero free time, but I’ll have a new post up soon, promise!

66. Ajit R. Jadhav Says:
Gerard #64:
1. He had the scare-quotes around the word “smarter.”
2. The car analogy does not fit. Cars undergo *physical* displacements, and running, in the primary sense of the term, refers to the *physical* aspects of a certain kind of a (living) man’s activity—one that results in physical displacements. In short (and taking the liberty to put it somewhat vaguely), running is physical. But thinking is not. Thinking is *primarily* a mental activity, even if, symmetrically, this activity does require certain physical apparatus (the nervous system, esp. the brain, of a *living* being), and does have certain physical (chemical, electrical, etc.) correlates which go with it. In short (and vague) terms, thinking is mental. This difference is crucially important.
3. A while ago, I wrote a couple of posts examining panpsychism in detail at my blog. See them if interested. Bye for now.

67. Tomislav Ostojich Says:
@Gerard: that sounds like a meme, not a reason. Richard Dawkins speculated that religions are memes. But now that we have observed memes on the Internet, we know that religions are systematic knowledge and memes are analytic, neutral forms of cognition, and so are disjoint from religion in every way.
Computers are switches and so are observer-dependent. The brain has thinking as its final cause and so is not observer-dependent. A brain thinks even if nobody is looking. A computer does not compute unless someone is both looking at it and interpreting its output. This is why, when a computer kills someone, judges do not accept “the computer has gained self-awareness” as an excuse. The programmer still gets punished, and not the computer. I sometimes wonder whether, because of Silicon Valley programmers, we will end up establishing computer courts, which would be an even greater farce than the animal courts of the Dark Ages, thereby proving that the Dark Age peasant was more intelligent than the modern Silicon Valley startup founder.

68. Gerard Says:
@Tomislav Ostojich #67: You can call it a meme if you want, but that’s irrelevant to the fact that it’s a counterexample which shows that a proposition of the form “p(A) < p(H) because q(H) and not q(A)” is not a valid argument, even given that not q(H) implies p(H) = 0. As for the rest of what you say, I don’t see how it supports the claim that “computers will never be smarter than humans”. How does computation being “observer dependent” prevent a computer from achieving its goals and thus demonstrating intelligence?

69. Tomislav Ostojich Says:
I already told you: a final cause is observer-independent. A cat sees regardless of whether anybody else interprets it as seeing, or even whether the cat is conscious of himself seeing. Artifices like a clock face are just random arrangements of bits of metal that don’t mean anything unless an observer assigns a meaning. They have no final cause. Only the human’s intention to tell time is a final cause. People who are serious about computers gaining sentience should be campaigning for computer courts to be established for when they do evil. Greater-than-human sentience also implies moral agency.

70. Scott Says:
Tomislav #69: Your reply helps illustrate why Aristotle’s notion of “final cause” was abandoned with the rise of modern science. I asked you for a principled criterion to determine which physical systems have “final causes” and which don’t. In response, you simply repeated your original foot-stomping assertion that humans have “final causes” while computers are “random arrangements of bits of metal,” and that’s that. OK, but what about worms? Amoebas? Extraterrestrials? Part-organic, part-silicon cyborgs of the future? If we’re refusing to engage with hard cases, or even acknowledge their existence, then we’re not doing philosophy and are just wasting time. Banned from this blog for one year.

71. mjgeddes Says:
Yes, Scott: as for Tomislav, it’s a shame that those who don’t even grasp the rudiments of the scientific method can’t even get out of the starting blocks. And sadly, those who philosophically object to the idea that computation generates consciousness aren’t much better off, unfortunately. Stuck in a fruitless philosophical quagmire, and literally not even able to leave the starting blocks. If one just begins with the assumption that computation generates consciousness as an interesting premise, and then tries to explore where this idea leads, one might just be able to achieve something glorious… So extending my bayonet of rationality, and roaring with a mighty battle cry, I charge up the steps and storm the cognitive fortress! First in the world! OORAH! 😀 Solution to consciousness in less than 200 words: “Consciousness is a symbolic language for modelling time (TPTA – Temporal Perception & Temporal Action)!
There are two types of time: (1) logical time – a high-level abstract tree of the structure of a logical argument – call this an ‘argument tree’, and (2) physical time – a low-level tree showing counterfactual possibilities representing physical causality – call this a ‘grammar tree’. Both types of time are represented by ‘computation trees’, an extension of temporal logic (a type of modal logic). Consciousness arises when the argument trees (representing logical time) are integrated with the grammar trees (representing physical time) to form an internal ‘self-model’ – call this a ‘narrative tree’ (or cognitive time). The argument tree lets us plan for the future (Temporal Action), the grammar tree lets us reflect on the past (Temporal Perception), and the narrative tree (the self-model) is for communicating our intentions in the present (Choice). TPTA – Temporal Perception & Temporal Action!”

72. Joe Says:
A couple of hours of googling quantum computers died here. I started the evening hearing from a friend that 2000-qubit quantum computers were being made and tons of universities were buying them. I ended up watching your video saying quantum supremacy is a hope for the future. So disappointing.
It is known that the Schrödinger equation for the electron wave function in atoms can be solved analytically only when a single electron is present (the "hydrogenlike atom"). In that case, the angular part of the equation leads to a solution in spherical harmonics, whence the quantization rules and numbers for angular momentum. When speaking about atoms and molecules with more than one electron, it is widely assumed that the angular momentum is quantized in a similar manner. Is this a good approximation based on experimental observations, or can it be proved mathematically that the azimuthal and magnetic quantum numbers can be used notwithstanding the presence of several electrons, and in molecules with several nuclei (in these cases the Schrödinger equation does not contain a purely radial potential function)? According to an answer to a previous question, it seems that this is observational evidence rather than a mathematical assertion (https://physics.stackexchange.com/q/22815). • The question you linked to at the end does not contain any such claim. What parts of the answer give you such an impression? – Emilio Pisanty Jan 20 at 16:23 • The fact that in the answer an observational method (rotational spectroscopy) is cited, but no reference to a mathematical proof is provided. Of course, the conclusion is not stated explicitly (that is the reason why I wrote "it seems that…"). – Stefano Zunino Jan 21 at 7:38 This kind of depends on exactly what it is you're talking about. • In multi-electron atoms, the total angular momentum $\mathbf J = \mathbf L+\mathbf S$, which includes the orbital and spin angular momenta of all the electrons in the atom, is rigorously conserved. This is the $J$ quantum number of the atom and it is always a "good quantum number" (in the sense that it represents a conserved quantity). • If you ignore spin, and spin-orbit coupling, then even in a multi-electron atom the orbital angular momentum is rigorously conserved, which seems to be one of your primary worries when you say things like "in that case the potential is not central, because of the repulsion between the different electrons". Indeed, for multi-electron atoms the potential is no longer central, but it is still symmetric under global rotations: if you move a single electron without moving the others then the hamiltonian obviously changes (so the angular momentum of each individual electron isn't conserved), but if you move them all under the same transformation then their relative distances are untouched, so the interaction remains invariant. This global symmetry under rotations means that the generator of that symmetry (i.e. the total orbital angular momentum operator, $\mathbf L = \sum_j \mathbf L_j$) is rigorously conserved, and there is a shared eigenbasis of the electronic hamiltonian and this angular momentum. • In multi-electron atoms, the total orbital and spin angular momenta (i.e. $\mathbf L$ and $\mathbf S$ separately, each of which includes contributions from all the atom's electrons) are rigorously conserved in the absence of strong spin-orbit coupling. This is the framework where term symbols of the form $^{2S+1}L_J$ with well-defined orbital and spin angular momenta (like, say, ${}^2\mathrm{P}_{3/2}$) come from.
As the strength of the spin-orbit coupling increases, which generally happens as the atomic number gets bigger, you will generally start getting a small amount of mixing between terms of different character (so, say, a ${}^2\mathrm{P}_{1/2}$ state might also include a 0.1% amplitude of ${}^4\mathrm{D}_{1/2}$ character). In more rigorous language, this means that the hamiltonian eigenfunctions no longer have 100% overlap with their dominant contributing state in a definite-$\mathbf L$ basis, or that the $\mathbf L$ eigenfunctions are no longer exact eigenfunctions of $H$. (Nevertheless, the fact that this involves an approximation on the side of which terms contribute to the hamiltonian should not be taken as a way to immediately dismiss angular-momentum conservation. In particular, everything in physics is approximate, and you always need to look at what other approximations are involved in whatever you're doing. In this specific case, if you wanted to detect that ${}^4\mathrm{D}_{1/2}$ contribution, then you'd look for a transition line that would otherwise be (dipole-)forbidden in the ${}^2\mathrm{P}_{1/2}$ state (like, say, to an $\rm F$ state), but then you have to seriously consider whether an observation of that transition is evidence of that ${}^4\mathrm{D}_{1/2}$ contribution or whether it is instead caused by quadrupole or higher-order selection rules.) As the atomic number gets really big, and you get into the large-atom regime, then $\mathbf L$ stops being a "good quantum number" (i.e. the spin-orbit coupling becomes so big that $\mathbf L$ eigenfunctions are no longer good approximations to the hamiltonian eigenstates), and you pass through some intermediate coupling schemes (which again have mostly-well-defined angular momenta coming from sundry combinations of spin and orbital angular momenta from different shells) before settling into jj coupling at the high-$Z$ end of the scale, where you have states with well-defined $\mathbf j$ per shell (with trace contributions from other $j$'s). • In multi-electron atoms, it is also common to talk about the angular-momentum characteristics of individual electrons, and the orbitals they occupy to form the system's multi-electron state. These orbitals should only ever be considered as a convenient basis within which to formulate the true multi-electron state, and indeed single-electron orbitals have extremely limited physical reality in a multi-electron setting, as I've explained elsewhere. • For molecules, and specifically for linear molecules like diatomics, we often talk about the angular momentum of the electrons. Generally, the total angular momentum of the electrons is never conserved in such molecules, since the hamiltonian is not spherically symmetric, but within the Born-Oppenheimer approximation we do have one axis of symmetry and therefore the conservation of one component of the angular momentum, generally taken on the $z$ axis. This is where molecular term symbol notation comes from, with one letter denoting the orbital angular momentum (now in Greek, to emphasize that it's only $L_z$ that's conserved; as in atoms, lower-case terms denote single-electron orbitals and upper-case terms denote multi-electron combinations). This is typically a rigorously conserved quantity, unless you're looking at a system where nonadiabatic, beyond-Born-Oppenheimer couplings between nuclear and electronic motion are non-negligible. (Hint: if you have to ask, then those couplings are negligible.)
• For molecules, we also care about the total angular momentum of the overall motion of the system, i.e. of the nuclear positions. This is a rigorously conserved quantity (so long as you're in the gas phase, obviously), giving a guaranteed "good quantum number" for rotational molecular spectroscopy; any experimental limits on this are purely on the instrumentation side – i.e. this is always conserved, but if the system is too large then that might be hard to verify explicitly. I'm unsure about what's the current record in terms of the molecular size and mass at which rotational spectroscopic verification of the conservation of this angular momentum is still feasible. On the other hand, for small molecules, this quantity can be pushed to ridiculous extremes in terms of the value of $J$, which can be pushed all the way up to the hundreds and even a thousand using optical centrifuges (examples: classic, modern). • Thank you @Emilio Pisanty for your answer. As I understand it, you are confirming my feeling that speaking about azimuthal and magnetic quantum numbers of electrons in atoms and molecules is a (very good) approximation, but they are not rigorously true quantities. Of course, the total angular momentum of any isolated physical system is conserved if the potential does not depend on an absolute direction (that is a general statement, not limited to quantum mechanics), but how the angular momentum is quantized in a particular atom or molecule is a different matter. – Stefano Zunino Jan 21 at 7:55 • No, that's not what I'm saying. You haven't given enough details about what you're thinking of to say anything more specific, but generally speaking there's plenty of 'rigorously true quantities' to be had in this area. If you want more details, you should explain in more depth what you have in mind. – Emilio Pisanty Jan 21 at 9:13 • By "rigorously true quantity" I mean an eigenvalue that can be obtained by applying a quantum operator to a wave function that is an eigenfunction for that quantity. I think that you called that a "good quantum number" (but I may have understood differently from your intention). – Stefano Zunino Jan 21 at 10:29 • Yes, that is obvious. What's not clear at all is what you mean by "speaking about azimuthal and magnetic quantum numbers of electrons in atoms and molecules" – that's an extremely broad range that encompasses a large number of different usages, with several different statuses. It's impossible to comment in more detail (beyond "most are rigorous, some are approximate") without knowing which usages in particular you have in mind. – Emilio Pisanty Jan 21 at 11:07 • By "speaking about azimuthal and magnetic quantum numbers of electrons in atoms and molecules" I mean whether such numbers are components of an eigenvalue, or simply good approximations based on the solution of the hydrogenlike ion. – Stefano Zunino Jan 21 at 11:10 As an initial remark, the Schrödinger equation can be solved exactly for a variety of potentials, not just "hydrogen-like" atoms – the cases of the harmonic potential, the Morse potential, or the Pöschl-Teller potential immediately come to mind, but there are multiple other ones as well that don't have entries on Wikipedia. Quantization of angular momentum follows exactly from the commutation relations of the angular momentum operators $\hat L_x,\hat L_y$ and $\hat L_z$.
These commutation relations do not depend on the number of particles in the system. The simplest example would be two spin-1/2 particles, made famous by various versions of Bell's theorem. The total spin (or angular momentum) of this 2-particle system remains quantized and it can only take the values $0$ or $1$ depending on how the state is prepared. (A short numerical check of this two-spin example appears at the end of this thread.) Another example is the nuclear $su(3)$ model – not so used in chemistry but still quite useful for multi-particle nuclei. The 3-dimensional harmonic oscillator can be solved using the $su(3)$ Lie algebra, and the angular momentum operators are in this algebra. It is also "easy" to construct multiparticle $su(3)$ states using Lie algebraic techniques (just an extension of Clebsch-Gordan technology for angular momentum). For these multiparticle states, angular momenta of the individual constituents are combined in the usual way and remain quantized. More generally, when the potential is not central, angular momentum will not be conserved for individual single-particle states, and so states will not necessarily be eigenstates of angular momentum. An example of this is the Nilsson model for deformed nuclei. This does not mean that basis states with good angular momentum quantum numbers cannot be used to start the calculations, just that the final states will be linear combinations of those single-particle basis states, with $\Delta \ell\ne 0$ admixtures. Of course, since the total Hamiltonian must be a rotation scalar, the final eigenstates of $H$ can be chosen to have good angular momentum. Angular momentum remains quantized since the total angular momentum operators still satisfy the usual commutation relations. • In my question, I'm not speaking about the Schrödinger equation in general (which has several analytical solutions for particular potentials), but specifically the Schrödinger equation for electrons in atoms and molecules, which cannot be solved analytically except for a single electron. The commutation relations between quantum operators only guarantee that the related quantities can have simultaneous eigenvalues (if the commutator is null), not how the quantity is quantized; so they do not prove that the azimuthal and magnetic quantum numbers are valid in any case. – Stefano Zunino Jan 21 at 7:40 • I basically agree with your last paragraph. However, if the potential is not central (i.e. it does not depend on a single distance $r$ called the radius, but on several distances), this does not mean that the angular momentum of the whole system is not conserved (it is, as a consequence of Noether's theorem), only that the angular momenta of the single electrons are not good quantum numbers of the system. – Stefano Zunino Jan 21 at 7:48 • @StefanoZunino It should be fixed now… Thanks for bringing this important point to my attention. – ZeroTheHero Jan 21 at 21:03 To sum up: • the total angular momentum $\vec{J}$ is always conserved in isolated atoms and molecules, since the potential functions do not depend on a particular direction (they are rotationally invariant for a global rotation of the system around the origin of the frame of reference, typically the center of mass for multi-nuclear molecules) and thus the wave function of the system is an eigenfunction of the squared-magnitude angular momentum operator $\hat{J}^2=\hat{J}^2_x+\hat{J}^2_y+\hat{J}^2_z$ ($\hat{J}_i$ is the generator of infinitesimal global rotations around the axis $i$).
• since it is possible to construct mathematically an angular momentum operator for a single direction $z$ that commutes with the operator $\hat{J}^2$, namely $\hat{J}_z$, the angular momentum projection $J_z$ is also conserved (see e.g. Landau, Lifshitz, Course of Theoretical Physics, Vol. III, § 27). • from the conservation of $J_z$ and the mathematical expression of the operator $\hat{J}_z$, it is possible to define a total angular momentum projection quantum number $M_J$, so that $J_z = M_J \hbar$. The number $M_J$ must be an integer or half-integer (in order to have a single-valued probability density). The squared magnitude of the total angular momentum can also be derived as $\vec{J}^2=J(J+1)\hbar^2$, where $J \geq |M_J|$ is the total angular momentum quantum number, also an integer or half-integer. • the orbital angular momentum of all the electrons $\vec{L}$ and the spin angular momentum of all the electrons $\vec{S}$, whose sum is the total angular momentum $\vec{J}=\vec{L}+\vec{S}$ (disregarding the nuclear angular momentum $\vec{I}$), are approximately conserved separately, i.e. they are conserved separately if the influence of the spin is disregarded in the solution of the Schrödinger equation for the electrons' wave function (if $\vec{S}$ is fixed, the conservation of $\vec{L}$ follows from the conservation of $\vec{J}$). In that approximation, it is possible to define the azimuthal quantum number of the orbital part of the electrons' angular momentum $L$, the magnetic quantum number of the orbital part of the electrons' angular momentum $M_L$, the spin quantum number $S$ and the spin projection quantum number $M_S$, which are good quantum numbers. • it is not possible to separate a conserved orbital angular momentum for the single electrons in atoms and molecules, so $\ell$, $m_\ell$ are not good quantum numbers. The spin projection quantum number $m_s$ is likewise not conserved separately for any electron (of course, the spin quantum number is always $s=1/2$ because that is an intrinsic property of electrons). However, in single atoms the picture of electrons occupying separate orbitals with definite $\ell$, $m_\ell$, $m_s$ is a good approximation (at least for light atoms). • the same is a fortiori true for molecules, where additionally it is more difficult to separate an $L$ term, since the orbital angular momentum must be calculated with reference to the center of mass, not to a single nucleus. Therefore, orbitals with a specific magnetic quantum number are defined only for simple molecules (e.g. diatomic molecules with an axial symmetry). • Your points about the atoms are vaguely correct, though you don't seem to understand the status of the approximations involved. If you think that $\vec L$ is conserved in hydrogen but not in, say, carbon, then you've certainly not understood. What makes you think that there is no spin-orbit coupling in hydrogen? What distinguishes nuclear-spin coupling, which allows you to ignore $\vec I$, but somehow forbids you from discarding spin-orbit coupling when it is negligible? – Emilio Pisanty Jan 21 at 15:11 • @EmilioPisanty I never stated what you are writing in the comment. The "hydrogenlike ion" is a model; I never thought that the hydrogen atom is different from other atoms, except that there the total and single-electron quantities coincide. – Stefano Zunino Jan 21 at 15:16 • That's again missing the point, but I'm not sure what could be achieved by repetition at this point.
Good day! – Emilio Pisanty Jan 21 at 15:26
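The claim made in the answers above, that total angular momentum remains quantized for multi-particle systems because the total operators obey the same commutation relations, can be checked directly for the simplest case mentioned in the thread: two spin-1/2 particles. The following minimal sketch (not part of the original thread; NumPy and the explicit 4-dimensional two-spin space are illustrative choices) builds the total spin operators and confirms that $\vec S^2$ has only the quantized eigenvalues $S(S+1)=0$ and $2$:

```python
import numpy as np

# Single spin-1/2 operators in units of hbar.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def total(op):
    """Total-spin component for two particles: S = S1 (x) I + I (x) S2."""
    return np.kron(op, I2) + np.kron(I2, op)

Sx, Sy, Sz = total(sx), total(sy), total(sz)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz        # total spin squared

# Eigenvalues of S^2 are S(S+1): expect 0 (singlet) and 2 (triplet, threefold).
print(np.round(np.linalg.eigvalsh(S2), 10))        # -> [0. 2. 2. 2.]

# The commutation relation [Sx, Sy] = i Sz holds regardless of particle number.
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))     # -> True
```

The same Kronecker-product construction extends to any number of particles; only the dimension grows, while the commutation relations, and hence the quantization, are unchanged.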
Original Research Article, Front. Phys., 29 August 2018 | Time, the Arrow of Time, and Quantum Mechanics • Institute for Theoretical Physics, Utrecht University, Utrecht, Netherlands It is brought forward that viable theories of the physical world that have no variable at all that can play the role of time, do not exist; some notion of time is one of the very first ingredients a candidate theory should possess. Almost by definition, time has an arrow. In contrast, time reversibility, or even the possibility to run the equations of motion backwards in time, is not at all a primary requirement. This means that the direction of the arrow of time may well be uniquely defined in the theory, even locally. It is explained that a rigorous definition of time, as well as a formulation of the causality and locality concepts, can only be given when one has a model for the physical phenomena described. The only viable causality condition is one that is symmetric under time reversal. We explain these statements in terms of the author's favored deterministic cellular automaton interpretation of quantum mechanics, also to be referred to as "vector space analysis," and expand on these ideas. It is also summarized how our more rigorous causality condition affects Bell's theorems. What distinguishes quantum systems from classical ones is our fundamental inability to control the microscopic details of the initial state when phenomena are studied in the light of some theoretical model. 1. Introduction; Defining Time The universe as we know it is characterized by a framework called space-time, in which events take place. The events are characterized first of all by their locations in space, and moments in time, all together indicated in terms of coordinates. The number of coordinates needed, usually real numbers, is called the dimension of space-time. The coordinate that indicates time is a very special one. It is the only coordinate in which it is meaningful to define an ordering in the values given, the order of time. This ordering defines an orientation, called the arrow of time. It allows us to define an ordering (or at least a partial ordering) of all events. Whenever we build models that explain the existence and nature of the events, it is of extreme importance to have such an ordering of events; it allows us to explain them sequentially: one event can be the cause of a subsequent event if its time variable is lower, or it could be the consequence of an event if its time coordinate is higher. It is difficult, probably impossible, to devise a model of our universe if no ordering is defined for the model to describe the events. This in fact will bring us to provide a definition of time that is more primary, more basic than all other ingredients of our model, including the notion of space. Our universe is known to carry a memory of things that happened in the past. Whenever we build a model of our universe, one that is controlled by "laws of physics," it should come with a completely unambiguous prescription of the order in which the laws of Nature should be imposed on all events that take place. Regard the laws of physics as a computer program to calculate the next sequence of events. The data that we have to enter into the program may come from events calculated earlier.
They may not come from events that still have to be specified, because in that case conflicts may arise: if event A affects the features of event B then event B should not react back to modify event A, otherwise the rules cease to be unique; they will literally be circular, making them either self-contradictory or ambiguous, and for that reason they would not be suitable to explain observed phenomena. Notice that this is the extreme opposite of Newton's action principle: if event A acts on event B with some force, event B should not react back onto A. Newton's action principle, action = reaction, is different because it is in space-like directions, and because it often neglects some minute time delays that are involved: the (re-)action cannot spread faster than the speed of light. The ordering caused by the rule "A affects B but B cannot affect A" is one we cannot do without. Assuming indeed that the universe allows for the existence of such an elementary action ≠ reaction principle, we obtain a unique definition of time: Time is the order in which our models for nature predict, prescribe or explain events. Notice that this definition of time supposes that we construct models to explain our universe. If one would only collect data, without attempting to explain them, we would not need any notion of time. After all, the data could have been presented to us in "non-chronological" order. It is our model that definitely requires an order. Any parameter, any coordinate that increases monotonically in that order, will be a useful time coordinate. Notice also that quantum mechanics provides no exception to our rule; it also requires a definition of an ordered time coordinate. We can say this because the Schrödinger equation¹ involves exclusively a first-order derivative in time. Therefore only one boundary condition is needed, taken to be the situation in some distant past, to determine the situation in the future. The primary definition of time given above only defines the time ordering, but does not attach real numbers to time. In fact, the use of integers, so as to count the events that we calculated, would have been more appropriate. Considering the humongous size of our universe, and the extremely short time sequences expected to be relevant at the Planck scale, one may expect these integers, if they exist at all, to be extremely large, larger than ~$10^{60}$. Scaling these numbers down for practical use probably suffices to explain why, at present, real numbers seem to be more useful than integers to indicate time. According to special relativity, one can have events that are space-like separated. This means that there may be events A and B such that our model allows us to calculate what happens in A and in B without the need to specify their order. The importance of this is that the definition of time given above is not unique; it is a feature of the notion of time that will have to be taken into consideration when building more advanced models, but it seems to be less basic as far as first principles are concerned. Among the questions asked to the author was one concerning the theory of special relativity. Issues concerning special relativity in relation to the question of time and its arrow are discussed in Appendix B. 2. Quantum Mechanics The theory of quantum mechanics is arguably one of the greatest discoveries of physics; it revolutionized our understanding of molecules, atoms, radiation, and the world of the sub-atomic particles.
Yet even now, almost 100 years later, there is still no complete consensus as to what the theory tells us about reality, or even whether "reality" exists at all. Some authors adhere to the idea that all "realities" exist somewhere in some alternative universes, and that these universes evolve together as a "multiverse²." The present author does not go along with such ideas. Quantum mechanics is a superb description of the world of tiny things, but, on the face of it, quantum mechanics seems merely to reflect humanity's ignorance. We do not know which reality it describes, and as long as this is the case, we should not be surprised that, in a sense, all possible realities play a role whenever we try to make the best possible prediction of the outcome of an experiment. The fact that many of us have technical difficulties implementing such a thought in the equations known to work best today may well be due to lack of imagination as to how eventually the correct view will be found to emerge. The author has made his own analysis of the known facts, and came to the conclusion that the Copenhagen doctrine, that is, the consensus reached by many of the world's experts at the beginning of the twentieth century, partly during their numerous gatherings in the Danish capital, has it almost right: there is a wave function, or rather, something we call a quantum state, being a vector in Hilbert space, which obeys a Schrödinger equation³. The absolute squares of the vector components may be used to describe probabilities whenever we wish to predict or explain something. Powerful techniques were developed, enabling one to guess the right Schrödinger equation if one knows how things evolve classically, that is, in the old theories where quantum mechanics had not yet been incorporated. It all works magnificently well. According to Copenhagen, however, there is one question one should not ask: "What does reality look like of whatever moves around in our experimental settings?" or: what is really going on? According to Copenhagen, such a question can never be addressed by means of any experiment, so it has no answer within the set of logical statements we can make of the world. Period, schluss, fini. Those questions are senseless. It is this answer that we dispute. Even if this kind of question cannot be answered by experiments, we can still in theory try to build credible models of reality. Imagine the famous detective Sherlock Holmes entering a room, with a dead body lying on the floor. The door is open, and so is the window. A crime has been committed. Did the perpetrator come through the window or through the door? Or did something altogether different happen? Sherlock Holmes ponders all possibilities, but he will not say: the perpetrator came through the window and through the door, using a wave function, etc. etc. Clearly such answers are not accepted in the ordinary world. Sherlock Holmes may well conclude that he cannot derive the answer with certainty, but what he can try to find out is what could have happened. Have we been brainwashed to accept wave functions in the world of the atoms? Should we not, here also, ask what it really was, or what it could have been, that has been going on? Perhaps we are using the wrong language. Maybe atoms and molecules do not exist in the form we imagine them.
Maybe Nature's true degrees of freedom are very different, and only when we consider the statistics of many atoms does our language, which assumes these to be particles obeying quantum equations, work out correctly. When early attempts to construct such models failed, investigators tried another path: maybe one can prove that there exists no reality at all whose probabilities can be caught in terms of a Schrödinger equation? Suppose that we impose conditions on such models, such as locality and causality. Can one prove or disprove that realistic models exist? What then happened is well known. The first to consider such an option was Einstein, together with his co-authors Podolsky and Rosen [1] (see also Jammer [2]). They conceived of a Gedanken experiment to show that quantum mechanics cannot exactly provide a local description of what is going on. This conclusion is in fact somewhat contradictory, because quantum mechanics was used to describe as accurately as possible what predictions can be made, and that result was rarely disputed by anyone; indeed it was confirmed later by real experiments. The setup was recast by Bell into a somewhat more realistic scenario using particle spins, and he gave the apparent contradiction a more precise wording: Bell's theorem: the outcome of a quantum mechanical calculation of some non-local correlations contradicts any acceptable "classical" explanation by at least a factor 2. The inequality, called the Bell inequality, was subsequently generalized and made more precise [3]. 3. Causality, Correlations and Quantum Mechanics This finding did not go undisputed. Many authors attempted to locate the flaw in Einstein's and Bell's argument, but logically it seemed to be impeccable. Bell assumed that determinism means that one can build a model, any model, in which classical equations control the behavior of dynamical variables, and where, at the tiniest scales where these variables describe the data, the evolution laws do not leave the slightest ambiguity; there are no wave functions, no statistical considerations, as everything that happens is controlled by certainties. Moreover, there is some sense of locality: the laws control all processes using only the data that are situated at given localities, while action at a distance, or backwards in time, is forbidden. The classical degrees of freedom that "really" exist were called "beables." Here, the first topic for discussion arises: what does "action backwards in time" mean? In "La Nouvelle Cuisine," Bell [4, 5] formulated as precisely as he could what "causality forward in time" means: A theory is said to be locally causal if the probabilities attached to values of local beables in a space-time region 1 are unaltered by specification of values of local beables in a space-like separated region 2, when what happens in the backward light cone of 1 is already sufficiently specified […] Region 2 is assumed to be completely outside the past light cone of 1, so what happens there must be immaterial. It sounds fine, and many researchers agree with it, but there is a problem: region 2 also has a past light cone, and if we consider some modification of the events in 2, these may disagree with what we postulated in region 1, since the two past light cones overlap. Consequently, correlations between the data in region 1 and region 2 cannot be excluded. In fact, such correlations are known to occur ubiquitously in the physical world, so what does "Bell-causality" really mean?
What Bell needed to have said is that, in any model describing the laws of nature, only the data in the past light cone of 1 should determine what happens in 1, while he should not have referred to correlations. Yet Bell's inequalities are about correlations, and these are assumed to be absent outside the light cone. In the same vein, "backwards causality" is rejected: the past should not depend on the future. This is true in the following sense: our model should not require knowledge of the data in the future to prescribe the data at present (it should only require data in the past light cone). Correlations do occur. In fact, if our model reflects reversibility in time—which most models do—then the data inside the future light cone can be used to determine, that is, to reconstruct, the present or the past, back from the future. In the above, the words our model were emphasized. What is important here is that causality cannot be a feature or property of the physical data themselves, but rather a property of the equations of motion with which we try to mimic these data. If two different theories can be used to describe the same set of data, then one of these theories might have causality and the other not. This is an element of the Bell "paradox" that may not have been emphasized sufficiently. Most models of nature are reversible in time; we can run the basic equations backwards in time as easily as forwards in time⁴. This implies that theories with causality forwards in time must also have causality backwards in time; this was ignored by Bell. There is nevertheless a good reason why Bell's profound result is considered irrefutable by most researchers today. The actions of observers in quantum experiments are considered to be completely classical, and they reflect the observers' free will. To overrule Bell's theorem, the observers' free will must be correlated with quantum data in the past. This is considered "absurd" by most researchers. In the next section, and in Appendices A and C, this author's response, as to why these correlations may be not so absurd after all, is further illuminated. The theory used by the author was called the "Cellular Automaton (CA) Interpretation" [6], but perhaps a preferable denomination is "vector space analysis⁵." It is the idea that a classical system may be analyzed by associating any state of the system with a vector, such that all states together form an orthonormal basis of a vector space called Hilbert space. "Vector space analysis" consists of the mathematical procedures made possible by performing any kind of transformations in this vector space. One ends up with a Schrödinger equation exactly as in quantum mechanics. Thus, vector space analysis contradicts Bell's theorem. Our theory consists of the assertion that what we call quantum mechanics today can be the result of a vector space analysis of some classical system. The "CA Interpretation" of quantum mechanics consists of the assumption that this is true, while we refrain from further attempts to identify the classical system underlying it. The author hopes however that the search for appropriate classical models will continue, and that it will bear fruit. We end this section with the remark that a restriction exists, called "causality," which can be imposed on any model for elementary particles. It is not disputed, but in fact used a lot in quantized field theories.
This condition considers operators $\phi(x)$ in quantum field theories, describing (elementary or composite) fields $\phi$ at 4-space-time coordinates $x$. Let $x_1$ and $x_2$ be space-like separated. Then we have for the commutator: if $(x_1-x_2)^2>0$, then $[\phi(x_1),\phi(x_2)]=0$.    (3.1) This says that any operation $\phi(x_1)$ acting on any quantum state at space-time point $x_1$ cannot affect the result of any dynamical effect of $\phi(x_2)$ occurring at $x_2$. In the Standard Model for the elementary particles, this condition, "no Bell telephone," is found to hold true, and it has important applications in calculations. However, this condition does not distinguish causal relations in the forward time direction from ones in the backwards time direction, so it could not be used to derive inequalities such as Bell's. The "no Bell telephone" condition does not depend on the arrow of time. 4. The Bell and CHSH Inequalities Bell's Gedanken experiment is in essence much the same as the Einstein Podolsky Rosen set-up. A local device is constructed that can emit two entangled particles, α and β, which leave the machine in opposite directions. Alice (A) and Bob (B) both choose whether to measure property X or property Y of the particles they can see. Alice chooses setting a to measure α and Bob setting b to measure β. The correlations needed to explain the quantum mechanical result require that the settings a and b chosen by Alice and Bob must be correlated with one another as well as with the (classical) spins of the two entangled particles. The author calculated the minimal amount of correlation that is needed to produce the quantum result. We found the following distribution [6]: $W(a,b,\lambda)=C\,|\sin(2a+2b-4\lambda)|$,    (4.1) where a is the angle chosen by Alice for her measurement, b is Bob's angle, and λ a parameter describing the polarization of the entangled photons produced by the source and detected by Alice and Bob. W is the probability distribution, and C is a normalization constant. It features a 3-body correlation: whenever we integrate over all values of a, or all values of b, or all values of λ, we get a flat distribution (a numerical check of this property is sketched below). To show rigorously that such correlation features are unacceptable for any theory that generates quantum mechanics from classical mechanical laws, Bell had to formulate his definition of causality. We indicated above that his definition does not apply to physical systems, so one could terminate the discussion here and now, since correlation functions are not bounded by light cones. Yet the correlation function (4.1) is considered unacceptable by most investigators. How can it be that decisions by Alice and Bob, made out of free will, can yet be correlated with something that happened earlier, the polarization chosen by the entangled photons emitted by the source? Did these photons "know" what settings Alice and Bob would later choose, or is this a case of "conspiracy?" How can a single photon guide the classical dynamical variables a and b? To explain this, we now summarize how vector space analysis works. Suppose we have a classical theory at, for instance, the Planck scale, $10^{-33}$ cm. This would typically be a cellular automaton, which can be in some $2^{10^{99}}$ states in every cm³. Every one of these states is called "ontological," which means it is realized or it is not realized, but superpositions do not exist. It is precisely the thing that Einstein, Bell and others wanted to disprove. Just in order to do mathematics, we now attach a basis vector to every one of these ontological states.
They are set up such that they form an orthonormal basis of a $2^{10^{99}}$-dimensional vector space for each cm³. At the beat of a clock, typically with the Planck frequency of some $10^{44}$ Hertz, these states evolve into other states. This we write using the evolution matrix, which consists of a single 1 in each row and in each column, and zeros everywhere else. The math we use consists of diagonalizing this matrix. This gives us the eigenstates of the energy, i.e., of the Hamiltonian. One finds that the states of this model obey the Schrödinger equation. Now all energy eigenstates are superpositions of the ontological states, and if we limit ourselves to states with energies below 1 TeV for every excitation, then this corresponds to a very tiny subspace of the entire Hilbert space, while every state we can use is a superposition of ontological states. Without loss of generality, we can interpret the coefficients of these superpositions by taking their absolute squares to indicate probabilities. This is further elucidated in Appendix A. Here it is important to observe that "reality" is always described as one of the original ontological states, and never a superposition, yet we may use the Schrödinger equation to describe both the ontological states and the superpositions. The elements of the ontological basis always evolve into other elements of this basis, and superpositions into superpositions. We call this the law of conservation of ontology. There is a good reason why many attempts at making realistic models explaining the violation of Bell's inequalities failed, which is that, in these models, it was attempted to mimic superpositions of particular modes in terms of other valid modes of an automaton. It is much better to keep superpositions as what they are: superpositions of valid automaton modes which, for that reason, cannot by themselves act as ontological states. What happens instead is that, if one considers some superposition of physical states, one is actually considering a probabilistic mixture, but what exactly the true, unmixed, physical states are differs from one experiment to the next, in such a way that the final state can never be in a superposition. Because this feature is of tremendous importance, we explain some technical details of this point in Appendix A. Now we can see that, in deriving their inequalities, Bell and CHSH had to make assumptions that we cannot agree with. Their main assumption is that Alice and Bob may choose what to measure, and that this should not be correlated with the ontological state of the entangled particles emitted by the source. However, when, in choosing their settings, either Alice or Bob change their minds ever so slightly, their classical settings represent a different ontological state than before. The photon they look at now will be a superposition of the old photons that they wanted to detect, but the entire state, photon plus settings, will be orthogonal to the previous one. In particular, because of the ontological conservation law, the new photon they look at must be an ontological one. Alice and Bob do not have the free will to look at photons that are not ontological. So, while changing their minds, Alice and/or Bob had to put the universe in a different ontological state than the previous state, and this modification goes back billions of years, all the way to the origin of the universe.
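The diagonalization step just described can be made concrete in a toy model. The following is a minimal sketch (our illustration, not from the paper; the five-state automaton and the NumPy implementation are arbitrary choices): a deterministic automaton whose one-step evolution is a permutation matrix, i.e. an "ontological" matrix with a single 1 in each row and column. Diagonalizing it yields energy eigenstates that are superpositions of the ontological basis states, while an ontological state itself always evolves into another ontological state:

```python
import numpy as np

M = 5                                   # toy automaton with M ontological states
perm = [1, 2, 3, 4, 0]                  # deterministic update rule: state i -> perm[i]
U = np.zeros((M, M))
for i, j in enumerate(perm):
    U[j, i] = 1.0                       # a single 1 per row and column ("ontological" matrix)

# Diagonalizing the one-step evolution operator defines a Hamiltonian via U = exp(-i H dt):
eigvals, eigvecs = np.linalg.eig(U)
print("energy spectrum (eigenphases):", np.sort(-np.angle(eigvals)))

# Energy eigenstates are superpositions of all ontological basis states:
print("one energy eigenstate:", np.round(eigvecs[:, 1], 3))

# Yet an ontological state evolves into an ontological state, never a superposition,
# so the Born-rule weights |<i|psi>|^2 are merely permuted ("conservation of ontology"):
psi = np.zeros(M); psi[0] = 1.0
print("U|0> =", U @ psi)                # again a basis state
```

In the paper's language, the interesting physics appears when one restricts attention to the low-energy part of such a spectrum, where only superpositions of ontological states are available.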
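Similarly, the three-body correlation property claimed for the distribution (4.1), namely that integrating out any one of a, b, or λ leaves a flat distribution, is easy to verify numerically. A minimal sketch (our illustration; the grid resolution is an arbitrary choice):

```python
import numpy as np

n = 120                                          # grid resolution (arbitrary)
a = np.linspace(0, np.pi, n, endpoint=False)     # W has period pi in each variable
A, B, L = np.meshgrid(a, a, a, indexing="ij")
W = np.abs(np.sin(2*A + 2*B - 4*L))              # unnormalized W(a, b, lambda) of Eq. (4.1)

# Integrating out any single variable should give a flat (constant) distribution:
for axis, name in [(0, "a"), (1, "b"), (2, "lambda")]:
    marg = W.sum(axis=axis)
    print(f"{name} integrated out: relative spread = {marg.std() / marg.mean():.2e}")
```

Each marginal comes out constant up to discretization error, so no pairwise correlation survives; the correlation in (4.1) is genuinely three-body.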
One could call this reaching back in time retro-causality, but it is merely due to the fact that the (classical as well as quantum) equations can, in principle, be solved backwards in time. As a consequence, Alice's and Bob's settings can and will be correlated with the state of the particles emitted by the source, simply because these three variables do have variables in their past light cones in common. The change needed to realize a universe with the new settings must also imply changes in the overlapping regions of these three past light cones. This is because the universe forces itself to stay ontological at all times. The restriction that the universe must be in an ontological state at all times is the only restriction. This implies that Alice and Bob still have free will in the classical sense; they can choose any of the ontological states of the universe, no matter what kind of random number generator or lotto machine they were using. But they cannot put the universe in a superposition of states, which is only something we can do in our mathematical models when studying probability distributions, wishing to bring these into a form such that we can apply Schrödinger equations. So let us emphasize and summarize this essential point: Whenever observers seem to be using their "free will" to choose the settings of the detectors they use, they cannot 'change their minds' unless microscopic data at all times in the past are modified as well. Among others, the (entangled) photons in Bell's experiment will be re-arranged into some other quantum state in such a way that the photons eventually measured will always be in an ontological state: they cause a detector either to click or not to click, but they can never cause detectors to go into a superposition of states. In particular, if we assume that the universe started with a given, fixed state at t = 0 (the Big Bang), then there is no option anymore for any observer to change his mind; his actions are fixed, even if he thought he had free will. The settings a and b are correlated with the photon polarizations λ, which should not be confused with "causation backwards in time." A related quantum paradox that has been put forward as another illustration of quantum weirdness is the so-called GHZ paradox. This paradox is of interest because its resolution can be phrased in terms of an oversimplified model of the universe, illustrating the important role of the observer as being part of the system. In Appendix C, we explain what happens in the cellular automaton theory when this Gedanken experiment is performed. 5. Information Loss and the Arrow of Time Most well-known physical theories that explain the apparent absence of time reversal symmetry contain elements of thermodynamics and entropy. Actually, in these descriptions of nature, one can explain the absence of this symmetry elegantly by blaming it on an asymmetry in the boundary conditions. When writing differential equations for the laws of nature, one always has to add what we know about the boundaries. As for the boundaries in the space-like directions, little is known, since the universe looks very homogeneous, and no boundary effects have ever been detected. The universe is either strictly infinite in the space-like directions, or we live on a spatially compact manifold such as a 3-sphere or a torus. These boundary conditions show much symmetry. In the time-like direction, however, there cannot be complete symmetry.
The universe appears to have started extremely small; conceivably it all started in a single point. That point must have been highly ordered, having total entropy very small or possibly zero. This is a reasonable boundary condition at the origin of time. Yet at the other end, when time grows to be very large, we see no need of any boundary condition; the universe may simply continue to expand forever, undergoing perpetual increase of entropy. Thus we have equations that are symmetric under time reversal but asymmetric in their boundary conditions. This suffices to explain the time asymmetry we see today. However, there are examples of mathematical systems where features exist that can be attributed either to the bulk of the system or to the boundary⁶, so that relegating all time-symmetry-violating effects to the boundary may conceivably not always work. As long as we adhere to the quantum mechanical description of all microscopic dynamical laws, we find the CPT theorem on our way, which implies that if we combine time reversal T with parity reversal P and particle-antiparticle interchange C, then this symmetry is perfect. We could well stick to our verdict that Nature's boundary conditions in the time direction suffice to explain the arrow of time. One may observe however that another source of time reversal asymmetry can be contemplated. As explained in previous sections, this author does not believe that "quantum mechanics" will be the last and permanent framework for the ultimate laws of nature. If we drop it, to be replaced by some classical ideas, the need for time reversal symmetry also subsides. We could opt for an underlying theory where information, in the classical sense, can disappear. Considering cellular automata, systems where information does get lost are much more general than the ones where information is conserved, so that switching the direction of time brings about much more dramatic changes. How can such models lead to effective quantum theories? Does local time reversal symmetry re-emerge? We claim that, for an automaton, the possibility to generate statistical correlations that are solely based on vector space analysis, that is, vectors evolving in Hilbert space, which lead to quantum mechanics, may be quite generic, and include models featuring information loss. The way to deal with information loss in this context is very straightforward in principle, while extremely difficult in practice. The way to handle this in principle is by the introduction of information classes: we identify the elements of an orthonormal basis of Hilbert space not with single states of the automaton, but with information classes. An information class is defined to be a class of states in an automaton that have the property that, after a finite amount of time, they all evolve to become the same state in the automaton. In principle, such classes may become extremely large, but in practice the odds that two states resembling one another at one moment in time will evolve into exactly the same state in the near future might rapidly go to zero as time proceeds, so that the information classes may continue to be manageable. Formally, they might become big enough to form states that can be distinguished by only inspecting the data living on a boundary surface rather than specifying what happens in the bulk. This is what we see in the physical equations for black holes, called holography, so that this may be seen as an indirect piece of evidence favoring underlying models with information loss.
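The notion of an information class is easy to demonstrate on a small non-invertible automaton. A minimal sketch (our illustration; the 12-state random update rule is an arbitrary choice): two states belong to the same class precisely when iterating the rule eventually maps them to the same state, and for an N-state automaton N steps always suffice to decide this.

```python
import random
from collections import defaultdict

random.seed(0)
N = 12
f = [random.randrange(N) for _ in range(N)]   # generic non-invertible update rule

def iterate(x, steps):
    for _ in range(steps):
        x = f[x]
    return x

# Two states are in the same information class iff they coincide after finitely
# many steps; for an N-state automaton, N steps are enough (all transients have
# length below N, and orbits that merge at all have merged by then).
classes = defaultdict(list)
for x in range(N):
    classes[iterate(x, N)].append(x)

for rep, members in sorted(classes.items()):
    print(f"states merging into {rep} after {N} steps: {members}")
```

In the construction described above, each such class, rather than each individual automaton state, would be assigned one basis vector of the effective Hilbert space.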
In underlying models with information loss, the act of time reversal takes a very interesting shape: the time reverse of ontological states in Hilbert space (beables) tends to form quantum superpositions of beables in the time-reversed Hilbert space. This may perhaps explain why superpositions follow the same laws of nature as ontological states, but for the time being we just regard these generic observations as something to keep in mind when, much like Sherlock Holmes, we attempt to figure out, in terms of models, what it might have been that actually took place, when all information we have been able to acquire takes the shape of quantum superpositions. Acknowledgments The author thanks T. Maudlin, P. W. Morgan, T. Myers, T. Norsen, and many others, for extensive discussions of these and related issues on weblogs. I also thank the editors and referees who insisted upon further clarifications to improve the original manuscript. Footnotes 1. Here, and in what follows later, all equations of the form $\frac{d}{dt}|\psi\rangle = -iH|\psi\rangle$, where $H$ is a hermitian operator, are referred to as Schrödinger equations, regardless of whether they act on wave functions or on more general vectors in Hilbert space. 2. 'Multiverse' can mean different things. In cosmology, it means that there may be different regions of our universe where the inflation rates, and perhaps also the effective laws of nature, vary. In quantum mechanics, one might view the 'many worlds' together as a multiverse. 3. See footnote 1. 4. We refer here to the equations at small time intervals, accordingly acting at small distance scales. Thermodynamics, on the other hand, valid for large time intervals, cannot be easily inverted in time. 5. The phrase "vector space analysis" is used in information technology; it is the same mathematics that is used there. We add to that procedures involving unitary transformations. 6. The θ angle in QCD is a case in point. One can describe it as a lack of invariance under topological gauge transformations, which can be entirely attributed to the boundary. Equivalently, one can regard this effect as a PC-violating term in the action, which is local. 7. In physics, the most spectacular application is the solution of the 2-dimensional Ising model by Onsager and Kaufman [9]. They turn the classical model into a quantum field theory that happens to be integrable. 8. We often get the question whether taking the absolute squares of the coefficients as the probabilities doesn't change everything. The answer is no, because the coefficients do not change at all during the entire evolution, as long as we stay in the ontological basis. References 1. Einstein A, Podolsky B, Rosen N. Can quantum-mechanical description of physical reality be considered complete? Phys Rev. (1935) 47:777. 2. Jammer M. The Conceptual Development of Quantum Mechanics. New York, NY: McGraw-Hill (1966). 3. Clauser JF, Horne MA, Shimony A, Holt RA. Proposed experiment to test local hidden-variable theories. Phys Rev Lett. (1969) 23:880–4. doi: 10.1103/PhysRevLett.23.880 4. Bell JS. On the Einstein Podolsky Rosen paradox. Physics (1964) 1:195. 5. Bell JS. La nouvelle cuisine. In: Bell M, Gottfried K, Veltman M, editors. On the Foundations of Quantum Mechanics. Speakable and Unspeakable in Quantum Mechanics. Ann Arbor, MI: University of Michigan (2001). p. 216–34. 6. 't Hooft G. The cellular automaton interpretation of quantum mechanics.
In: Fundamental Theories of Physics, 1st Edn., Vol. 185. Cham: Springer International Publishing (2016). p. 298.

7. Mermin ND. Quantum mysteries revisited. Am J Phys. (1990) 58:731. doi: 10.1119/1.16503

8. Greenberger D, Horne M, Zeilinger A. Going beyond Bell's theorem. In: Kafatos M, editor. Bell's Theorem, Quantum Theory, and Conceptions of the Universe. Dordrecht: Kluwer Academic (1989). p. 69–72.

9. Kaufman B. Crystal statistics. II. Partition function evaluated by spinor analysis. Phys Rev. (1949) 76:1232.

A. Superpositions and Born's Probabilities

Whenever theories with classical logic are proposed to explain quantum phenomena, the following questions are often raised.

Question 1: In Bell's experiment, a pair of particles—call them photons—is in an entangled state. In an ontological theory, it seems as if this pair of particles "knows ahead of time" which superposition of states will later be chosen by Alice and by Bob for their measurements. Why does this not violate causality?

Question 2: How is it that the squares of the amplitudes exactly represent the probabilities for the outcomes of measurements (Born's rule)?

Question 3: What happens when a wave function collapses? And what happens when a measurement or observation is made?

These questions are all strongly related, and they can be answered together in what was advertised earlier as the Cellular Automaton Interpretation of Quantum Mechanics [6]. The basic idea is that, at the tiniest distance scale that is meaningful in physics, presumably the Planck scale, around $10^{-33}$ cm, there are laws of physics that are most efficiently formulated without any reference to Hilbert space, quantum superpositions, qubits, or even action-at-a-distance. We have a cellular automaton there, or something that resembles one very much. A cellular automaton can best be regarded as a basic computer program where, in a massive venture of parallel computing, digital data localized on some sort of grid are updated at the beat of an extremely fast clock. The speed of the clock may vary at some points, but these are details we do not want to go into. Most importantly, information spreads with a limited velocity, basically the speed of light, and all of this information is classical. Temporarily, for simplicity, we assume the system to be reversible in time, although, as was explained earlier, this might not be necessary. This is clearly the kind of theory that Einstein, Bell, and many others thought they could disprove, but as we shall now explain in more detail, this is not quite the end of the story. There are various aspects of the system that need much more scrutiny, in particular the ubiquitous presence of very strong correlations at the micro-scale, which permeate to macroscopic distances, and the fact that it is fundamentally impossible to compress (to "zip") the system into a more coarse-grained model that reproduces all details. As soon as one tries to compress anything, uncertainties emerge that manifest themselves by looking like quantum superpositions. But I am running ahead of my argument; let us consider the situation in a meaningful order. The more complete story is presented in 't Hooft [6]. In principle, the automaton can be in a huge number of distinct states, roughly $2^{10^{99}}$ in every cubic cm (a number obtained by assuming one boolean degree of freedom per cubic Planck length).
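As an illustration of the locality just described, here is a minimal sketch (my own, not from the paper) of a one-dimensional automaton in which each cell is updated from its own value and its two neighbors; flipping one cell of the initial state then affects, after $t$ ticks, only cells within $t$ sites of the flip, the automaton's "light cone." The particular update rule and lattice size are arbitrary choices.

```python
import numpy as np

def tick(state):
    """One clock beat: each cell is updated from itself and its two neighbours."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    return left ^ (state | right)        # an arbitrary local boolean rule

n, T = 41, 12
a = np.zeros(n, dtype=int)
b = a.copy()
b[n // 2] = 1                            # identical initial states except one cell

for t in range(1, T + 1):
    a, b = tick(a), tick(b)
    affected = np.flatnonzero(a ^ b)
    if affected.size:
        # The disturbance has spread at most t cells to either side: a light cone.
        assert affected.max() - affected.min() <= 2 * t
```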
Only if we consider all of these states can the system be seen to be deterministic. Every single one of these states is important but, because of strong correlations, we perceive our world as if far fewer states were possible, typically $2^{10^{50}}$ per cm$^3$ (one boolean degree of freedom per TeV$^{-3}$). Yet compressing the system cannot be done without losing information; a more powerful technique is required. It so happens that such a technique does exist; we call it "vector space analysis." In mathematics, this is not new (footnote 7). For instance, in group theory it turned out to be useful to give matrix representations of the elements of a group. Consider a subset of a permutation group. The elements of the set on which the permutations act are represented as orthonormal vectors in our vector space. The dimensionality of this vector space equals the dimension (number of elements) of the set; it can be finite or infinite. This vector space is our Hilbert space. One can now use all the mathematical tricks available for vectors to investigate the properties of the group. For instance, one can diagonalize the matrices. This involves orthogonal (unitary) transformations of all sorts for the vectors. It is now assumed that we can do the same in the set of states of the automaton. After a number of transformations, we get matrices representing the evolution that are diagonal or almost diagonal. The effective dimensionality of our Hilbert space can now be reduced considerably, because large parts of it factorize. However, they do not factorize along the original dividing lines of our orthonormal set. We get different kinds of vectors, all of which are now superpositions of vectors of the original set. All of this is just mathematical manipulation; the physics is kept as it was. In particular, the evolution law is an ontological unitary matrix in terms of the original ontological states: a matrix with a single entry of absolute value one in each row and each column, while all other matrix elements vanish (arbitrary phase factors are allowed). After some sequence of extensive linear superpositions, our matrices will look much more generic than before. While every one of our $2^{10^{99}}$ states evolves into another state within time units as small as the Planck time, of the order of $10^{-44}$ seconds, we will find superpositions of states that evolve much more slowly. The effective time unit will now be the inverse of the energies of the most energetic particles in our particle accelerators. These energies are many orders of magnitude lower than the Planck energy, so indeed we have a much smaller Hilbert space than the original one. What is known about physics today are the evolution laws of this tiny subspace of Hilbert space. Since the time dependence is much slower here, we can write the evolution law in terms of a hermitian hamiltonian: the Schrödinger equation. We only postulate determinism in the original cellular automaton model with its humongous number of states, not in the effective, reduced model that is called physics today. Can this system violate the Bell/CHSH inequalities? First we need to specify how an observation is made in terms of the states of the original automaton. Suppose we want to establish the presence of a planet. In the interior of the planet, atoms and molecules are densely packed, so that the world in there looks quite different from the vacuum state.
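A minimal numerical sketch of this "vector space analysis" (my own illustration, with an arbitrarily chosen toy size): the deterministic evolution of an 8-state automaton is written as a permutation matrix; diagonalizing it yields eigenvalues $e^{2\pi i k/8}$, a discrete "energy" spectrum, and eigenvectors that are superpositions of all the ontological states.

```python
import numpy as np

N = 8
U = np.zeros((N, N))
for i in range(N):
    U[(i + 1) % N, i] = 1.0              # deterministic law: state i -> state i+1

eigvals, eigvecs = np.linalg.eig(U)
print(np.sort(np.angle(eigvals)))        # phases 2*pi*k/N: an "energy" spectrum
# Each eigenvector spreads equally over all ontological basis states
# (every entry has magnitude 1/sqrt(8)), yet it merely re-expresses
# the same deterministic permutation dynamics.
print(np.round(np.abs(eigvecs[:, 0]), 3))
```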
We now assume that the vacuum state is represented by states of the automaton that show different statistical abundances and correlations than the states representing densely packed atoms and molecules. Locally, the statistical differences between these states may be minute; our ability to distinguish the vacuum state from the rocky material may be far from perfect: say that, inside a small volume of a mm$^3$, the likelihoods of a given state representing vacuum rather than rock stand in a ratio $(1-\varepsilon)/(1+\varepsilon)$. For the whole planet, we have to raise this ratio to a power equal to the volume of the planet measured in mm$^3$. Thus one finds, almost with certainty, that there is a planet rather than vacuum in that neighborhood. The planet is a classical object. What we have just found is that such classical objects are bound to be sufficiently well identified and characterized in terms of the original states of the automaton. Let us assume that this holds for all objects we normally call "classical," not necessarily as large as planets. When we do a measurement or make an observation, we must be looking at a large subset of the classical states of the automaton. Now consider a quantum experiment. We cannot use the entire Hilbert space, because it contains far too many states. So we use the strongly reduced subspace of Hilbert space that represents only low-energy particles. All these states are superpositions of cellular automaton states. Specifying our initial state $|\psi_{\rm init}\rangle$ as well as we can, we still represent it as a superposition of ontological states $|{\rm ont}\rangle_i$:

$$|\psi_{\rm init}\rangle = \sum_i \alpha_i\,|{\rm ont,\ init}\rangle_i\,;\qquad \sum_i |\alpha_i|^2 = 1. \qquad {\rm (A1)}$$

At this point we merely need to define that $|\alpha_i|^2$ represents the probability that the ontological state $|{\rm ont}\rangle_i$ is our initial state. From the mathematics of linear representation theory, it would be hard to deduce any other link between probabilities and amplitudes than this one. In any case, as we shall see in what follows, what holds for the initial state continues to hold for all states arrived at at later times. So let us consider the evolution of this state. Our mathematical procedures for the decomposition of our state vectors never affected the physical evolution law for the ontological states. This means that, as long as we use linear Schrödinger equations, relation (A1) continues to hold at later times as well, up to the final state:

$$|\psi_{\rm final}\rangle = \sum_i \alpha_i\,|{\rm ont,\ final}\rangle_i\,;\qquad \sum_i |\alpha_i|^2 = 1. \qquad {\rm (A2)}$$

Note that the basis of states will have changed, but the superposition coefficients $\alpha_i$ have stayed exactly the same, and hence the probabilities have stayed the same as well (footnote 8). And now consider the measurement. We compare the final superimposed state with the ontological states the system should end up in. They are again the ontological states $|{\rm ont, final}\rangle_i$ of Equation (A2). Now the $|\alpha_i|^2$ are finally recognized as representing the probabilities for the final states. Born's probability rule is a simple consequence of the mathematical representation theory. The answer to the question where Born's probability rule comes from is that, if we put it in for the initial state, Born's rule stays the same during the entire evolution. Note now that, if we started with one single ontological state $|{\rm ont, init}\rangle_1$, then the final state will automatically also be a single ontological state $|{\rm ont, final}\rangle_1$. This continues to be true if we use the Schrödinger equation to describe the evolution.
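The point of footnote 8, that the coefficients, and hence the Born weights, are carried along unchanged as long as the evolution merely permutes ontological states, is easy to check numerically. A minimal sketch (my own toy construction, with a random permutation standing in for the ontological law):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
perm = rng.permutation(N)                # deterministic ontological evolution law
U = np.zeros((N, N))
U[perm, np.arange(N)] = 1.0              # ontological unitary (permutation) matrix

alpha = rng.normal(size=N) + 1j * rng.normal(size=N)
alpha /= np.linalg.norm(alpha)           # initial superposition, sum |alpha_i|^2 = 1

psi = alpha.copy()
for _ in range(100):                     # a Schroedinger-type linear evolution
    psi = U @ psi

# The multiset of Born weights |alpha_i|^2 is unchanged; a single ontological
# state (one coefficient equal to 1) therefore stays a single ontological state.
assert np.allclose(np.sort(np.abs(psi)**2), np.sort(np.abs(alpha)**2))
```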
Consequently, the Schrödinger equation will automatically cause the final state to collapse into a single ontological state if the initial state was a single ontological state. The reason why this appears not to happen in ordinary quantum mechanics is that we do not use the full Schrödinger equation for all states, but only for the low-energy states where the equation is known, and we idealized the initial state, involuntarily replacing the ontological initial state by a superposition, hence by a probabilistic distribution of initial ontological states. It is often claimed that quantum probabilities should be seen as fundamentally different from the classical uncertainties that are due to lack of knowledge of the initial state; in our approach, however, the quantum probabilities are there for exactly the same reason as in classical theories. Now consider the EPR/Bell experiment. We do not explicitly construct a microscopic, classical model for all Standard Model interactions. Although general strategies for such a construction have been proposed, it is still too difficult to reproduce all symmetries of Nature. We do claim, however, that any contradiction with the Bell/CHSH inequalities has disappeared. When Alice and Bob perform their observation, they cannot select a superposition of photon states, but only one ontological photon. The outcome of Alice's measurement is always an ontological state of the form $|a, A\rangle_{\rm ont}$, where $a$ is the setting chosen, given by an angle, and $A = \pm 1$ is her finding. Together with Bob's finding, the final, classical state is $|a, A, b, B\rangle_{\rm ont}$. In our model, the calculation gives a superposition,

$$|\psi_{\rm final}\rangle = \alpha_1\,|a,+,\,b,+\rangle_{\rm ont} + \alpha_2\,|a,+,\,b,-\rangle_{\rm ont} + \alpha_3\,|a,-,\,b,+\rangle_{\rm ont} + \alpha_4\,|a,-,\,b,-\rangle_{\rm ont}\,. \qquad {\rm (A3)}$$

The observed outcome is never a superposed state of the form (A2) or (A3), but always one specific ontological state, $|{\rm ont, final}\rangle_1$. The model calculation gives an entangled superposition of the ontological state $|a, b\rangle$ combined (multiplied) with a superposition of the four states $|+,+\rangle$, $|+,-\rangle$, $|-,+\rangle$, and $|-,-\rangle$. If we modify the initial state, the calculated final state will be a different entangled superposition, but the ontological state will be in the basis of the angles $a, b$ and the measurements $A$ and $B$. Modifying the initial ontological state will always lead to a single final ontological state, never a superposition, since the coefficients $\alpha_i$ never change. What was misleading in Bell's exposition of the experiment is that he thought that a modification of the settings $a$ and $b$ would lead to a different superposition of the measurements $A = \pm$ and $B = \pm$. In our vector representation, any modification of $a$ and $b$, no matter how tiny, requires a modification of the initial ontological state. The new ontological state will be orthogonal, hence totally unrelated, to the previous one, so that the two photons emitted by the source cannot be related to the photons emitted previously. Thus, the idea that one can modify the settings $(a, b)$ without modifying the polarization of the entangled photons emitted by the source is an illusion. One can also say that the settings $a$ and $b$ turn out to be entangled with the polarized photons. As soon as the settings are fixed, the photons will be in only a single ontological state. I will not push this description too far, because in the end we should have just a single setting and a single ontological photon state.
The most important difference between our presentation and the usual treatment of Bell's observations is that the observers Alice and Bob, together with the settings $a$ and $b$ chosen by them, are parts of the physical system. Any modification of the settings $(a, b)$, whether made out of "free will" or otherwise, requires a different initial ontological state.

B. Causality and the Arrow of Time in Special Relativity

Within the CA interpretation of quantum mechanics, special relativity is difficult to handle, since the Lorentz group and the Poincaré group are notoriously difficult to implement: these groups are not compact. It is quite conceivable that Poincaré transformations link ontological states not to other ontological states, but to superpositions of ontological states. Yet the presence or absence of symmetries should not be our immediate concern. We may, for instance, assume that only the homogeneous part of the Lorentz group is a genuine symmetry at the ontological level, or possibly an approximate symmetry. The more important feature of special relativity is that it puts a limit on the propagation speed of signals. This is quite easy to impose on CA models or theories: we just assume that, at the beat of our clock, the contents of a given cell of our automaton can only be passed on to a neighboring cell. Signals then can never propagate faster than the speed of this process. Outside the associated light cone, the validity of "No Bell Telephone," Equation (3.1), is then guaranteed. As we stated in section 3, this is the only acceptable causality condition for physical models, classical as well as quantum. It implies that the time ordering is only a partial ordering: for space-like separated events the time ordering is irrelevant. The arrow of time is defined as the order in which the equations of our models (classical, quantum, cellular automaton, or continuum field theories) are to be applied in our model simulations. Thus, relativistic theories have an arrow of time as much as non-relativistic ones. As we emphasized in section 1, the fundamental definition of time, as well as its arrow, can only be applied to our models of Nature, not to the physical data themselves. This also holds for the concept of causality. The difficulty of imposing Lorentz and Poincaré symmetry on CA models persists when time reversibility is broken at the ontological level, but models in which the propagation speed of information is limited can easily be extended to be non-reversible in time as well; this happens almost automatically.

C. The GHZ Paradox and the 6-Bit Universe

There are many newer versions, generalizations, and refinements of the original Gedanken experiments considered by EPR and Bell. Sometimes the paradoxes concern not only probabilities but even certainties where clashes with "classical" physics are seen to occur; they all have in common that one or more observers choose between two or more different settings that measure properties of quantum objects whose operators do not commute. An interesting case, where the magic mystery seems to reach new heights, is the GHZ paradox. We briefly recapitulate the setup, which is explained in more detail in the literature [7, 8]. A source is constructed such that it emits three entangled particles, each having two possible spin states, $\pm 1$. The quantum state produced is

$$|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(\,|+,+,+\rangle - |-,-,-\rangle\,\big). \qquad {\rm (C1)}$$
The operators to be considered are $\sigma_{x,y}^{a,b,c}$, where $a$, $b$ and $c$ refer to the three particles, and

$$\sigma_x^a\,|\pm\rangle = |\mp\rangle\,,\qquad \sigma_y^a\,|\pm\rangle = \pm i\,|\mp\rangle\,, \qquad {\rm (C2)}$$

while $\sigma_{x,y}^b$ and $\sigma_{x,y}^c$ act similarly on particle $b$ and on particle $c$, respectively. It is not difficult to derive that these operators obey

$$\begin{aligned} XXX:&\quad \sigma_x^a\,\sigma_x^b\,\sigma_x^c = -1\,,\\ XYY:&\quad \sigma_x^a\,\sigma_y^b\,\sigma_y^c = 1\,,\\ YXY:&\quad \sigma_y^a\,\sigma_x^b\,\sigma_y^c = 1\,,\\ YYX:&\quad \sigma_y^a\,\sigma_y^b\,\sigma_x^c = 1\,. \end{aligned} \qquad {\rm (C3)}$$

The Pauli matrices $\sigma_i$ acting on the same particle anti-commute, $\sigma_x^a\,\sigma_y^a = -\sigma_y^a\,\sigma_x^a$, while two Pauli matrices acting on different particles commute. Thus, if we permute two pairs of σ operators in Equation (C3), two minus signs emerge, from which one easily derives that all four operators in Equation (C3) commute with one another. Therefore, all operators in Equation (C3) can be measured simultaneously, and the result always obeys (C3). Now, the three particles are sent to three different observers, who sit in three different, sealed rooms. Each observer decides, of his or her own "free will," to measure either $X = \sigma_x$ or $Y = \sigma_y$. The observers cannot communicate with each other, so they do not know what the others choose. They just meticulously write down whether they measured X or Y, and what their outcome was, +1 or −1. After having done a long series of measurements, they come out of their rooms and compare notes. Each observer, on average, found as many pluses as minuses, because the expectation values of $X = \sigma_x$ and $Y = \sigma_y$ are zero. Also, there is no pair correlation, since for every pair the expectation values of XX, XY, YX and YY are also zero. But the three observations are correlated: the three-point correlations, given in Equation (C3), are very strong. Moreover, they seem to contradict classical logic. The list of observations will obey (C3). But at every run one might have asked: what would this observer have found had (s)he chosen the other setting? Or, more generally: given a particle entering the room, what would the outcome have been in either case, measuring X or measuring Y? So we add to the list of observations, at each run, all possible answers: XXX, XXY, ⋯, YYY. Now take the last three equations of (C3) and take their product. Since every Y occurs exactly twice in the product, the Ys together always contribute +1. What is left is the three Xs. So we get XXX = +1. But this is wrong: it violates the first equation of (C3). One must conclude that the three entangled particles know, ahead of time, whether their observers will choose X or Y. Apparently, the observations that were not actually made do not have well-defined values for X or Y at all; these are called counterfactual. Quantum mechanics forbids counterfactual observations. How can this happen in a cellular automaton? In this case, vector space analysis suggests that a simple model of the entire universe can be constructed. There are just 6 binary dynamical variables in this universe. A priori, this universe could have started in any of $2^6 = 64$ distinct initial states. Like our real universe, this model universe may have started out with a big bang. At that moment, not all possible states were realized: only 48 of the 64 initial states were allowed. During a period of chaos, the 48 states may have been scrambled many times, but there are 16 states that cannot be realized at any time. This is how the laws of nature of the model universe are programmed. At the beginning of the experiment, three particles are selected; these are three of the 6 bits. All of them can be +1 or −1.
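The four identities of (C3), including the crucial minus sign in the XXX line, can be verified directly with the Pauli matrices. Here is a short check (my own sketch, using the conventions of Equation (C2)):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)     # sigma_x: |+> <-> |->
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)  # sigma_y: |+-> -> +-i |-+>

up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)

psi = (kron3(up, up, up) - kron3(dn, dn, dn)) / np.sqrt(2)   # the state (C1)

for ops, expected in [((X, X, X), -1), ((X, Y, Y), +1),
                      ((Y, X, Y), +1), ((Y, Y, X), +1)]:
    value = np.vdot(psi, kron3(*ops) @ psi)       # expectation value in psi
    assert np.isclose(value.real, expected)       # reproduces Equation (C3)
```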
Now we have the three observers, A, B, and C. Each of them has to decide whether to choose X or Y. They each grab the one bit they can find in their room. That bit represents their free will. It can be anything, but its properties are determined by the laws of nature. Each observer knows that the probability for this bit to be +1 or −1 is the same, so the observers will be convinced that they are acting out of free will. There are $2^3 = 8$ possible settings in the sequence XXX, YXX, ⋯, YYY. For 4 of these (those where the number of Ys is even), there is a constraint: only 4 of the $2^3 = 8$ possible answers are allowed. Therefore, $4 \times 4 = 16$ outcomes are forbidden. This is what the laws of nature tell you here: of all ontological states, 16 are forbidden. Thus, we claim that classical laws of nature in the 6-bit universe can perfectly well reproduce the GHZ "miracle," but we must accept that the observer's free will is controlled by the laws of nature as much as all other phenomena. Of course, quantum physicists object that this is unfair: "you have used 'retro-causality' to establish your laws of nature." Well, the view presented in the main body of this paper is that the laws of nature are usually time-reversal invariant, which means that if a complete state of the universe is known at present, it also limits the allowed states in the past; that is where our constraints come from. We simply cannot expect "perfect" free will in our universe. Maybe you think this is "conspiracy." So be it, but the laws of nature in our approach are foremost classical.

Keywords: arrow of time, quantum mechanics, time, 6-bit universe, information loss, GHZ paradox

Citation: 't Hooft G (2018) Time, the Arrow of Time, and Quantum Mechanics. Front. Phys. 6:81. doi: 10.3389/fphy.2018.00081
Carl E. Mungan, Professor

Physics Scholarship

I wrote the following documents as a result of questions related to courses I have taught or interesting issues discussed on the PHYS-L email list. Most are in PDF; a couple are in HTML. Comments and corrections are invited. Occasionally I refer to specific textbooks or acknowledge my sources in a highly abbreviated fashion; no slight or plagiarism should be inferred by any omissions--you should instead assume I am merely being lazy about credit. Within each category, they are listed in reverse chronological order of writing.

• Four Conditions that a Physically Meaningful Spatial Wavefunction must Satisfy  A wavefunction must be both normalizable and smooth. Each of these requirements leads to a pair of conditions that restricts the acceptable solutions of the Schrödinger equation so that we know what kinds of general wavefunction forms to write down in each region of space.
• Rotating Disk Puzzle  A disk rolls without slipping around another disk of different size. How many rotations does the first disk make until two points on the disks that were initially coincident come back into contact with each other?
• Area of an Ellipse in Polar Coordinates  To describe an ellipse in polar coordinates and integrate its area, it is helpful to introduce the eccentric anomaly.
• Phase and Group Velocity of Matter Waves  A discussion of various calculations of the phase and group speeds for a monoenergetic beam of particles in free space.
• Uniqueness of Brachistochrone Solution  A subtle but elegant proof that a cycloid is the unique analytic functional shape of track that has minimum descent time between given initial and final points along a frictionless track starting from rest.
• Fastest Descent along a Ramp and Horizontal Track  A particle slides frictionlessly starting from rest down a ramp and then along a horizontal track. For fixed vertical and horizontal distances between the starting and ending points, what ramp angle minimizes the total travel time of the particle?
• Pressure Exerted by a Rotating Cylinder of Fluid  A cylindrical can of water is rigidly rotating about its vertical axis of symmetry, such that water makes contact with the top surface of the can. What pressure does the water exert on that surface?
• Inertia Ball  A weight is hung from a fixed support by a light string. An identical string hangs from the bottom of the weight. If you pull slowly on the lower string, the upper string breaks first. But if you jerk the lower string, it breaks first. I analyze this well-known demo both analytically and numerically using Hooke's law, Newton's second law, and kinematics.
• Polar Form of an Ellipse  Algebra is used to derive the polar form of an ellipse, the relation between the semilatus rectum and angular momentum, and the construction of an ellipse by looping a string around two thumbtacks and a pencil moving in such a way as to keep the string taut. Other than definitions, the only needed ingredients are the rectangular form of an ellipse, and conservation of angular momentum and mechanical energy.
• Magic Newton's Cradle  Three balls are arranged so they make 1D elastic collisions.
If the balls have relative masses of 1, 0.236, and 1 in order, then sending the first ball in to impact the others at rest will eventually lead to the third ball coming out with all the initial momentum, after the middle ball has bounced back and forth making four collisions with the two end balls. (A numerical check of this claim appears further below.)
• Velocity of an Initially Stationary Target after a Projectile Impacts it Head On  The final velocity of an initially stationary target that is impacted head on by a projectile varies linearly with the coefficient of restitution.
• The Twirling Rope  A rope hanging from one point is set into rotation about that point. A nonlinear second order differential equation is derived that describes the equilibrium shape of the rope. It is solved numerically for one example set of parameters.
• Floating Cork in an Accelerating Elevator  Does a cork float higher or lower in a beaker of water when the elevator is accelerating upward compared to when the elevator is stationary?
• Direct Harmonic Balance  Several different published methods to find the frequency of oscillation of a nonlinear oscillator are compared for five different example problems.
• The Marble Loop-the-Loop on Angle Track  A marble rolls without slipping on a 90-degree angle bracket bent into a loop-the-loop. From what minimum height must the ball start to just make it around?
• Gravitational Force due to a Spherical Shell  An elegant but subtle proof that the field outside a uniform shell is the same as that of a point mass at its center is summarized.
• Rolling Cylinders Race Down an Inclined Plane  A solid cylinder, a hollow cylinder, and an iron cylinder bonded to a cylindrical wooden sleeve (shaped like an optical fiber preform) race down an incline starting from rest. What is the order of arrival of the objects at the bottom?
• Oscillatory Motion with and without Damping and Driving  I review the familiar solutions for undriven undamped oscillations, undriven underdamped oscillations, and sinusoidally driven damped oscillations in steady state. Only basic differentiation and trigonometry are used to verify that these solutions satisfy Newton's second law. Graphs of the frequency-dependent amplitude and phase difference are included for the third (resonant) case.
• Model of a Viscoelastic Solid  A spring in series with a dashpot is modeled and shown to be in good agreement with experimental data for the bounce of a steel ball off a cylinder of silly putty.
• Review of the Brachistochrone Problem  I review the derivation of some key formulae for the brachistochrone problem, including the Beltrami identity, the parametric solution, the descent time and arclength, and the relation to the isochrone problem.
• Perfectly Inelastic Collision Between a Disk and a Stick  A disk strikes a stick at an acute angle (relative to its axis) between 0 and 90 degrees. If the collision is perfectly inelastic, a different fraction of the system's kinetic energy is lost if the disk adheres to the stick than if it does not. This example illustrates that one should NOT define "perfectly inelastic" to imply the colliding objects stick together.
• Shooting at a Constant-Velocity Target  As in a video game, you can fire constant-speed bullets at a constant-velocity target. Knowing the velocity and initial position of the target, and the speed of the bullets, at what angle should you fire and how long will it take until you get a hit? Assume the bullets move faster than the target, so as to guarantee a hit.
• Number of Galaxies in the Universe  A simple but accurate back-of-the-envelope estimate of the number of galaxies in the universe is found based on its age and on the typical number of stars in a galaxy. The key assumption is that the visible mass of the universe is just enough that it is flat.
• Derivation of Kepler's Third Law and the Energy Equation for an Elliptical Orbit  Introductory textbooks usually derive Kepler's third law and the total mechanical energy of a satellite orbiting a much heavier body only for the special case of circular orbits. However, using algebra alone, one can obtain the generalized expressions valid for an elliptical orbit.
• The Lagrangian Method in the Introductory Course  One can derive Newton's second law starting from conservation of mechanical energy. I call this the "Lagrangian method" because it is equivalent to solving the Lagrange equation. It is a nice method for solving problems that involve forces of constraint.
• A Semiclassical Derivation of Eigenenergies  Using de Broglie's relation and standing wave conditions, together with classical mechanics, one can easily derive the energy levels for a particle in a box, a hydrogen atom, and a simple harmonic oscillator.
• Transformation Equation for Center-of-Mass Work  The equation is derived that relates work computed in an inertial frame to that computed in some other frame that translationally accelerates relative to it. An important special case is the transformation between two different inertial frames, in which case the impulse is frame invariant but the work in general is not.
• Rolling Stack of Cans  Three cans are stacked in a horizontal triangular pile and released from rest. Ignoring friction, what is the speed of each can as the top one strikes the table?
• Maximum Bob Height of an Interrupted Pendulum  The swing of a simple plane pendulum is interrupted by a peg around which the string bends. What is the position of the highest point of the swing as a function of the height of release of the bob, for a given peg position? This is a nice review exercise combining concepts from 2D kinematics, centripetal force, and conservation of mechanical energy.
• Speed and Amplitude of a Tsunami  Some rough estimates using Newton's second law and the equation of continuity enable one to derive the well-known result that a shallow water wave has a speed equal to the square root of the product of the acceleration of gravity and the water depth. One can use this to predict that the wave pulses will get narrower and taller as they approach shore.
• Coriolis Correction to Freefall  If a rock is dropped down a mineshaft on Earth, its flight path gets deflected from a plumb bob hanging from the initial location due to the Coriolis force. One can calculate exactly the easterly and southerly deflections (in the northern hemisphere).
• Wave Pulse on a Hanging Rope  A heavy hanging rope is given a brief shake. How long does it take for the resulting pulse to travel up and down the rope? An approximate solution is discussed for both traveling and standing waves set up on the rope.
• Perpendicular-Axis Theorem for Volumes  The perpendicular-axis theorem is generalized from laminae to volumes. The result has applications to the computation of moments of inertia of spherically symmetric objects and to the calculation of moments about rotated coordinate axes.
• Kepler's Equation  A satellite is in an elliptical orbit about a planet.
Given the eccentricity and period of the orbit, find the relationship between the angular position and the transit time of the satellite, both measured starting from the position of closest approach between the satellite and planet.
• Jumping Frog  What is the minimum speed a frog needs to jump over a log of circular cross section, if it can leave the ground from any point it likes? (Hint: The frog will NOT brush the top of the log.)
• Evaluating a Common Integral  I discuss the pros and cons of two different methods of solving the second integral of the Euler-Lagrange equation that arises in the well-known soap-bubble problem. The curve of revolution is a catenary (hyperbolic cosine).
• Synchronous Orbit of a Satellite about a Binary Star  A lightweight satellite synchronously orbits an equal-mass binary star system. The satellite and the stars are in circular orbits about their common center of mass. What are the possible positions of the satellite whereby this can occur?
• Potential Energy of Stretched Springs  In this paper, I discuss the conditions under which the elastic potential energy of a set of stretched springs can be calculated from the displacements of the ends of each spring from their equilibrium (as opposed to their relaxed) positions. I consider both longitudinal and transverse displacements. Being able to confidently write down this potential energy is crucial to the solution of coupled oscillator problems.
• Power to Create a Water Jet  How much power is required to launch a vertical jet of water with given base radius to some specified maximum height? This is problem 8.91 in Giancoli and is a nice exercise in the application of the concepts of energy and force.
• Maximum Range of a Projectile Launched from a Height  It is well known that a surface-to-surface projectile has maximum range equal to the initial speed squared divided by the freefall acceleration when launched at 45 degrees. What are the maximum range and optimum launch angle when the projectile instead starts above the landing surface?
• Theory of Holonomic Constraints  This is a half-page summary of how to find the equations of motion and the generalized constraint forces for a system subject to holonomic constraints using the Lagrange equations.
• Box Pulled on a Rough Surface by a Winch  A box on a rough horizontal surface is being pulled from above by a winch being cranked at a constant angular speed. Find the acceleration of the box and the tension in the pulling cable as a function of the horizontal distance to the winch.
• Newton's Laws  This document is intended to guide a one-lecture introduction to Newton's three laws of motion in a first course for majors. I try to be a bit more careful than typical textbooks about the definitions of force, mass, and reference frame, yet without getting mired in conceptual quicksand.
• Formal Derivation of Centripetal Acceleration  This is in effect a simplified version of the document below entitled "Acceleration Components in 2D" for the special case of UCM. It is suitable as a derivation of the standard formula for centripetal acceleration in a calculus-based course early in the semester before rotational motion or polar unit vectors have been properly introduced.
• Constant Acceleration in Special Relativity  An object experiences a constant acceleration for an extended period of time. Decide what this statement means and then calculate its relativistic speed at the end of the time interval if it started from rest.
Consider both the cases where the time interval is measured in the lab frame and in the object's proper frame.
• Rolling Friction of a Free Wheel  A free wheel which is perfectly round and rolls on a rigidly flat surface cannot experience any contact friction. In reality, both the wheel and road deform slightly. It is then possible to introduce rolling friction with its own coefficient to simultaneously account for both the translational and rotational deceleration. Such deformations are not needed for a driven or braked wheel, however.
• Rotating Space Station  If you drop a ball while standing in a space station which is rotating to provide artificial gravity, does the ball land at your feet? For that matter, if you drop a ball from a tall tower on Earth, does the ball land at the base of the tower?
• The Black Hole Shredder  An ordinary brick falls into a black hole. Will the large tidal stresses tear the brick apart before it winks out of view behind the event horizon?
• Multiple Strings and Pulleys  In order to apply Newton's second law to problems involving multiple pulleys, blocks, and strings, it is necessary to find equations relating the accelerations of the various masses to each other. I give an example of how to do this for a quasi-1D problem; the idea is to express the positions of the masses in terms of the fixed lengths of the strings and then differentiate twice.
• Acceleration Components in 2D  Four adjectives are commonly used when discussing acceleration in plane polar coordinates: centripetal, radial, tangential, and azimuthal. Most introductory textbooks only use some of these; others use all of them but without clearly explaining the differences. In fact, none of them are synonyms and each is packed with content. In this handout I explain all four terms both qualitatively and mathematically; a brief appendix considers two possible definitions of the word "tangential."
• Displacement and Pressure Amplitudes of Sound  The pressure oscillations in a longitudinal sound wave are in phase with the velocity oscillations of the fluid molecules. I summarize three derivations of this relation including the constants in the proportionality. This is a useful exercise in distinguishing the wave and molecular speeds.
• Kinetic Energy of a Rigid Body  Under what circumstances should the KE of a rigid body be calculated using the translational formula, the rotational formula, or the sum of the two? I remind you of how to prove the fact that, when properly applied, all three choices give the same answer. A couple of practice exercises from Serway are suggested to illuminate this.
• Formulae for Collision Problems  This handout briefly summarizes the formulae for the three standard kinds of collisions (elastic and perfectly or imperfectly inelastic) in both 1D and 2D. This is not intended to be memorized or carried into exams, but instead to help a student find the bottom line in the morass of equations thrown at them in the momentum chapter of typical intro textbooks.
• Local Vertical on Earth  I review the solution of the problem of finding the direction of a freely hanging plumb bob on a spherical rotating planet, correcting an apparent typo in Arnold Arons' book.
• Small-Angle Oscillations of an Arc  Here is a lovely problem for introductory mechanics. Find the period of SHM of the physical pendulum consisting of a uniform circular arc balanced at its midpoint on a knife edge.
Remarkably, the answer is independent not only of its mass but also of what fraction of a complete circle it happens to be!
• Two Rotational Equilibrium Problems  1. A ladder of uniform mass density on a rough floor leans against a rough wall. What is the minimum angle the ladder can make with the floor and not slip? A number of textbooks state this problem is indeterminate. I leave it to the reader to decide if my solution is flawed. 2. Two balls (not necessarily identical) are stacked inside a hollow cylinder. Under certain conditions, the cylinder topples over. What are these conditions on the masses and radii of the balls?
• Falling Ball Puzzle  A ball is launched horizontally into a semi-cylindrical depression, makes a perfectly elastic collision with it, and is observed to rise straight up. How high will the ball rise above the lip of the depression? Your answer is to be expressed in terms of the radius of the depression only. This is a good test of how organized you are at problem-solving.
• Talking like Donald Duck  Why does your voice sound high pitched when you inhale helium gas? One sometimes hears it said that vocal cords are like tuning forks. Unfortunately the frequency of a tuning fork is independent of the gaseous medium it is in--think for example of the usual introductory lab where such a fork is struck over top of a column of variable length.
• Mass on a Vertical Spring  When simple harmonic motion is discussed in introductory textbooks, the mass connected to the spring usually slides on a frictionless horizontal surface. Unfortunately, it is pretty hard to demonstrate that in the real world, so we arrange things vertically. Then we assign some homework problems where the mass hangs vertically. But um professor sir, doesn't gravity mess up the force and potential energy considerations when you do this?
• Momentum Carried by Mechanical Waves  This is a thread from PHYS-L discussing whether mechanical waves carry net momentum. Five articles from AJP are cited which I strongly recommend. It is interesting to see how much disagreement there is on such a topic among physics educators and professionals.
• Classical Doppler Effect  I derive and summarize the classical Doppler shifts in the wavelength, frequency, and wave speed for three special cases: only the source moving, only the observer moving, and only the medium moving. In each case, one of the three wave parameters is unchanged. Finally, I put the three effects together to derive the shifts when everything is moving.
• Completely Inelastic Collisions  Prove that a maximum amount of kinetic energy is lost in a completely inelastic collision between two point masses. Use only high school algebra, conservation of linear momentum, and the definition of kinetic energy to do it in 1D. Then use partial differentiation to do it for the fully general case of 3D.
• Elastic Collisions in 1D  Many students have trouble remembering the quadratic formula. Hey, I often have the same problem and besides it usually creates a computational mess and having to choose between two solutions is a nuisance. Here is a neat trick for solving one-dimensional elastic collisions to get general algebraic expressions for the two final velocities.
• Two Circuits of Switches and Bulbs  First I consider an N-way switching circuit for a light bulb. Understanding this circuit could help troubleshoot wiring problems in your house.
Second I present a sneaky series circuit involving two switches and two bulbs where each switch independently controls one of the bulbs. The magic behind this circuit was demonstrated to me by Hans Pfister at Dickinson College.
• Number of Independent Kirchhoff Equations  I present a method for counting the number of independent Kirchhoff Current Junction and Voltage Loop Rule equations by inspection of a circuit. A trick is required if some wires cross over other wires without electrical contact.
• Equivalent Resistance of a Wheatstone Bridge  An excellent illustration of the method of nodal potentials for circuit analysis is to find the equivalent resistance of a Wheatstone bridge of resistors.
• A Surprising Circuit Symmetry  A problem in Halliday, Resnick, and Walker involves showing that an ammeter reading does not change when the ammeter and battery in a circuit are interchanged. I present three different solutions of this surprising situation.
• Charged Parallel Metal Plates  Arbitrary charges are placed on two large parallel metal plates. Find the electric fields everywhere in space. (For specificity, suppose +2 C is put on the left plate and -3 C on the right plate.) Spell out all physical assumptions needed in your solution.
• Electric Field of a Uniformly Charged Straight Rod  There is a simple identity that makes it easy to calculate the field of a uniform rod at an arbitrary point in space. After deriving the general result, I specialize it to various cases of interest, including semi-infinite and infinite rods, and a point on the axis of a finite rod beyond one of its ends.
• Surfaces of Zero Potential Around a Quadrupole  The two cones surrounding a zz quadrupole on which the electrostatic potential is zero are derived and graphed. The half-angle of the cones relative to the z axis is 55 degrees, but they deviate near the quadrupole so that they do not intersect, passing between the positive and negative charges of each opposing dipole pair making up the linear quadrupole.
• Inductance of a Flat Circular Coil  The self-inductance of a flat circular coil from PASCO is estimated using the Biot-Savart law, with the two integrals computed in Mathematica. The results are in fair agreement with an experimental measurement of the actual inductance.
• Induced Electric Field for a Solenoid of Uniformly Increasing Current  The nonconservative electric field induced everywhere in space for ideal solenoids of both circular and square cross section is calculated when the current in the windings is increased at a constant rate. Contour lines and field lines are instructively plotted.
• Relation between the Ampere-Maxwell and Biot-Savart Laws  For a charge density in uniform motion, the Ampere-Maxwell and Biot-Savart laws can be shown to be equivalent, in analogy to the fact that Gauss's and Coulomb's laws are equivalent in electrostatics.
• Magnetic Field of a Current Loop in the Plane of the Loop  The magnetic field in the plane of a current loop is calculated by numerically integrating the Biot-Savart law. The results are graphically compared to the standard results at the center of the loop and at large distances from the loop.
• Four Derivations of Motional EMF  As most instructors of E&M know, the familiar example of a conductor moving in a magnetic field is a rich playground for exploring forces, electromagnetic fields, and energy. In this document, I have taken the various bits and pieces and tied them together into one continuous story suitable for a standard classroom lecture.
• Inductance Calculations  A recent paper in AJP gives a formula for computing the inductance of a device in terms of the self and mutual inductances of the elementary filamentary loops making up the circuit. I show for example how to apply the formula to calculate the inductances of a solenoid and of a coax cable.
• Further Thoughts on the Ideal Solenoid  Here is a second method (simpler than that presented below) of rigorously deriving the magnetic field inside and outside an ideal solenoid of arbitrary cross-sectional shape. I end with a list of AJP references on this topic.
• Time Constant for Charging a Pair of Capacitors  Giancoli P26.47 considers a resistor and capacitor in parallel connected to another parallel RC pair. What is the time constant of this circuit? A straightforward approach is to use Kirchhoff's rules.
• Rewired Capacitors  Tipler problem 25.59 asks one to wire together in series three previously charged capacitors and find the final charges on and voltages across each. This is a nice application of Kirchhoff's rules and a counter-example to the "memoroid" that capacitors in series must have the same charges!
• Double Coil Inductance  Tipler problem 30.54 asks one to calculate the inductance of a series pair of coaxial solenoids carrying the same current in opposite directions. The difficulty is that the two solenoids do not have the same length. In this note, I derive a particularly simple approximation to the solution by neglecting the end effects. For the actual numbers given in the problem, this is not very well justified and some interested party is invited to come up with a next order approximation to the solution.
• Magnetic Moment due to Electron Spin  The magnetic dipole moment of a circulating charge is equal to the product of the gyromagnetic ratio and the angular momentum. The standard introductory derivation of this ratio comes out too small for a spinning electron by a factor of 2 (neglecting radiative corrections due to virtual photons). In this paper, I summarize two approaches that have been previously proposed to account for this factor within the contexts of introductory E & M and QM courses.
• Magnetic Field Outside an Ideal Solenoid  The usual textbook arguments for why the field outside an ideal solenoid is zero are not very convincing. In this handout I directly integrate the Biot-Savart law by interpreting an ideal solenoid as a semi-infinite sheet of current rolled into a cylinder. That is, I solve Griffiths P5.44. The results provide a nice way to think about the issue, analogous to why the electric field inside a cylinder of uniform surface charge is zero.
• Resistivity of Copper  A microscopic model for the resistivity of a metal is developed in order to prove Ohm's law. A classical model using Maxwell-Boltzmann statistics and hard-ion-core collisions correctly gives the field independence and ballpark magnitude. However, more detailed agreement and the linear temperature dependence require using the Fermi-Dirac distribution and scattering only from lattice imperfections. Amazingly, the only material parameter needed in the final result is the atomic number density. Good agreement for the resistivity and temperature coefficient of copper is obtained.
• Two Capacitance Formulae used in Lab  First an expression is derived relating the dielectric constant of some material to three capacitance measurements using an appropriate meter: the capacitance at minimum plate separation (which is nonzero because of the spacers used), the capacitance with the sample in place, and the capacitance with the sample removed but the plate separation unchanged. Secondly a formula is deduced in three different ways for the measured force between two plates as a function of the potential difference applied across them.
• Infinite Square Lattice of Resistors  A well-known problem is the equivalent resistance of a semi-infinite ladder of one-ohm resistors. What is the resistance between two adjacent nodes of a 2D lattice of such resistors? The answer can be easily found using superposition of potentials, although some care is warranted when this idea is examined closely.
• Electrostatic Equilibrium of Two Hanging Spheres  Two pith balls of equal mass but different charges (of the same sign) hang on equal-length threads from a common point. What is the relationship between the two angles they make with the vertical? Suppose we decide to solve this problem by drawing a free-body diagram and invoking Newton's third law and translational equilibrium. If you choose a coordinate system with horizontal and vertical axes, you get an equation which is very hard to solve. However, if you choose a different coordinate system for each sphere, with one axis oriented along the string, the two solutions are easy to find. This is an excellent example to throw out to students who insist on always using horizontal-vertical coordinate systems.
• Hand-Cranked Generator and Capacitor  A Pasco hand-cranked generator is connected to one of those amazingly compact 1-F capacitors. After charging it up briefly, you stop turning the crank and then release your grip on it. The generator now spins as a motor with the capacitor serving as a battery. However, even though the current in the circuit has reversed, the crank continues to turn in the same direction! Isn't this a violation of energy conservation: how does the crank "know" that you took your hand away?
• Charges on Series Capacitors  Introductory textbooks do not properly explain why the charges on (initially discharged) capacitors wired in series (to a low-frequency source) must all be equal. This is not exactly true for real capacitors; some unspecified idealizations have been assumed. In this thread from PHYS-L, the nature of these idealizations is clearly identified.
• Two-Slit Interference Using a Thermal Source of Light  I explain how it is possible to perform two-slit interference with light from the sun or an incandescent light bulb even without passing that radiation through a pinhole first. The key idea is that the difference in pathlengths from a point on the source to the two slits needs to be less than the coherence length, which is on the order of the peak wavelength for a blackbody. This condition is easily satisfied.
• Highway Mirages  A differential equation is obtained for the trajectory of a light ray descending diagonally through air whose index of refraction varies with height. It is applied to two examples in which the index decreases monotonically with decreasing altitude above a hot road. In one case, that causes the ray to eventually hit the critical angle and turn around, giving rise to a mirage of water on the road ahead.
In the second example, however, the ray asymptotically approaches the horizontal without ever reaching it, much less curving upward.
• Single-Slit Diffraction  Approximate and exact values of the angular positions, peak intensities, and integrated areas of the secondary maxima for single-slit Fraunhofer diffraction are compared.
• Location of a Position-Sensitive Detector on a Rotary Plate  A laser beam is incident on a PSD on a rotating platform. By measuring the position of the beam spot on the array detector as a function of the angle of rotation of the plate, one can determine the absolute location of the detector relative to the plate's axis.
• Summary of Thin-Film Interference  I summarize the standard introductory discussion of thin-film interference with a single master formula. This contrasts with the variety of special-case formulae that typical physics textbooks present instead, leading students to misapply them in other situations.
• Phase Change upon Reflection  The phase changes for reflection at normal incidence from an interface between nonmagnetic, nonconducting media are needed in the discussion of thin-film interference. Many introductory textbooks "justify" these phase changes using an analogy to free vs fixed boundary conditions for a mechanical wave reflecting off the end of a string. This is a bit strange considering that Maxwell's equations were usually discussed just a couple of chapters previous. Given these equations, it only takes a simple diagram and a few lines of algebra to formally derive the phase changes. As a bonus, index matching naturally arises as the third possibility when comparing the refractive indices of the two media. The reflectance and transmittance can also be easily introduced in this context.
• Optical Length of a Day  Because of atmospheric refraction, the length of a day is a bit longer than the geometric duration it would have in the absence of this effect. I make a crude estimate of the extra daylight we would get by modeling the atmosphere as a homogeneous spherical shell of air and compare it to the actual value of about 4 minutes.
• Hamiltonian Formulation of Geometric Optics  I review a recent Hamiltonian formulation of optics starting from Snell's law and photon concepts which naturally leads to the concept of group index of refraction. I briefly contrast this with an alternative formulation starting from Fermat's principle.
• Optical Molasses  Some simple numerical values describing laser cooling of atoms are easily calculated, making a nice homework problem in a modern physics course. Specifically, one can estimate the slowing per photon, the net radiative force, the atomic stopping time, the minimum temperature, and the required laser intensity.
• Cone Artist  Place a prism on a sheet of paper and trace its base. Lift off the prism and shade in the resulting rectangle. Call that the object. Now replace the prism but flip it over vertically so that its apex is in contact with and bisects the object. Look through the base. The question is: What is the width of the resulting image? (Hint: There is both an inverted and an uninverted image, depending on the angle at which you look into the base. What is the minimum width of the uninverted image?)
• Light Output of a Three-Way Bulb  Light bulb packages usually state not only the electrical wattage but also the useful light output in lumens. How is the latter related to the former? In this paper, I consider the example of a GE soft white luminescent 50-100-150 W bulb.
• Angle of Minimum Deviation through a Prism  Introductory texts often allude to or draw a diagram of a ray refracting symmetrically through a prism but seldom prove that this occurs at the angle of minimum deviation. In this note, I show this in two ways: first by brute force, and second using an elegant argument based on optical reversibility. I end with a reminder of how a measurement of the angle of minimum deviation can be used to find the index of refraction of a prism.
• Cat's Eye Retroreflector  A lens with a mirror at the focal plane (such as in the eye of a cat) retroreflects light. In this note, the retroreflection efficiency is computed as a function of angle of incidence and numerical aperture for a simple model. The results could be used to estimate how bad "red eye" might be.
• Moon Tans  The ratio of the luminosities of the sun and full moon at zenith is calculated. I take the opportunity presented by this delightful problem to review the basic concepts of radiometry: the bidirectional reflectance distribution function, irradiance and radiance, albedo, and the properties of Lambertian surfaces. Actual lunar data from the Global Ozone Monitoring Experiment are used. I discuss the little-known fact that while the sun is a Lambertian emitter, the moon is not a Lambertian scatterer. The final result is in excellent agreement with the known luminosities and implies that to get a good moon tan, you would have to bask about a million times longer in its light than in that of the sun.
• Rainbows  This is a quick summary of how to derive the angles at which the primary and secondary rainbows for water droplets in air are seen. The key point is that they arise at the angles of minimum deviation. A negative angle for the secondary bow explains its inverted color spectrum. Successive orders of rainbows dominate for droplets whose index of refraction, truncated to an integer, equals the successive integers.
• Parabolic Mirror  Introductory textbooks often mention that non-paraxial rays result in spherical aberrations and that this can be avoided by using a parabolic mirror. It is very simple to prove the validity of this claim using a sketch, as well as to quantify the size of the blur spot for a spherical mirror. The following concepts arise naturally in the discussion: the relation between the focal length and geometrical shape for both a parabolic and a spherical mirror, the directrix for a parabola, and where the non-paraxial rays strike the focal plane for a spherical mirror.
• Radiative Coupling between an Object and its Surroundings  A sample is placed inside an opaque cavity. What is the radiative heat load on the sample? Most textbooks incorrectly state that the answer is independent of the nature of the cavity. In this four-page paper, I review the correct analysis of the situation and apply it to two specific geometries: coaxial cylinders and infinite parallel planes. I end with two practical considerations. First, the fact that Kirchhoff's law for the equality of the absorptivity and emissivity holds not just for the integrated values but also wavelength by wavelength and angle by angle implies that the thermal radiation emitted by a sample is polarized. Second, I derive an expression for the exponential relaxation of a sample to its final temperature when it is optically heated or cooled.
• Faraday Isolators and Kirchhoff's Law  Want a challenging puzzle to chew on?
Consider a Faraday isolator: two linear polarizers whose transmission axes are oriented 45 degrees relative to each other, between which is located a magnetic rotator that rotates the plane of polarization of a beam by 45 degrees in the same direction regardless of the direction of propagation of the light. This constitutes a one-way light valve, used to protect lasers from harmful back-reflections. Now place a sample inside a cavity whose walls are made of this stuff. Light gets out but not back in, right? IF SO, THE SAMPLE WILL RADIATE AWAY ALL ITS ENERGY AND COOL DOWN TO ABSOLUTE ZERO! Save thermodynamics (and the principle of optical reversibility) for us, will you?
• Lensmaker Formula  Any two points on a plane wavefront travel the same optical pathlength when brought to a focal point by a lens. (For example, this arrangement is typically used to attain the Fraunhofer regime for single or multiple slit diffraction.) Using this concept alone, one can derive the laws of lenses such as the lensmaker formula for a thin lens in the paraxial approximation.
• Fresnel Boundary Conditions  Continuity of the tangential components of the electric and magnetic fields (expressed in complex form) across an interface is easily shown to lead to three results: the law of reflection, Snell's law, and continuity of the tangential components of the field amplitudes (from which the Fresnel equations for the reflection and transmission coefficients are derived).
• Additively Combining Two Samples of Helium Gas  If we open a stopcock separating two flasks of a monatomic ideal gas, must the final equilibrated value of any extensive state variable be equal to the sum of the initial equilibrated values for the two samples?
• Computing a Partial Derivative for a Van der Waals Gas  Computing partial derivatives is a common problem in thermodynamics and can be difficult for novices. Here I discuss an interesting example from the literature and present a simple solution to it.
• Thermodynamics of an Open System  For an open system, one cannot simply identify TdS with dQ and μdN with the energy transfer due to particle exchange. A simple example involving a monatomic ideal gas makes the issues clear.
• Entropy of a Classical Ideal Gas of Distinguishable Atoms  The usual textbook formula for the entropy of a classical ideal gas of distinguishable particles is wrong because it violates the second law of thermodynamics. In actuality, the standard Sackur-Tetrode expression applies to either distinguishable or indistinguishable particles.
• Brightness Temperature of a Laser  The brightness temperature of a helium-neon laser is calculated to be on the order of 10 billion kelvin by finding the temperature of a blackbody of the same emitting area that has been filtered spectrally and angularly to match the laser beam.
• Legendre Transforms for Dummies  I review a way to introduce the Legendre transform via partial derivatives. Examples from thermodynamics and classical mechanics are included to illustrate the method.
• Thermal de Broglie Wavelength  I show that the reciprocal cube root of the quantum concentration (appearing in the partition function of an ideal gas) is equal to half of the de Broglie wavelength thermally averaged over the Maxwell distribution of molecular speeds.
• Connecting the Work-Energy Theorem to the First Law of Thermodynamics  Conventional texts develop topics in the order: forces, work, energy, thermodynamics.
Within the context of this particular sequence, how can one bridge from the work-energy theorem in mechanics to the first law in thermodynamics? I compactly summarize one logical approach in a single page, deferring some of the subtleties to footnotes.
• Model for the Atmosphere  A simple atmospheric model is summarized that makes realistic predictions for the temperature, density, and pressure variations with altitude up to about 15 km, without making the ad hoc assumptions common in typical intro physics textbooks.
• Density of States of a Particle in a Box  By differentiating the volume of a hypersphere (as calculated in the Mathematical Physics section), I derive the general formula for the DOS of a particle in a hyperbox of arbitrary dimensionality.
• Density of Air down a Bore Hole  Gases are usually distinguished from condensed media by a factor of about 1000 difference in density. But what if we lived in deep underground caverns rather than at the earth's surface? At what depth would an ideal gas have unit specific gravity? I answer this question both with and without accounting for the change in ambient temperature with depth.
• Adiabatic Expansion of Soda Pop  A small amount of water is placed in the bottom of a corked bottle, which is then pressurized to the point that it blows its top. Under appropriate circumstances, a mist forms in the bottle, as presented for example in Demo 15-04 of the Video Encyclopedia of Physics Demonstrations. In this note, I analyze the pressure and temperature changes to explain why condensation occurs.
• Van der Waals Equation of State  Introductory thermodynamics and chemistry textbooks often "justify" the Van der Waals equation of state with a few sentences of nonsense. Here I try to do a slightly better job of motivating the formula by relating the two constants in the equation in an approximate manner to molecular constants of the gas in question.
• Thermally Induced Agitation of a Simple Pendulum  The equipartition theorem is generalized to handle cases where the energy is proportional to a generalized coordinate raised to any positive power (i.e., not necessarily two). As an example, I show that different choices of coordinates for a simple pendulum (to wit, the angle and angular momentum, the horizontal displacement and linear momentum, or the vertical displacement and linear momentum) all give kT for the average energy. Another nice case is a photon, for which the energy is linearly proportional to the momentum.
• Thermodynamics of a Classical Ideal Gas  The purpose of this paper is to remind you of how to calculate the entropy and chemical potential of a classical ideal gas. That is, it is assumed the gas obeys classical statistics for the translations and rotations. (The vibrations are presumed to be frozen out.) For example, the entropy in the monatomic case is known as the Sackur-Tetrode equation. The chemical potential is related to the average occupancy of available states.
• Internal Energy of a Nuclear Gas  Consider a gas of nucleons in an infinite-square-well potential whose radius is proportional to the cube root of the mass number. An expression for the internal energy per nucleon is found as a function of temperature, by first deriving a formula for the chemical potential. The results clearly highlight the limited validity of this common model for the nuclear potential.
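To give a feel for the nuclear-gas model in the last item, here is a minimal zero-temperature sketch (my own illustration, not the paper's finite-temperature calculation): nucleons treated as a degenerate Fermi gas confined to a sphere of radius R = r0 A^(1/3), with the conventional empirical value r0 ≈ 1.2 fm.

```python
import numpy as np

# Zero-temperature Fermi-gas estimate for nucleons in a sphere of radius
# R = r0 * A**(1/3); an illustrative sketch, not the paper's calculation.
hbar_c, mc2, r0 = 197.327, 939.0, 1.2      # MeV*fm, nucleon rest energy (MeV), fm

n = 3.0 / (4.0 * np.pi * r0**3)            # nucleon density (A cancels), fm^-3
g = 4                                      # spin-isospin degeneracy: p/n, up/down
k_F = (6.0 * np.pi**2 * n / g) ** (1.0/3)  # Fermi wavevector, fm^-1
E_F = (hbar_c * k_F)**2 / (2.0 * mc2)      # nonrelativistic Fermi energy, MeV
print(f"n = {n:.3f} fm^-3, kF = {k_F:.2f} fm^-1, "
      f"EF = {E_F:.1f} MeV, <KE> = {0.6 * E_F:.1f} MeV per nucleon")
```

The familiar textbook scale (k_F on the order of 1.3 fm^-1 and a Fermi energy of a few tens of MeV) drops out of nothing more than the empirical nuclear radius.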
• Important Corrections to Stowe  This is a list of serious flaws in Keith Stowe's otherwise very readable textbook, "Introduction to Statistical Mechanics and Thermodynamics." The list was prepared by Dan Schroeder (who has himself written the introductory book "Thermal Physics") and is divided into three categories: chemical potential, multiplicity of a classical system, and counting polarization states.
• Density of States via the Heisenberg Uncertainty Principle  Stowe gives two different expressions for how the density of states of a system should vary with its internal energy and number of degrees of freedom. Both are wrong. I derive the correct expression by appealing to Heisenberg's Uncertainty Principle. The examples of a particle in a 3D box, and of a simple harmonic oscillator in 1D, 2D, and 3D, are explicitly considered.
• Thermal Processes  A one-page chart summarizing the work, heat, change in internal energy, and pressure-volume graph for the four standard thermal processes applied to ideal gases, as derived in a typical introductory physics course. No calculus is invoked.
• Algebraic Proofs of Two Equations  In this one-page handout I present proofs of two important equations related to thermal processes: the work done during the isothermal expansion of an ideal gas, and the relationship between pressure and volume during the adiabatic expansion of an ideal gas. Both are presented without proof in Sec. 15.5 of Cutnell & Johnson. I use college algebra alone, so that bright non-calculus students can follow it. This is a nice reminder of the power of logarithms and exponentials.
• Glossary of Thermodynamic Terms  A glossary of terms for the typical introductory physics course. This is intended to help students navigate the minefield of familiar words used in very precise ways. Both macroscopic and microscopic definitions are included.
• Parallelogram Numbers Divisible by Eleven  Any four-digit number formed by tracing out a parallelogram on a numeric keypad (such as 5621) is divisible by 11.
• A Representation for any Prime Number  Show that any prime number (greater than or equal to 5) can be written as the square root of 1 plus some integer times 24.
• Tile a Square With Four Nonidentical Isosceles Triangles  Divide a square into four isosceles triangles, none of which are identical to each other.
• My Number is 136  A challenging logic puzzle from the Wall Street Journal involving two numbers whose sum and product are each seen by only one person.
• Multiply by Four and Reverse  What four-digit number when multiplied by four is equal to the original number with its digits written in reverse order? (A brute-force check appears just after this group of items.)
• Postage Stamps Problem  You have two different denominations of stamps. Using various numbers of each kind of stamp, what possible total amounts can you make? In particular, under what circumstances is there an upper limit on the amount you cannot make, and what is a formula for that limiting amount?
• Finding the Integral of an Inverse Function  If you know the integral of a forward function, you can find the integral of its inverse function by integrating by parts.
• Law of Universal Procrastination  NSF records of when grant submissions are made during the open window of acceptance lead to a universal hyperbolic law of procrastination. Notably, about half of all submissions will be on the due day, be it for grant proposals, tax returns, student papers, or what have you.
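Here is the promised brute-force check of the multiply-by-four puzzle: a one-line search over all four-digit candidates.

```python
# Brute-force search for the four-digit number that is reversed when
# multiplied by four (the "Multiply by Four and Reverse" item above).
hits = [n for n in range(1000, 10000) if 4 * n == int(str(n)[::-1])]
print(hits)  # -> [2178], since 4 * 2178 = 8712
```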
• Triangle Bisector Theorem  A derivation of the relationships between the side lengths and angles when one angle of a triangle is bisected by a line.
• Repeating Decimals  A quick proof that any repeating decimal fraction can be expressed in rational form by dividing the repetend by the number whose digits are all 9s and which has as many digits as the repetend.
• Quant Quiz  The Wall Street Journal published five sample questions from a recent math competition on page C13 of its Wednesday 4 March 2015 issue.
• The Problem of a Boy Born on a Tuesday  If you know that at least one of the children in a two-child family is a boy, what is the probability the family has two boys? How about if you know at least one child is a boy born on a Tuesday? How about if you know the older child is a boy? Remarkably, the answers to these three questions are all different. You may wish to compare this problem to the Problem of Two Aces listed below.
• Solution of a Cubic Equation  Cardano's four-step method for finding the algebraic solution of a cubic in terms of its coefficients is compactly outlined.
• Area of an Ellipse  A simple noncalculus derivation of the area of an ellipse is reviewed.
• Probability of Being Chosen in Repeated Independent Trials  There are a bunch of numbered objects in a group. Suppose that the probability that an object's number is selected is a constant p during repeated trials. What are the odds that that number will be selected at least once over the course of n trials?
• Solving M Equations in N Unknowns by Gaussian Elimination  I review the number of solutions that arise for a system of linear equations. The idea is to row reduce the augmented matrix and then inspect for the presence of rows where every entry except possibly the last is zero.
• Why the Jacobian Transforms Variables in an Integral  A quick sketch is presented of how the Jacobian enters into integrals when one transforms multiple variables (such as from rectangular to polar coordinates).
• Problem of Two Aces  You are dealt two cards from a shuffled deck. What is the probability of getting two aces? If you know that one is an ace, what is the probability that the other is an ace? If you know that one is the ace of spades, what is the probability that the other is an ace? Each of these questions has a different answer, clearly demonstrating that the more information you have, the more your odds go up. If you start an argument at a party by asking these questions, DON'T BLAME ME!
• Riemann Zeta of 4 as Needed for Stefan-Boltzmann Law  A quick review of the standard method of evaluating ζ(4) using Parseval's relation for a triangular wave. Knowledge of this result enables one to find the value of the Stefan-Boltzmann constant.
• Small-Argument Expansion of a Polynomial in a Denominator  Four methods are presented for expanding the reciprocal of a polynomial in x as a power series for small x.
• Names of the Trigonometric Functions  I summarize some mnemonics for remembering the values of the trig functions, along with an abbreviated etymology of their names. This article is a short but useful handout for students who cannot remember which function is which.
• Cosines of Common Angles  There is a nice pattern to the values of the cosines of common angles in the first quadrant. A pattern involving square roots and the integers 0 through 4 is found.
• Center of Mass of a Uniform Triangular Plate  One can find the center of mass of a triangular plate using an algebraic scaling argument.
It is easiest to first do a right triangle. Then an oblique triangle can be split into two right triangles, both having positive mass for an acute triangle and one having negative mass for an obtuse triangle.
• Griffiths Form of the Potential for a Semi-Infinite Slot  I derive the summation of the series for the solution of the Laplace equation inside a rectangular slot of height b between two grounded plates and with a constant potential along the edge on the y-axis.
• Extrema & Saddle Points for a Function of Two Variables  A summary of how to find the locations of all of the minima, maxima, and saddle points for a function of two independent variables.
• Modified Taylor Series  The standard 1D Taylor series can be modified to expand f(x+h) or f(g) where h(x) and g(x) are functions of x. As an example, a series expansion for sin(sin(x)) is obtained.
• Chords of a Circle  Draw a circle, choose any point inside the circle, and call it P. Then draw a chord that passes through point P. Point P divides the chord into two lengths. Prove that the product of those two lengths is the same for any chord drawn through point P. As a corollary, derive the equation of a circle, or equivalently the Pythagorean theorem. As another corollary, show that the locus of points describing the perpendicular intersection of two lines passing through two fixed points is a circle.
• Matrix Coefficients for a System of Linear Equations  Consider the matrix equation Ax=d. A standard problem is to find x given A and d. But what is the general solution instead for A, given x and d?
• Coins Puzzle  While blindfolded, you are handed a tray of identical coins. You are told how many of them are heads up. How can you divide them into two groups such that each group has an identical number of heads? You may manipulate the coins any way you like, but have no means of determining which are heads and which are tails. Hint: There is a simple algorithm which works even if the initial number of heads is odd.
• Looped String Puzzle  You hang a loop of string over a bunch of parallel rods. No matter which rod you pull out of the bunch, the loop falls to the ground. How was the string wound around the rods?
• Shaking Hands at a Party  A bunch of couples at a party shake hands. No one shakes his spouse's hand, and every person except one shakes a different number of hands. How many hands did the odd person out shake?
• An Induction Proof for Prime Numbers  Prove that the difference between n raised to the power p and n, where n is any whole number and p is any prime number, is divisible by p. This establishes a famous theorem about primes due to Fermat.
• Equilateral Triangle on the Surface of a Sphere  In this document I derive formulas for the interior angle and area of a triangle with sides of equal length on the surface of a unit sphere. This is a nice exercise showing that Euclidean geometry changes in curved space.
• Integral Representation of the Riemann Zeta Function  In texts such as Arfken, the integral representation of the Riemann zeta function (needed for example in the derivation of the Stefan-Boltzmann law) is obtained by contour integration. Here is a much simpler derivation, accessible to a student who is familiar with the definition of the gamma function.
• Volume of a Hypersphere  The volume of a hypersphere in n dimensions is derived. This is a wonderful exercise in the use of the gamma function and gives an amazingly compact form for the final result.
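Since the hypersphere-volume item closes this group, a quick Monte Carlo sanity check of the compact result V_n(R=1) = π^(n/2)/Γ(n/2+1) may be welcome; the dimensions and sample size below are arbitrary choices.

```python
import numpy as np
from math import gamma, pi

# Monte Carlo check of the unit-hypersphere volume V_n = pi**(n/2)/Gamma(n/2+1):
# sample points uniformly in the enclosing cube [-1,1]^n and count the hits.
rng = np.random.default_rng(0)
for n in (2, 3, 4, 5):
    pts = rng.uniform(-1.0, 1.0, size=(200_000, n))
    inside = np.mean(np.sum(pts**2, axis=1) <= 1.0)   # fraction inside sphere
    print(f"n={n}: Monte Carlo {inside * 2.0**n:.3f} "
          f"vs exact {pi**(n/2) / gamma(n/2 + 1):.3f}")
```

The shrinking hit fraction as n grows also illustrates why the hypersphere occupies a vanishing share of its bounding cube in high dimensions.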
• Irrationality of Square Roots  Can you prove that the square root of 3 is irrational? How about that of 4.1? The proof is amazingly simple, by extension of a well-known proof that the square root of 2 is irrational.
• Frobenius Series Solutions of Bessel's Equation  A derivation of Bessel functions of the first and second kind, together with some student exercises.
• Solutions to Elementary First- and Second-Order Differential Equations  A one-page reference guide to common ODEs.
• Flowchart for Series Solutions to Differential Equations  Do you use a power series or a Frobenius series? How do you handle various possibilities for the roots of the indicial equation?
• Functions expressed in terms of Hypergeometric Series  A variety of functions encountered in Boas are expressed in terms of hypergeometric and confluent hypergeometric series. No attempt is made to derive these expressions; the purpose of this one-page handout is simply to whet interest in these series.
• Uniformly Charged Wire Outside a Grounded Cylinder  An infinite wire carrying a uniform linear charge density runs parallel to an infinitely long, grounded, conducting cylinder. The potential everywhere outside the cylinder is derived using series expansions. The paper ends with some student exercises.
• Solving Newton's Second Law in One Variable in the Absence of Dissipation  This one-page note gives an algorithm for finding the time needed by a particle moving along a specified curve to get from a given point to another, assuming that no net nonconservative force acts on the particle. This is essentially a disguised version of the work-energy theorem.
• Derivations of Stirling's Approximation  Stirling's approximation is derived in two different ways. First, a quick proof uses Taylor's theorem, as is appropriate for the thermodynamics teacher who needs a 10-minute class derivation. Second, a much longer proof due to Mermin is presented; in his usual style, it meanders through many flowery meadows, including the "compound interest" formula for e and Wallis' formula for pi.
• Multipole Expansion of the Electrostatic Potential  The electrostatic potential of an arbitrary finite charge distribution is expanded in powers of Legendre polynomials. The monopole, dipole, and quadrupole terms are written out explicitly and compared to analogous quantities in mechanics. The handout ends with six student exercises.
• An Easy Method for Partial Fraction Decomposition  If you know how to find residues of simple poles (which is usually the first example one learns about in connection with residues), then you can easily decompose partial fractions without having to laboriously solve simultaneous equations. This brief handout gives an example outlining the idea. (A numerical check appears just after this list.)
• Physics Cinema Classics  A few choice selections from this "golden oldie" series.
• Mechanical Universe and Beyond  If you have seen this excellent series of 52 videos by David Goodstein, the following set of keywords may suffice to jog your memory. A web address where you can watch the series online is included.
• Video Encyclopedia of Physics Demonstrations  I watched this entire 25-laserdisc collection of physics demonstrations. I summarize my favorites and tie them to specific courses and textbooks in a typical physics curriculum. Select videos can be watched online.
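And here is the numerical check promised in the partial-fraction item: for a function with only simple poles, the coefficient of 1/(x − x_i) is just the residue 1/∏_{j≠i}(x_i − x_j), which the cover-up rule gives by inspection. The example function below is my own choice for illustration.

```python
import numpy as np

# Cover-up (residue) method for f(x) = 1/[(x-1)(x-2)(x-3)]: the coefficient
# of 1/(x - p) in the partial-fraction expansion is the residue of f at p.
poles = [1.0, 2.0, 3.0]
res = [1.0 / np.prod([p - q for q in poles if q != p]) for p in poles]

x = 0.37                                   # arbitrary test point
lhs = 1.0 / np.prod([x - p for p in poles])
rhs = sum(r / (x - p) for r, p in zip(res, poles))
print(res, np.isclose(lhs, rhs))           # [0.5, -1.0, 0.5] True
```

No simultaneous equations were solved; each coefficient came from a single substitution.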
Broadband infrared supercontinuum generation in a soft-glass photonic crystal fiber pumped with a sub-picosecond Er-doped fiber laser mode-locked by a graphene saturable absorber
Buczynski, R.; Sobon, G.; Sotor, J.; Klimczak, M.; Stepniewski, G.; Pysz, D.; Martynkien, T.; Kasztelanic, R.; Stepien, R.; Abramski, K. M.
Publication type: peer-reviewed scientific publication (Science Citation Index)
Laser Physics 23 (10), 2013, art. 105106, doi: 10.1088/1054-660X/23/10/105106
A fiber-based supercontinuum source, comprising a graphene mode-locked erbium fiber laser and a highly nonlinear photonic crystal fiber (PCF), is reported. The nonlinear fiber has its zero-dispersion wavelength shifted toward 1500 nm specifically for pumping with compact femtosecond and sub-picosecond fiber lasers operating in this spectral region. A chirped-pulse amplification system seeded by a graphene mode-locked laser, generating linearly polarized 850 fs pulses with a pulse energy of 20 nJ at a repetition rate of 50 MHz, was used as the pump source. A 6 cm long soft-glass PCF sample enabled generation of a supercontinuum spanning more than an octave, from 1000 to over 2300 nm, within a 20 dB dynamic range. The measured results are interpreted numerically, based on a solution of the nonlinear Schrödinger equation using the split-step Fourier method; an assignment of the nonlinear processes taking part in the observed broadening is proposed. The developed model is then used to estimate supercontinuum performance in the presented fiber under improved experimental conditions.
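For readers curious what a split-step Fourier solution of the nonlinear Schrödinger equation looks like in practice, here is a minimal sketch. All parameters (dispersion, nonlinearity, pulse shape) are placeholder values for illustration, not the fiber data from the paper, and only second-order dispersion and the Kerr nonlinearity are retained.

```python
import numpy as np

# Minimal symmetric split-step Fourier solver for the scalar NLSE
#   dA/dz = -i*(beta2/2) d^2A/dt^2 + i*gamma*|A|^2 A
# All parameters below are illustrative placeholders, not the paper's data.
N, T = 2**12, 10e-12                     # grid points, time window (s)
dt = T / N
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(N, dt)    # angular-frequency grid

beta2 = -20e-27                          # anomalous GVD, s^2/m (assumed)
gamma = 0.1                              # Kerr coefficient, 1/(W m) (assumed)
L, steps = 0.06, 4000                    # 6 cm of fiber
dz = L / steps

A = np.sqrt(2e4) / np.cosh(t / 100e-15)  # ~20 kW peak sech pulse (assumed)
half = np.exp(0.5j * (beta2 / 2) * w**2 * dz)       # half-step dispersion
for _ in range(steps):
    A = np.fft.ifft(half * np.fft.fft(A))           # dispersion, dz/2
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # full nonlinear step
    A = np.fft.ifft(half * np.fft.fft(A))           # dispersion, dz/2

spectrum_dB = 10 * np.log10(np.fft.fftshift(np.abs(np.fft.fft(A))**2) + 1e-30)
```

A realistic supercontinuum model would add higher-order dispersion, self-steepening, and the Raman response, as full generalized-NLSE treatments typically do; the skeleton above only shows the operator-splitting idea named in the abstract.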
Physics - The Basic Tools of Quantum Mechanics in Chemistry

Words to the reader about how to use this textbook

I. What This Book Does and Does Not Contain

This text is intended for use by beginning graduate students and advanced upper division undergraduate students in all areas of chemistry. It provides:
(i) An introduction to the fundamentals of quantum mechanics as they apply to chemistry,
(ii) Material that provides brief introductions to the subjects of molecular spectroscopy and chemical dynamics,
(iii) An introduction to computational chemistry applied to the treatment of electronic structures of atoms, molecules, radicals, and ions,
(iv) A large number of exercises, problems, and detailed solutions.
It does not provide much historical perspective on the development of quantum mechanics. Subjects such as the photoelectric effect, black-body radiation, the dual nature of electrons and photons, and the Davisson and Germer experiments are not even discussed. To provide a text that students can use to gain introductory level knowledge of quantum mechanics as applied to chemistry problems, such a non-historical approach had to be followed. This text immediately exposes the reader to the machinery of quantum mechanics.
Sections 1 and 2 (i.e., Chapters 1-7), together with Appendices A, B, C and E, could constitute a one-semester course for most first-year Ph.D. programs in the U.S.A. Section 3 (Chapters 8-12) and selected material from other appendices or selections from Section 6 would be appropriate for a second-quarter or second-semester course. Chapters 13-15 of Sections 4 and 5 would be of use for providing a link to a one-quarter or one-semester class covering molecular spectroscopy. Chapter 16 of Section 5 provides a brief introduction to chemical dynamics that could be used at the beginning of a class on this subject.
There are many quantum chemistry and quantum mechanics textbooks that cover material similar to that contained in Sections 1 and 2; in fact, our treatment of this material is generally briefer and less detailed than one finds in, for example, Quantum Chemistry, H. Eyring, J. Walter, and G. E. Kimball, J. Wiley and Sons, New York, N.Y. (1947); Quantum Chemistry, D. A. McQuarrie, University Science Books, Mill Valley, Ca. (1983); Molecular Quantum Mechanics, P. W. Atkins, Oxford Univ. Press, Oxford, England (1983); or Quantum Chemistry, I. N. Levine, Prentice Hall, Englewood Cliffs, N.J. (1991). Depending on the backgrounds of the students, our coverage may have to be supplemented in these first two Sections.
By covering this introductory material in less detail, we are able, within the confines of a text that can be used for a one-year or a two-quarter course, to introduce the student to the more modern subjects treated in Sections 3, 5, and 6. Our coverage of modern quantum chemistry methodology is not as detailed as that found in Modern Quantum Chemistry, A. Szabo and N. S.
Ostlund, McGraw-Hill, New York (1989), which contains little or none of the introductory material of our Sections 1 and 2.
By combining both introductory and modern up-to-date quantum chemistry material in a single book designed to serve as a text for one-quarter, one-semester, two-quarter, or one-year classes for first-year graduate students, we offer a unique product.
It is anticipated that a course dealing with atomic and molecular spectroscopy will follow the student's mastery of the material covered in Sections 1-4. For this reason, beyond these introductory sections, this text's emphasis is placed on electronic structure applications rather than on vibrational and rotational energy levels, which are traditionally covered in considerable detail in spectroscopy courses.
In brief summary, this book includes the following material:
1. The Section entitled The Basic Tools of Quantum Mechanics treats the fundamental postulates of quantum mechanics and several applications to exactly soluble model problems. These problems include the conventional particle-in-a-box (in one and more dimensions), rigid-rotor, harmonic oscillator, and one-electron hydrogenic atomic orbitals. The concept of the Born-Oppenheimer separation of electronic and vibration-rotation motions is introduced here. Moreover, the vibrational and rotational energies, states, and wavefunctions of diatomic, linear polyatomic and non-linear polyatomic molecules are discussed here at an introductory level. This section also introduces the variational method and perturbation theory as tools that are used to deal with problems that can not be solved exactly.
2. The Section Simple Molecular Orbital Theory deals with atomic and molecular orbitals in a qualitative manner, including their symmetries, shapes, sizes, and energies. It introduces bonding, non-bonding, and antibonding orbitals, delocalized, hybrid, and Rydberg orbitals, and introduces Hückel-level models for the calculation of molecular orbitals as linear combinations of atomic orbitals (a more extensive treatment of several semi-empirical methods is provided in Appendix F). This section also develops the Orbital Correlation Diagram concept that plays a central role in using Woodward-Hoffmann rules to predict whether chemical reactions encounter symmetry-imposed barriers.
3. The Electronic Configurations, Term Symbols, and States Section treats the spatial, angular momentum, and spin symmetries of the many-electron wavefunctions that are formed as antisymmetrized products of atomic or molecular orbitals. Proper coupling of angular momenta (orbital and spin) is covered here, and atomic and molecular term symbols are treated. The need to include Configuration Interaction to achieve qualitatively correct descriptions of certain species' electronic structures is treated here. The role of the resultant Configuration Correlation Diagrams in the Woodward-Hoffmann theory of chemical reactivity is also developed.
4. The Section on Molecular Rotation and Vibration provides an introduction to how vibrational and rotational energy levels and wavefunctions are expressed for diatomic, linear polyatomic, and non-linear polyatomic molecules whose electronic energies are described by a single potential energy surface. Rotations of "rigid" molecules and harmonic vibrations of uncoupled normal modes constitute the starting point of such treatments.
5. The Time Dependent Processes Section uses time-dependent perturbation theory, combined with the classical electric and magnetic fields that arise due to the interaction of photons with the nuclei and electrons of a molecule, to derive expressions for the rates of transitions among atomic or molecular electronic, vibrational, and rotational states induced by photon absorption or emission. Sources of line broadening and time correlation function treatments of absorption lineshapes are briefly introduced. Finally, transitions induced by collisions rather than by electromagnetic fields are briefly treated to provide an introduction to the subject of theoretical chemical dynamics.
6. The Section on More Quantitative Aspects of Electronic Structure Calculations introduces many of the computational chemistry methods that are used to quantitatively evaluate molecular orbital and configuration mixing amplitudes. The Hartree-Fock self-consistent field (SCF), configuration interaction (CI), multiconfigurational SCF (MCSCF), many-body and Møller-Plesset perturbation theories, coupled-cluster (CC), and density functional or Xα-like methods are included. The strengths and weaknesses of each of these techniques are discussed in some detail. Having mastered this section, the reader should be familiar with how potential energy hypersurfaces, molecular properties, forces on the individual atomic centers, and responses to externally applied fields or perturbations are evaluated on high speed computers.

II. How to Use This Book: Other Sources of Information and Building Necessary Background

In most classroom settings, the group of students learning quantum mechanics as it applies to chemistry have quite diverse backgrounds. In particular, the level of preparation in mathematics is likely to vary considerably from student to student, as will the exposure to symmetry and group theory. This text is organized in a manner that allows students to skip material that is already familiar while providing access to most if not all necessary background material. This is accomplished by dividing the material into sections, chapters and Appendices which fill in the background, provide methodological tools, and provide additional details.
The Appendices covering Point Group Symmetry and Mathematics Review are especially important to master. Neither of these two Appendices provides a first-principles treatment of its subject matter. The students are assumed to have fulfilled normal American Chemical Society mathematics requirements for a degree in chemistry, so only a review of the material especially relevant to quantum chemistry is given in the Mathematics Review Appendix. Likewise, the student is assumed to have learned or to be simultaneously learning about symmetry and group theory as applied to chemistry, so this subject is treated in a review and practical-application manner here. If group theory is to be included as an integral part of the class, then this text should be supplemented (e.g., by using the text Chemical Applications of Group Theory, F. A. Cotton, Interscience, New York, N.Y. (1963)).
The progression of sections leads the reader from the principles of quantum mechanics and several model problems which illustrate these principles and relate to chemical phenomena, through atomic and molecular orbitals, N-electron configurations, states, and term symbols, vibrational and rotational energy levels, photon-induced transitions among various levels, and eventually to computational techniques for treating chemical bonding and reactivity.
At the end of each Section, a set of Review Exercises and fully worked out answers are given. Attempting to work these exercises should allow the student to determine whether he or she needs to pursue additional background building via the Appendices.
In addition to the Review Exercises, sets of Exercises and Problems, and their solutions, are given at the end of each section. The exercises are brief and highly focused on learning a particular skill. They allow the student to practice the mathematical steps and other material introduced in the section. The problems are more extensive and require that numerous steps be executed. They illustrate application of the material contained in the chapter to chemical phenomena and they help teach the relevance of this material to experimental chemistry. In many cases, new material is introduced in the problems, so all readers are encouraged to become actively involved in solving all problems.
To further assist the learning process, readers may find it useful to consult other textbooks or literature references. Several particular texts are recommended for additional reading, further details, or simply an alternative point of view. They include the following (in each case, the abbreviated name used in this text is given following the proper reference):
1. Quantum Chemistry, H. Eyring, J. Walter, and G. E. Kimball, J. Wiley and Sons, New York, N.Y. (1947) - EWK.
2. Quantum Chemistry, D. A. McQuarrie, University Science Books, Mill Valley, Ca. (1983) - McQuarrie.
3. Molecular Quantum Mechanics, P. W. Atkins, Oxford Univ. Press, Oxford, England (1983) - Atkins.
4. The Fundamental Principles of Quantum Mechanics, E. C. Kemble, McGraw-Hill, New York, N.Y. (1937) - Kemble.
5. The Theory of Atomic Spectra, E. U. Condon and G. H. Shortley, Cambridge Univ. Press, Cambridge, England (1963) - Condon and Shortley.
6. The Principles of Quantum Mechanics, P. A. M. Dirac, Oxford Univ. Press, Oxford, England (1947) - Dirac.
7. Molecular Vibrations, E. B. Wilson, J. C. Decius, and P. C. Cross, Dover Pub., New York, N.Y. (1955) - WDC.
8. Chemical Applications of Group Theory, F. A. Cotton, Interscience, New York, N.Y. (1963) - Cotton.
9. Angular Momentum, R. N. Zare, John Wiley and Sons, New York, N.Y. (1988) - Zare.
10. Introduction to Quantum Mechanics, L. Pauling and E. B. Wilson, Dover Publications, Inc., New York, N.Y. (1963) - Pauling and Wilson.
11. Modern Quantum Chemistry, A. Szabo and N. S. Ostlund, McGraw-Hill, New York (1989) - Szabo and Ostlund.
12. Quantum Chemistry, I. N. Levine, Prentice Hall, Englewood Cliffs, N.J. (1991) - Levine.
13. Energetic Principles of Chemical Reactions, J. Simons, Jones and Bartlett, Portola Valley, Calif. (1983).

Section 1: The Basic Tools of Quantum Mechanics

Chapter 1. Quantum Mechanics Describes Matter in Terms of Wavefunctions and Energy Levels. Physical Measurements are Described in Terms of Operators Acting on Wavefunctions.

I. Operators, Wavefunctions, and the Schrödinger Equation

The trends in chemical and physical properties of the elements described beautifully in the periodic table and the ability of early spectroscopists to fit atomic line spectra by simple mathematical formulas and to interpret atomic electronic states in terms of empirical quantum numbers provide compelling evidence that some relatively simple framework must exist for understanding the electronic structures of all atoms.
The great predictive power of the concept of atomic valence further suggests that molecular electronic structure should be understandable in terms of those of the constituent atoms. Much of quantum chemistry attempts to make more quantitative these aspects of chemists' view of the periodic table and of atomic valence and structure. By starting from 'first principles' and treating atomic and molecular states as solutions of a so-called Schrödinger equation, quantum chemistry seeks to determine what underlies the empirical quantum numbers, orbitals, the aufbau principle and the concept of valence used by spectroscopists and chemists, in some cases, even prior to the advent of quantum mechanics.
Quantum mechanics is cast in a language that is not familiar to most students of chemistry who are examining the subject for the first time. Its mathematical content and how it relates to experimental measurements both require a great deal of effort to master. With these thoughts in mind, the authors have organized this introductory section in a manner that first provides the student with a brief introduction to the two primary constructs of quantum mechanics, operators and wavefunctions that obey a Schrödinger equation, then demonstrates the application of these constructs to several chemically relevant model problems, and finally returns to examine in more detail the conceptual structure of quantum mechanics.
By learning the solutions of the Schrödinger equation for a few model systems, the student can better appreciate the treatment of the fundamental postulates of quantum mechanics as well as their relation to experimental measurement, because the wavefunctions of the known model problems can be used to illustrate them.

A. Operators

Each physically measurable quantity has a corresponding operator. The eigenvalues of the operator tell the values of the corresponding physical property that can be observed.
In quantum mechanics, any experimentally measurable physical quantity F (e.g., energy, dipole moment, orbital angular momentum, spin angular momentum, linear momentum, kinetic energy) whose classical mechanical expression can be written in terms of the cartesian positions $\{q_i\}$ and momenta $\{p_i\}$ of the particles that comprise the system of interest is assigned a corresponding quantum mechanical operator $\hat{F}$. Given $F$ in terms of the $\{q_i\}$ and $\{p_i\}$, $\hat{F}$ is formed by replacing $p_j$ by $-i\hbar\,\partial/\partial q_j$ and leaving $q_j$ untouched. For example, if
$$F = \sum_{l=1}^{N} \left( \frac{p_l^2}{2m_l} + \frac{1}{2}k(q_l - q_l^0)^2 + L(q_l - q_l^0) \right),$$
then
$$\hat{F} = \sum_{l=1}^{N} \left( -\frac{\hbar^2}{2m_l}\frac{\partial^2}{\partial q_l^2} + \frac{1}{2}k(q_l - q_l^0)^2 + L(q_l - q_l^0) \right)$$
is the corresponding quantum mechanical operator.
Such an operator would occur when, for example, one describes the sum of the kinetic energies of a collection of particles (the $\sum_{l=1}^{N} p_l^2/2m_l$ term), plus the sum of "Hooke's law" parabolic potentials (the $\frac{1}{2}\sum_{l=1}^{N} k(q_l - q_l^0)^2$ term), and (the last term in $F$) the interactions of the particles with an externally applied field whose potential energy varies linearly as the particles move away from their equilibrium positions $\{q_l^0\}$.
The sum of the z-components of angular momenta of a collection of N particles has
$$F = \sum_{j=1}^{N} \left( x_j p_{y_j} - y_j p_{x_j} \right),$$
and the corresponding operator is
$$\hat{F} = -i\hbar \sum_{j=1}^{N} \left( x_j \frac{\partial}{\partial y_j} - y_j \frac{\partial}{\partial x_j} \right).$$
The x-component of the dipole moment for a collection of N particles has
$$F = \sum_{j=1}^{N} Z_j e\, x_j \quad\text{and}\quad \hat{F} = \sum_{j=1}^{N} Z_j e\, x_j,$$
where $Z_j e$ is the charge on the jth particle.
The mapping from $F$ to $\hat{F}$ is straightforward only in terms of cartesian coordinates. To map a classical function $F$, given in terms of curvilinear coordinates (even if they are orthogonal), into its quantum operator is not at all straightforward. Interested readers are referred to Kemble's text on quantum mechanics, which deals with this matter in detail. The mapping can always be done in terms of cartesian coordinates, after which a transformation of the resulting coordinates and differential operators to a curvilinear system can be performed. The corresponding transformation of the kinetic energy operator to spherical coordinates is treated in detail in Appendix A. The text by EWK also covers this topic in considerable detail.
The relationship of these quantum mechanical operators to experimental measurement will be made clear later in this chapter. For now, suffice it to say that these operators define equations whose solutions determine the values of the corresponding physical property that can be observed when a measurement is carried out; only the values so determined can be observed. This should suggest the origins of quantum mechanics' prediction that some measurements will produce discrete or quantized values of certain variables (e.g., energy, angular momentum, etc.).

B. Wavefunctions

The eigenfunctions of a quantum mechanical operator depend on the coordinates upon which the operator acts; these functions are called wavefunctions.
In addition to operators corresponding to each physically measurable quantity, quantum mechanics describes the state of the system in terms of a wavefunction $\Psi$ that is a function of the coordinates $\{q_j\}$ and of time $t$. The function $|\Psi(q_j,t)|^2 = \Psi^*\Psi$ gives the probability density for observing the coordinates at the values $q_j$ at time $t$. For a many-particle system such as the H2O molecule, the wavefunction depends on many coordinates. For the H2O example, it depends on the x, y, and z (or r, θ, and φ) coordinates of the ten electrons and the x, y, and z (or r, θ, and φ) coordinates of the oxygen nucleus and of the two protons; a total of thirty-nine coordinates appear in $\Psi$.
In classical mechanics, the coordinates $q_j$ and their corresponding momenta $p_j$ are functions of time. The state of the system is then described by specifying $q_j(t)$ and $p_j(t)$. In quantum mechanics, the concept that $q_j$ is known as a function of time is replaced by the concept of the probability density for finding $q_j$ at a particular value at a particular time $t$: $|\Psi(q_j,t)|^2$. Knowledge of the corresponding momenta as functions of time is also relinquished in quantum mechanics; again, only knowledge of the probability density for finding $p_j$ with any particular value at a particular time $t$ remains.
C. The Schrödinger Equation

This equation is an eigenvalue equation for the energy or Hamiltonian operator; its eigenvalues provide the energy levels of the system.

1. The Time-Dependent Equation

If the Hamiltonian operator contains the time variable explicitly, one must solve the time-dependent Schrödinger equation.
How to extract from $\Psi(q_j,t)$ knowledge about momenta is treated below in Sec. III.A, where the structure of quantum mechanics, the use of operators and wavefunctions to make predictions and interpretations about experimental measurements, and the origin of 'uncertainty relations' such as the well known Heisenberg uncertainty condition dealing with measurements of coordinates and momenta are also treated.
Before moving deeper into understanding what quantum mechanics 'means', it is useful to learn how the wavefunctions $\Psi$ are found by applying the basic equation of quantum mechanics, the Schrödinger equation, to a few exactly soluble model problems. Knowing the solutions to these 'easy' yet chemically very relevant models will then facilitate learning more of the details about the structure of quantum mechanics because these model cases can be used as 'concrete examples'.
The Schrödinger equation is a differential equation depending on time and on all of the spatial coordinates necessary to describe the system at hand (thirty-nine for the H2O example cited above). It is usually written
$$\hat{H}\,\Psi = i\hbar\,\frac{\partial \Psi}{\partial t},$$
where $\Psi(q_j,t)$ is the unknown wavefunction and $\hat{H}$ is the operator corresponding to the total energy physical property of the system. This operator is called the Hamiltonian and is formed, as stated above, by first writing down the classical mechanical expression for the total energy (kinetic plus potential) in cartesian coordinates and momenta and then replacing all classical momenta $p_j$ by their quantum mechanical operators $\hat{p}_j = -i\hbar\,\partial/\partial q_j$.
For the H2O example used above, the classical mechanical energy of all thirteen particles is
$$E = \sum_i \left\{ \frac{p_i^2}{2m_e} + \frac{1}{2}\sum_j \frac{e^2}{r_{i,j}} - \sum_a \frac{Z_a e^2}{r_{i,a}} \right\} + \sum_a \left\{ \frac{p_a^2}{2m_a} + \frac{1}{2}\sum_b \frac{Z_a Z_b e^2}{r_{a,b}} \right\},$$
where the indices i and j are used to label the ten electrons whose thirty cartesian coordinates are $\{q_i\}$, and a and b label the three nuclei whose charges are denoted $\{Z_a\}$ and whose nine cartesian coordinates are $\{q_a\}$. The electron and nuclear masses are denoted $m_e$ and $\{m_a\}$, respectively.
The corresponding Hamiltonian operator is
$$\hat{H} = \sum_i \left\{ -\frac{\hbar^2}{2m_e}\frac{\partial^2}{\partial q_i^2} + \frac{1}{2}\sum_j \frac{e^2}{r_{i,j}} - \sum_a \frac{Z_a e^2}{r_{i,a}} \right\} + \sum_a \left\{ -\frac{\hbar^2}{2m_a}\frac{\partial^2}{\partial q_a^2} + \frac{1}{2}\sum_b \frac{Z_a Z_b e^2}{r_{a,b}} \right\}.$$
Notice that $\hat{H}$ is a second order differential operator in the space of the thirty-nine cartesian coordinates that describe the positions of the ten electrons and three nuclei. It is a second order operator because the momenta appear in the kinetic energy as $p_j^2$ and $p_a^2$, and the quantum mechanical operator for each momentum $\hat{p} = -i\hbar\,\partial/\partial q$ is of first order. The Schrödinger equation for the H2O example at hand then reads
$$\sum_i \left\{ -\frac{\hbar^2}{2m_e}\frac{\partial^2}{\partial q_i^2} + \frac{1}{2}\sum_j \frac{e^2}{r_{i,j}} - \sum_a \frac{Z_a e^2}{r_{i,a}} \right\}\Psi + \sum_a \left\{ -\frac{\hbar^2}{2m_a}\frac{\partial^2}{\partial q_a^2} + \frac{1}{2}\sum_b \frac{Z_a Z_b e^2}{r_{a,b}} \right\}\Psi = i\hbar\,\frac{\partial \Psi}{\partial t}.$$
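As a small computational aside (an addition to the excerpt, not part of the textbook), the quantization rule just described is easy to exercise with a symbolic-algebra package. The sketch below builds a 1D Hamiltonian by the replacement $p \to -i\hbar\,\partial/\partial q$ and checks that a plane wave returns the classical kinetic energy as its eigenvalue; the function names are my own.

```python
import sympy as sp

# Build a 1D Hamiltonian by the rule p -> -i*hbar*d/dq described in the text,
# and check a plane wave against the free-particle case. Illustrative only.
q, p0 = sp.symbols('q p0', real=True)
hbar, m = sp.symbols('hbar m', positive=True)

def H_act(f, V):
    """Apply H = p^2/2m + V(q) with p replaced by -i*hbar*d/dq."""
    return -hbar**2 / (2 * m) * sp.diff(f, q, 2) + V * f

plane_wave = sp.exp(sp.I * p0 * q / hbar)
E = sp.simplify(H_act(plane_wave, 0) / plane_wave)
print(E)   # -> p0**2/(2*m), the classical kinetic energy, as expected
```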
2. The Time-Independent Equation

If the Hamiltonian operator does not contain the time variable explicitly, one can solve the time-independent Schrödinger equation.
In cases where the classical energy, and hence the quantum Hamiltonian, do not contain terms that are explicitly time dependent (e.g., interactions with time varying external electric or magnetic fields would add to the above classical energy expression time dependent terms discussed later in this text), the separation of variables technique can be used to reduce the Schrödinger equation to a time-independent equation.
In such cases, $\hat{H}$ is not explicitly time dependent, so one can assume that $\Psi(q_j,t)$ is of the form
$$\Psi(q_j,t) = \Psi(q_j)\,F(t).$$
Substituting this 'ansatz' into the time-dependent Schrödinger equation gives
$$\Psi(q_j)\, i\hbar\,\frac{\partial F}{\partial t} = \hat{H}\,\Psi(q_j)\,F(t).$$
Dividing by $\Psi(q_j)F(t)$ then gives
$$F^{-1}\left( i\hbar\,\frac{\partial F}{\partial t} \right) = \Psi^{-1}\left( \hat{H}\,\Psi(q_j) \right).$$
Since $F(t)$ is only a function of time $t$, and $\Psi(q_j)$ is only a function of the spatial coordinates $\{q_j\}$, and because the left hand and right hand sides must be equal for all values of $t$ and of $\{q_j\}$, both the left and right hand sides must equal a constant. If this constant is called $E$, the two equations that are embodied in this separated Schrödinger equation read as follows:
$$\hat{H}\,\Psi(q_j) = E\,\Psi(q_j),$$
$$i\hbar\,\frac{\partial F(t)}{\partial t} = i\hbar\,\frac{dF(t)}{dt} = E\,F(t).$$
The first of these equations is called the time-independent Schrödinger equation; it is a so-called eigenvalue equation in which one is asked to find functions that yield a constant multiple of themselves when acted on by the Hamiltonian operator. Such functions are called eigenfunctions of $\hat{H}$ and the corresponding constants are called eigenvalues of $\hat{H}$. For example, if $\hat{H}$ were of the form $\hat{H} = -\frac{\hbar^2}{2M}\frac{\partial^2}{\partial \phi^2}$, then functions of the form $\exp(im\phi)$ would be eigenfunctions because
$$\left\{ -\frac{\hbar^2}{2M}\frac{\partial^2}{\partial \phi^2} \right\} \exp(im\phi) = \left\{ \frac{m^2\hbar^2}{2M} \right\} \exp(im\phi).$$
In this case, $m^2\hbar^2/2M$ is the eigenvalue.
When the Schrödinger equation can be separated to generate a time-independent equation describing the spatial coordinate dependence of the wavefunction, the eigenvalue $E$ must be returned to the equation determining $F(t)$ to find the time dependent part of the wavefunction. By solving
$$i\hbar\,\frac{dF(t)}{dt} = E\,F(t)$$
once $E$ is known, one obtains
$$F(t) = \exp(-iEt/\hbar),$$
and the full wavefunction can be written as
$$\Psi(q_j,t) = \Psi(q_j)\,\exp(-iEt/\hbar).$$
For the above example, the time dependence is expressed by
$$F(t) = \exp\left( -it\left\{ \frac{m^2\hbar^2}{2M} \right\}\Big/\hbar \right).$$
Having been introduced to the concepts of operators, wavefunctions, the Hamiltonian and its Schrödinger equation, it is important to now consider several examples of the applications of these concepts. The examples treated below were chosen to provide the learner with valuable experience in solving the Schrödinger equation; they were also chosen because the models they embody form the most elementary chemical models of electronic motions in conjugated molecules and in atoms, rotations of linear molecules, and vibrations of chemical bonds.

II. Examples of Solving the Schrödinger Equation

A. Free-Particle Motion in Two Dimensions

The number of dimensions depends on the number of particles and the number of spatial (and other) dimensions needed to characterize the position and motion of each particle.
1. The Schrödinger Equation

Consider an electron of mass $m$ and charge $e$ moving on a two-dimensional surface that defines the x,y plane (perhaps the electron is constrained to the surface of a solid by a potential that binds it tightly to a narrow region in the z-direction), and assume that the electron experiences a constant potential $V_0$ at all points in this plane (on any real atomic or molecular surface, the electron would experience a potential that varies with position in a manner that reflects the periodic structure of the surface). The pertinent time-independent Schrödinger equation is
$$-\frac{\hbar^2}{2m}\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right)\psi(x,y) + V_0\,\psi(x,y) = E\,\psi(x,y).$$
Because there are no terms in this equation that couple motion in the x and y directions (e.g., no terms of the form $x^a y^b$ or $\partial/\partial x\,\partial/\partial y$ or $x\,\partial/\partial y$), separation of variables can be used to write $\psi$ as a product $\psi(x,y) = A(x)B(y)$. Substitution of this form into the Schrödinger equation, followed by collecting together all x-dependent and all y-dependent terms, gives
$$-\frac{\hbar^2}{2m}A^{-1}\frac{\partial^2 A}{\partial x^2} - \frac{\hbar^2}{2m}B^{-1}\frac{\partial^2 B}{\partial y^2} = E - V_0.$$
Since the first term contains no y-dependence and the second contains no x-dependence, both must actually be constant (these two constants are denoted $E_x$ and $E_y$, respectively), which allows two separate Schrödinger equations to be written:
$$-\frac{\hbar^2}{2m}A^{-1}\frac{\partial^2 A}{\partial x^2} = E_x, \qquad -\frac{\hbar^2}{2m}B^{-1}\frac{\partial^2 B}{\partial y^2} = E_y.$$
The total energy $E$ can then be expressed in terms of these separate energies $E_x$ and $E_y$ as $E_x + E_y = E - V_0$. Solutions to the x- and y-Schrödinger equations are easily seen to be:
$$A(x) = \exp\left( ix\sqrt{2mE_x/\hbar^2} \right) \ \text{and} \ \exp\left( -ix\sqrt{2mE_x/\hbar^2} \right),$$
$$B(y) = \exp\left( iy\sqrt{2mE_y/\hbar^2} \right) \ \text{and} \ \exp\left( -iy\sqrt{2mE_y/\hbar^2} \right).$$
Two independent solutions are obtained for each equation because the x- and y-space Schrödinger equations are both second order differential equations.

2. Boundary Conditions

The boundary conditions, not the Schrödinger equation, determine whether the eigenvalues will be discrete or continuous.
If the electron is entirely unconstrained within the x,y plane, the energies $E_x$ and $E_y$ can assume any value; this means that the experimenter can 'inject' the electron onto the x,y plane with any total energy $E$ and any components $E_x$ and $E_y$ along the two axes as long as $E_x + E_y = E$. In such a situation, one speaks of the energies along both coordinates as being 'in the continuum' or 'not quantized'.
In contrast, if the electron is constrained to remain within a fixed area in the x,y plane (e.g., a rectangular or circular region), then the situation is qualitatively different. Constraining the electron to any such specified area gives rise to so-called boundary conditions that impose additional requirements on the above A and B functions. These constraints can arise, for example, if the potential $V_0(x,y)$ becomes very large for x,y values outside the region, in which case the probability of finding the electron outside the region is very small. Such a case might represent, for example, a situation in which the molecular structure of the solid surface changes outside the enclosed region in a way that is highly repulsive to the electron.
For example, if motion is constrained to take place within a rectangular region defined by $0 \le x \le L_x$ and $0 \le y \le L_y$, then the continuity property that all wavefunctions must obey (because of their interpretation as probability densities, which must be continuous) causes A(x) to vanish at 0 and at $L_x$. Likewise, B(y) must vanish at 0 and at $L_y$.
To implement these constraints for A(x), one must linearly combine the above two solutions to achieve a function that vanishes at x=0:
$$A(x) = \exp\left( ix\sqrt{2mE_x/\hbar^2} \right) - \exp\left( -ix\sqrt{2mE_x/\hbar^2} \right).$$
One is allowed to linearly combine solutions of the Schrödinger equation that have the same energy (i.e., are degenerate) because Schrödinger equations are linear differential equations. An analogous process must be applied to B(y) to achieve a function that vanishes at y=0:
$$B(y) = \exp\left( iy\sqrt{2mE_y/\hbar^2} \right) - \exp\left( -iy\sqrt{2mE_y/\hbar^2} \right).$$
Further requiring A(x) and B(y) to vanish, respectively, at $x=L_x$ and $y=L_y$ gives equations that can be obeyed only if $E_x$ and $E_y$ assume particular values:
$$\exp\left( iL_x\sqrt{2mE_x/\hbar^2} \right) - \exp\left( -iL_x\sqrt{2mE_x/\hbar^2} \right) = 0,$$
$$\exp\left( iL_y\sqrt{2mE_y/\hbar^2} \right) - \exp\left( -iL_y\sqrt{2mE_y/\hbar^2} \right) = 0.$$
These equations are equivalent to
$$\sin\left( L_x\sqrt{2mE_x/\hbar^2} \right) = \sin\left( L_y\sqrt{2mE_y/\hbar^2} \right) = 0.$$
Knowing that $\sin(\theta)$ vanishes at $\theta = n\pi$ for $n = 1, 2, 3, \ldots$ (although $\sin(\theta)$ also vanishes for n=0, this choice makes the wavefunction vanish for all x or y and is therefore unacceptable because it represents zero probability density at all points in space), one concludes that the energies $E_x$ and $E_y$ can assume only values that obey
$$L_x\sqrt{2mE_x/\hbar^2} = n_x\pi, \qquad L_y\sqrt{2mE_y/\hbar^2} = n_y\pi,$$
or
$$E_x = \frac{n_x^2 \pi^2 \hbar^2}{2mL_x^2} \quad\text{and}\quad E_y = \frac{n_y^2 \pi^2 \hbar^2}{2mL_y^2}, \qquad n_x, n_y = 1, 2, 3, \ldots$$
It is important to stress that it is the imposition of boundary conditions, expressing the fact that the electron is spatially constrained, that gives rise to quantized energies. In the absence of spatial confinement, or with confinement only at x=0 or $L_x$ or only at y=0 or $L_y$, quantized energies would not be realized.
In this example, confinement of the electron to a finite interval along both the x and y coordinates yields energies that are quantized along both axes. If the electron were confined along one coordinate (e.g., between $0 \le x \le L_x$) but not along the other (i.e., B(y) is either restricted to vanish at y=0 or at $y=L_y$ or at neither point), then the total energy $E$ lies in the continuum; its $E_x$ component is quantized but $E_y$ is not. Such cases arise, for example, when a linear triatomic molecule has more than enough energy in one of its bonds to rupture it but not much energy in the other bond; the first bond's energy lies in the continuum, but the second bond's energy is quantized.
Perhaps more interesting is the case in which the bond with the higher dissociation energy is excited to a level that is not enough to break it but that is in excess of the dissociation energy of the weaker bond. In this case, one has two degenerate states: i. the strong bond having high internal energy and the weak bond having low energy ($\psi_1$), and ii. the strong bond having little energy and the weak bond having more than enough energy to rupture it ($\psi_2$). Although an experiment may prepare the molecule in a state that contains only the former component (i.e., $\psi = C_1\psi_1 + C_2\psi_2$ with $C_1 \gg C_2$), coupling between the two degenerate functions (induced by terms in the Hamiltonian $\hat{H}$ that have been ignored in defining $\psi_1$ and $\psi_2$) usually causes the true wavefunction $\Psi = e^{-it\hat{H}/\hbar}\,\psi$ to acquire a component of the second function as time evolves. In such a case, one speaks of internal vibrational energy flow giving rise to unimolecular decomposition of the molecule.
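To make the quantized box energies just derived concrete, here is a small numerical illustration (an addition to the excerpt, not part of the textbook); the 1 nm box size is an assumed value chosen for illustration.

```python
import numpy as np

# Lowest particle-in-a-2D-box levels E = Ex + Ey for an electron, using the
# boundary-condition result derived above; the 1 nm box size is assumed.
hbar = 1.054571817e-34      # J s
m_e  = 9.1093837015e-31     # kg
eV   = 1.602176634e-19      # J per eV
Lx = Ly = 1e-9              # box edges (m)

def E(nx, ny):              # energy measured relative to V0
    return (np.pi**2 * hbar**2 / (2 * m_e)) * (nx**2 / Lx**2 + ny**2 / Ly**2)

levels = sorted((E(nx, ny) / eV, nx, ny)
                for nx in range(1, 4) for ny in range(1, 4))
for e, nx, ny in levels[:5]:
    print(f"(nx,ny)=({nx},{ny}):  {e:.3f} eV above V0")
```

For a square box the output also exhibits the (n_x, n_y) vs (n_y, n_x) degeneracy, a preview of the symmetry-induced degeneracies discussed later in the text.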
3. Energies and Wavefunctions for Bound States

For discrete energy levels, the energies are specified functions that depend on quantum numbers, one for each degree of freedom that is quantized.

Returning to the situation in which motion is constrained along both axes, the resultant total energies a...
Quantum Field Derivation of the Superluminal Schrödinger Equation and Deuteron Potential

E. J. Betinis, Department of Physics, Elmhurst College, Elmhurst, Illinois 60126, U.S.A.

Volume 15: Pages 11-40, 2002

The superluminal Schrödinger equation originally derived by the author [Phys. Essays 11, 311 (1998)] was based on the author's derivation of the superluminal form of kinetic energy [Phys. Essays 11, 81 (1998)], which did not become singular at the velocity of light. This form of the kinetic energy then increased indefinitely as the particle velocity increased indefinitely, without a singularity at v = c. The superluminal Schrödinger equation was then found by writing the kinetic energy as an operator by use of the usual quantum-mechanical operator formalism corresponding to the superluminal kinetic energy. In the present paper, it is demonstrated that one can construct Lagrangian and Hamiltonian densities that allow the rederivation of the author's superluminal Schrödinger equation by the quantum field theory approach. Although not explored in the present paper, the quantum field so constructed may yield further insights into the world of superluminal physics. The superluminal spherically symmetric Schrödinger equation was solved for the eigenfunctions and, by a method of successive approximations, the spherically symmetric superluminal potential was found for the deuteron. The iterative procedure developed would be broadly classified as the inverse boundary value problem for finding the potential for the S-state. The iterations were carried out by use of the Laplace transform. The solution of the third-order differential equation derived by the author for superluminal quantum mechanics displayed unique characteristics. The principal objective was to find the attractive deuteron potential with a repulsive "hard" core. It was found that the potential obtained by use of the superluminal Schrödinger equation for all practical purposes had converged after the fourth iteration. Moreover, the potentials calculated by the superluminal approach were very similar to those found by the author in a previous paper [Phys. Essays 6, 341 (1993)] and resembled the subluminal potentials found by Reid [B.L. Cohen, Concepts of Nuclear Physics (McGraw-Hill, New York, 1974), p. 59]. In view of the fact that approximations had to be made to solve the superluminal Schrödinger equation, the potential found by iteration was acceptable, as it had the general characteristics of those found before. Considerable care had to be exercised in carrying out the calculations numerically by use of the quite extensive computer program written. Since the potential found from the superluminal approach was, in essence, a reasonable extension of the subluminal potentials, this finding supports the idea that the author's superluminal theories of previous works are also valid. In this paper, it is shown that the boson exchanged between nucleons to mediate the nuclear force has a radial velocity v > c. In another paper [Phys. Essays 9, 135 (1996)], the author also showed by use of the Heisenberg uncertainty principle that this boson also had a radial velocity greater than c.
The implication of the results of this paper and the 1996 paper above is that particles moving at v > c in the nucleus may also move at v > c upon their release from the nucleus and, if this contention is experimentally verified, the implication that follows is that the velocity-of-light limitation should be lifted from other branches of physics.

Keywords: quantum field theory, superluminal Schrödinger equation, superluminal nuclear potentials, Laplace transform, successive approximations, numerical differentiation and integration, third-order differential equation

Received: February 25, 2000; Published online: December 15, 2008
Martin Klaus
McBryde Room 472
460 McBryde Hall, Virginia Tech
225 Stanger Street
Blacksburg, VA 24061-1026

Broadly speaking, my research area is the spectral theory and inverse scattering theory of linear operators and nonlinear evolution equations arising in mathematical physics. Over the past few years, my focus of research has been the spectrum of certain linear systems of differential equations (the Zakharov-Shabat and AKNS systems) which play a crucial role in the solution of the nonlinear Schrödinger equation by means of the inverse scattering technique. The nonlinear Schrödinger equation is the partial differential equation that governs the propagation of pulses in optical fibers; it is studied extensively in the mathematical and engineering literature. The study of the spectrum of these systems poses interesting challenges, and progress on the mathematical side may have direct consequences for the design of fiber optic systems.
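For readers unfamiliar with the equation mentioned above, here is an illustrative split-step Fourier integration of the focusing nonlinear Schrödinger equation i u_z + ½ u_tt + |u|² u = 0 in soliton units. This is a standard textbook scheme, not code connected to the research described; grid sizes and step counts are arbitrary illustrative choices. The sech-shaped soliton, a hallmark of the equation's integrability, propagates with its shape preserved:

```python
import numpy as np

N, T, Z_MAX, STEPS = 1024, 40.0, 10.0, 2000
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=T / N)   # angular spectral grid
dz = Z_MAX / STEPS

u = 1 / np.cosh(t)                           # fundamental soliton u(0, t) = sech(t)
half_step = np.exp(-0.25j * k**2 * dz)       # half-step of the dispersive part

for _ in range(STEPS):                       # Strang splitting: D/2, N, D/2
    u = np.fft.ifft(half_step * np.fft.fft(u))
    u = u * np.exp(1j * np.abs(u)**2 * dz)   # full nonlinear step
    u = np.fft.ifft(half_step * np.fft.fft(u))

# The soliton is shape-preserving, so the amplitude drift stays small.
print("max amplitude drift:", np.max(np.abs(np.abs(u) - 1 / np.cosh(t))))
```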
Hydrogen Facts - H or Atomic Number 1

Quick Facts about the Element Hydrogen

Over 75% of the matter in the universe is hydrogen. Helium accounts for most of the other quarter, with the other elements accounting for less than one percent.

Hydrogen is the chemical element with the element symbol H and atomic number 1. It's essential for all life and abundant in the universe, so it's one element you should get to know better. Here are basic facts about the first element in the periodic table, hydrogen.

Fast Facts: Hydrogen
• Element Name: Hydrogen
• Element Symbol: H
• Atomic Number: 1
• Group: Group 1
• Classification: Nonmetal
• Block: s-block
• Electron Configuration: 1s1
• Phase at STP: Gas
• Melting Point: 13.99 K (−259.16 °C, −434.49 °F)
• Boiling Point: 20.271 K (−252.879 °C, −423.182 °F)
• Density at STP: 0.08988 g/L
• Oxidation States: -1, +1
• Electronegativity (Pauling Scale): 2.20
• Crystal Structure: Hexagonal
• Magnetic Ordering: Diamagnetic
• Discovery: Henry Cavendish (1766)
• Named By: Antoine Lavoisier (1783)

Atomic Number: 1

Hydrogen is the first element in the periodic table, meaning it has an atomic number of 1, or 1 proton in each hydrogen atom. The name of the element comes from the Greek words hydro for "water" and genes for "forming," since hydrogen bonds with oxygen to form water (H2O). Robert Boyle produced hydrogen gas in 1671 during an experiment with iron and acid, but hydrogen wasn't recognized as an element until 1766 by Henry Cavendish.

Atomic Weight: 1.00794

This makes hydrogen the lightest element. It is so light that the pure element isn't bound by Earth's gravity, so there is very little hydrogen gas left in the atmosphere. Massive planets, such as Jupiter, consist mainly of hydrogen, much like the Sun and stars. Even though hydrogen, as a pure element, bonds to itself to form H2, it's still lighter than a single atom of helium, because most hydrogen atoms don't have any neutrons: two hydrogen atoms (1.008 atomic mass units per atom) have together only about half the mass of one helium atom (atomic mass 4.003).

Hydrogen Facts
• Hydrogen is the most abundant element. About 90% of the atoms and 75% of the element mass of the universe is hydrogen, usually in the atomic state or as plasma. Although hydrogen is the most abundant element in the human body in terms of numbers of atoms, it's only 3rd in abundance by mass, after oxygen and carbon, because hydrogen is so light. Hydrogen exists as a pure element on Earth as a diatomic gas, H2, but it's rare in Earth's atmosphere because it is light enough to escape gravity and bleed into space. The element remains common at the Earth's surface, where it is bound into water and hydrocarbons to be the third most abundant element.
• There are three natural isotopes of hydrogen: protium, deuterium, and tritium. The most common isotope of hydrogen is protium, which has 1 proton, 0 neutrons, and 1 electron. This makes hydrogen the only element that can have atoms without any neutrons! Deuterium has 1 proton, 1 neutron, and 1 electron. Although this isotope is heavier than protium, deuterium is not radioactive. However, tritium does emit radiation. Tritium is the isotope with 1 proton, 2 neutrons, and 1 electron.
• Hydrogen gas is extremely flammable. It is used as a fuel by the space shuttle main engine and was associated with the famous explosion of the Hindenburg airship. While many people consider oxygen to be flammable, it actually doesn't burn. However, it's an oxidizer, which is why hydrogen is so explosive in air or with oxygen.
• Hydrogen compounds commonly are called hydrides.
• Hydrogen may be produced by reacting metals with acids (e.g., zinc with hydrochloric acid).
• The physical form of hydrogen at room temperature and pressure is a colorless and odorless gas. The gas and liquid are nonmetals, but when hydrogen is compressed into a solid, the element is an alkali metal. Solid crystalline metallic hydrogen has the lowest density of any crystalline solid.
• Hydrogen has many uses, though most hydrogen is used for processing fossil fuels and in the production of ammonia. It is gaining importance as an alternate fuel that produces energy by combustion, similar to what happens in fossil fuel engines. Hydrogen is also used in fuel cells that react hydrogen and oxygen to produce water and electricity.
• In compounds, hydrogen can take a negative charge (H-) or a positive charge (H+).
• Hydrogen is the only atom for which the Schrödinger equation has an exact solution (see the sketch after this list for the energy levels it yields).
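As a small numerical aside (an illustrative snippet using the standard value of the hydrogen ionization energy), the exact solution just mentioned gives the familiar energy levels E_n = −13.6 eV / n², and the mass comparison quoted above can be checked in one line:

```python
RYDBERG_EV = 13.605693   # hydrogen ionization energy in eV (standard value)

# Energy levels from the exact hydrogen solution: E_n = -13.6 eV / n^2.
for n in range(1, 5):
    print(f"E_{n} = {-RYDBERG_EV / n**2:7.3f} eV")

# Mass comparison quoted above (atomic masses in u):
m_h, m_he = 1.008, 4.003
print(f"2 x H = {2 * m_h:.3f} u vs He = {m_he:.3f} u "
      f"(ratio {2 * m_h / m_he:.3f}, i.e. about half)")
```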
Wednesday, January 06, 2010

Is Physics Cognitively Biased?

Recently we discussed the question "What is natural?" Today, I want to expand on the key point I was making. What humans find interesting, natural, elegant, or beautiful originates in brains that developed through evolution and were shaped by sensory input received and processed. This genetic history also affects the sort of question we are likely to ask, the kind of theory we search for, and how we search. I am wondering then: may it be that we are biased to miss clues necessary for progress in physics?

It would be surprising if we were scientifically entirely unbiased. Cognitive biases caused by evolutionary traits inappropriate for the modern world have recently received a lot of attention. Many psychological effects in consumer behavior, opinion and decision making are well known by now (and frequently used and abused). Also the neurological origins of religious thought and superstition have been examined. One study particularly interesting in this context is Peter Brugger et al.'s study on the role of dopamine in identifying signals over noise.

If you bear with me for a paragraph, there's something else interesting about Brugger's study. I came across this study mentioned in Bild der Wissenschaft (a German popular science magazine, high quality, very much recommended), but with no reference. So I checked Google Scholar but didn't find the paper. I checked the author's website but nothing there either. Several Google web searches on related keywords however brought up first of all a note in NewScientist from July 2002. No journal reference. Then there's literally dozens of articles mentioning the study after this. Some refer to the NewScientist article, some don't, but they all sound like they copied from each other. The article was mentioned in Psychology Today, was quoted in newspapers, etc. But no journal reference anywhere. Frustrated, I finally wrote to Peter Brugger asking for a reference. He replied almost immediately. Turns out the study was not published at all! Though it has meanwhile, after more than 7 years, been written up and is apparently in the publication process, I find it astonishing how much attention a study could get without having been peer reviewed.

Anyway, Brugger was kind enough to send me a copy of the paper in print, so I know now what they actually did. To briefly summarize it: they recruited two groups of people, 20 each. One were self-declared believers in the paranormal, the other one self-declared skeptics. This self-description was later quantified with commonly used questionnaires like the Australian Sheep-Goat Scale (with a point scale rather than binary though). These people performed two tasks. In one task they were briefly shown (short) words that sometimes were sensible words, sometimes just random letters. In the other task they were briefly shown faces or just random combinations of facial features. (The two tasks apparently use different parts of the brain, but that's not so relevant for our purposes. Also, the stimuli were shown to the right and left visual fields separately for the same reason, but that's not so important for us either.) The participants had to identify a "signal" (word/face) from the "noise" (random combination) in a short amount of time, too short to use the part of the brain necessary for rational thought. The researchers counted the hits and misses. They focused on two parameters from this measurement series.
The first is the direction of the bias: whether it's randomly wrong, has a bias for false positives, or has a bias for false negatives (Type I error or Type II error). The second parameter is how well the signal was identified in total. The experiment was repeated after a randomly selected half of the participants received a high dose of levodopa (a Parkinson medication that increases the dopamine level in the brain), the other half a placebo.

The result was the following. First, without the medication the skeptics had a bias for Type II errors (they more often discarded as noise what really was a signal), whereas the believers had a bias for Type I errors (they more often saw a signal where it was really just noise). The bias was equally strong for both, but in opposite directions. It is interesting though not too surprising that the expressed worldview correlates with unconscious cognitive characteristics. Overall, the skeptics were better at identifying the signal. Then, with the medication, the bias of both skeptics and believers tended towards the mean (random yes/no misses), but the skeptics overall became as bad at identifying signals as the believers, who stayed equally bad as without extra dopamine. The researchers' conclusion is that the (previously made) claim that dopamine generally increases the signal to noise ratio is wrong, and that certain psychological traits (roughly, the willingness to believe in the paranormal) correlate with a tendency to false positives. Moreover, other research results seem to have shown a correlation between high dopamine levels and various psychological disorders. One can roughly say if you fiddle with the dose you'll start seeing "signals" everywhere and eventually go bonkers (psychotic, paranoid, schizoid, you name it). Not my field, so I can't really comment on the status of this research. Sounds plausible enough (I'm seeing a signal here).

In any case, these research studies show that our brain chemistry contributes to us finding patterns and signals, and, in extreme, also to assigning meaning to the meaningless (there really is no hidden message in the word-verification). Evolutionarily, Type I errors in signal detection are vastly preferable: It's fine if a breeze moving leaves gives you an adrenaline rush, but you only mistake a tiger for a breeze once. Thus, today the world is full of believers (Al Gore is the antichrist) and paranoids who see a tiger in every bush/a feminist in every woman. Such overactive signal identification has also been argued to contribute to the wide spread of religions (a topic that currently seems to be fashionable). Seeing signals in noise is however also a source of creativity and inspiration. Genius and insanity, as they say, go hand in hand.

It seems however odd to me to blame religion on a cognitive bias for Type I errors. Searching for hidden relations, at the risk that there are none, doesn't per se only characterize believers in The Almighty Something, but also scientists. The difference is in the procedure thereafter. The religious will see patterns and interpret them as signs of God. The scientist will see patterns and look for an explanation. (God can be aptly characterized as the ultimate non-explanation.) This means that Brugger's (self-)classification of people by paranormal beliefs is somewhat beside the point (it likely depends on the education). You don't have to believe in ESP to see patterns where there are none.
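In signal detection terms, the two parameters the study tracked correspond to the criterion (the direction of the bias) and the sensitivity d′ (how well signal is told from noise). A small sketch with made-up hit and false-alarm rates, just to show how the two quantities separate:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard-normal quantile function

def d_prime_and_criterion(hit_rate, fa_rate):
    d = z(hit_rate) - z(fa_rate)            # sensitivity
    c = -0.5 * (z(hit_rate) + z(fa_rate))   # c > 0: misses favored (Type II),
    return d, c                             # c < 0: false alarms favored (Type I)

# Invented rates chosen so sensitivity is equal but the bias is opposite,
# mirroring the pattern described above.
for label, hits, fas in [("skeptic", 0.70, 0.10), ("believer", 0.90, 0.30)]:
    d, c = d_prime_and_criterion(hits, fas)
    print(f"{label}: d' = {d:.2f}, criterion c = {c:+.2f}")
```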
If you read physics blogs you know there's an abundance of people who have "theories" for everything from the planetary orbits to the mass of the neutron to the value of the gravitational constant. One of my favorites is the guy who noticed that in SI units G times c is to good precision 2/100. (Before you build a theory on that noise, recall that I told you last time the values of dimensionful parameters are meaningless.) The question then arises: how frequently do scientists see patterns where there are none? And what impact does this cognitive bias have on the research projects we pursue? Did you know that the Higgs VEV is the geometric mean of the Planck mass and the 4th root of the Cosmological Constant? Ever heard of Koide's formula? Anomalous alignments in the CMB? The 1.5 sigma "detection"? It can't be coincidence our universe is "just right" for life. Or can it?
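Two of these coincidences are easy to check (rounded textbook values; the point is how effortlessly such "signals" turn up, not that they mean anything):

```python
# G times c in SI units, the numerological favorite mentioned above.
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
print(f"G * c = {G * c:.4e}  (close to 2/100, but only in SI units)")

# Koide's relation for the charged lepton masses (MeV, rounded):
me, mmu, mtau = 0.511, 105.66, 1776.86
Q = (me + mmu + mtau) / (me**0.5 + mmu**0.5 + mtau**0.5)**2
print(f"Koide Q = {Q:.5f}  (compare 2/3 = {2/3:.5f})")
```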
This then brings us back to my earlier post. (I warned you I would "expand" on the topic!) The question "What is natural?" is a particularly simple and timely example where physicists search for an explanation. It seems though I left those readers confused who didn't follow my advice: If you didn't get what I said, just keep asking why. In the end the explanation is one of intuition, not of scientific derivation. It is possible that the Standard Model is fine-tuned. It's just not satisfactory.

For example, Lubos Motl, a blogger in Pilsen, Czech Republic, believes that naturalness is not an assumption but "tautologically true." As "proof" he offers us that a number is natural when it is likely. What is likely however depends on the probability distribution used. This argument is thus tautological indeed: it merely shifts the question of what is natural from the numbers to what is a natural probability distribution. Unsurprisingly then, Motl has to assume the probability distribution is not based on an equation with "very awkward patterns," and the argument collapses to "you won't get too far from 1 unless special, awkward, unlikely, unusual things appear." Or in other words, things are natural unless they're unnatural. (Calling it Bayesian inference doesn't improve the argument. We're not talking about the probability of a hypothesis; the hypothesis is the probability.) I am mentioning this sad case because it is exactly the kind of faulty argument that my post was warning of. (Motl also seems to find the cosine function more natural than the exponential function. As far as I am concerned the exponential function is very natural. Think otherwise? Well, zis is why I'm saying it's not a scientific argument.)

The other point that some readers misunderstood is my opinion on whether or not asking questions of naturalness is useful. I do think naturalness is a useful guide. The effectiveness of the human brain to describe Nature might be unreasonable (or at least unexplained), but it's definitely well documented. Dimensionless numbers that are much larger or smaller than one undeniably have an itch-factor. I'm not claiming one should ignore this itch. But be aware that this want for explanation is an intuition, call it a brain child. I am not saying thou shalt disregard your intuition. I say thou shalt be clear about what is intuition and what is derivation. Don't misconstrue for a signal what is none. And don't scratch too much.

But more importantly, it is worthwhile to ask what formed our intuitions. On the one hand they are useful. On the other hand we might have evolutionary blind spots when it comes to scientific theories. We might ask the wrong questions. We might be on the wrong path because we believe we have seen a face in random noise, and miss other paths that could lead us forward. When a field has been stuck for decades one should consider the possibility that something is done systematically wrong.

To some extent that possibility has been considered recently. Extreme examples for skeptics in science are proponents of the multiverse, foremost among them Max Tegmark with his Mathematical Universe. The multiverse is possibly the mother of all Type II errors, a complete denial that there is any signal. In Tegmark's universe it's all just math. Tegmark unfortunately fails to notice that it's impossible for us to know that a theory is free of cognitive bias, which he calls "human baggage." (Where is the control group?) Just because we cannot today think of anything better than math to describe Nature doesn't mean there is nothing. Genius and insanity...

As far as the multiversists are concerned, the "principle of mediocrity" has dawned upon them, and now they ask for a probability distribution in the multiverse according to which our own universe is "common." (Otherwise they'd have nothing left to explain. Not the kind of research area you want to work in.) That however is but a modified probabilistic version of the original conundrum: trying to explain why our theories have the features they have. The question why our universe is special is replaced by why our universe is especially unspecial. Same emperor, different clothes. The logical consequence of the multiversial way is a theory like Lee Smolin's Cosmological Natural Selection (see also). It might take string theorists some more decades to notice though. (And then what? It's going to be highly entertaining. Unless of course the main proponents are dead by then.)

Now I'm wondering what would happen if you gave Max Tegmark a dose of levodopa. It would be interesting if a version of Brugger's test were available online and we could test for a correlation between Type I/II errors and sympathy for the multiverse (rather than a belief in ESP). I would like to know how I score. While I am a clear non-believer when it comes to NewScientist articles, I do see patterns in the CMB ;-) [Click here if you don't see what I see]

The title of this post is of course totally biased. I could have replaced physics with science but tend to think physics first.

Conclusion: I was asking: may it be that we are biased to miss clues necessary for progress in physics? I am concluding it is more likely we're jumping on clues that are none.

Purpose: This post is supposed to make you think about what you think about.

Reminder: You're not supposed to comment without first having completely read this post.

1. Ooh, isn't that weird?! Your initials are carved into the CMB! I saw the blue face of the devil first. That's definitely there as well. Isn't science great! :)

2. If modern particle physics is cognitively biased, the biases are subtle. I'd say subtler than the assumptions about geometry (Euclidean) and time (Newtonian) that prevailed before Einstein.

3. Now if we could only look at the CMBs of all of Tegmark's other universes, what would the great message be that the Romans placed there ?-) Of course, thought and perception are necessarily biased by the sense receptors and the brains and physiology each of us is equipped with – and by the history of our experiences, personal and collective. What we try to do, especially with science, is to use experience to gradually separate signal from noise.
And we can do that no better than is allowed by the set of tools we're born with and which we add to as a result of added experience and understanding. Because our 'equipment' varies slightly for genetic and other accidental reasons, so will our biases. But the strategies for enhancing S/N should tend to reduce the net effect of bias on THOSE DIFFERENCES. We may never be able to overcome other 'biases' that relate to our finite shared biology and experiences.

4. Although it matters to the essence of the question, let's put aside the intuitive sense that we "really exist" in a way distinguishing us from modal-realist possible worlds. (IMHO, it's not a mere coincidence between the sense of vivid realness in consciousness and the issue of "this is a real world, dammit!") Consider the technical propriety of claiming the world is "all math." That, to me, implies that a reasonable mathematical model of "what happens" can be made. As far as I am concerned, the collapse problem in QM makes that impossible. We don't really know how to take a spread out, superposed wave function and make it pop into some little space or specific outcome. Furthermore, "real randomness" cannot come from math, which is deterministic! (I mean the outcomes themselves, not cheating by talking about the probabilities as a higher abstraction etc.) Same issue for "flowing time" and maybe more. Some people think they can resolve the perplexity through a very flawed, circular argument that I'm glad looks suspect to Roger Penrose too. Just griping isn't enough, see my post on decoherence at my link. But in any case this is not elegant, smooth mathematics. Many say that renormalization is kind of a scam too. Maybe it's some people's cognitive bias to imagine that the universe must be mathematical, or their cognitive dissonance to fail to accept that the universe really doesn't play along - but the universe really isn't a good "mathematical model." I think that's more important than e.g. how many universes there are.

5. Bee, Just wanted to point out that the study said nothing about pattern recognition. In fact, from what you stated about the duration of time ("too short to use the part of the brain necessary for rational thought") to make the decision, no pattern recognition was involved or affected by the test: patterns take thought to see. So, while I agree that pattern recognition is an evolutionary boon, is involved in creativity, and is present in both scientists and "believers", that says nothing about the quality of the patterns being observed. Bad signal-vs.-noise separation would, obviously, lead to bad patterns (GIGO, anyone?), but even good signal-vs.-noise separation could lead to bad patterns. The study results seem to say that what was affected wasn't the interpreted quality of the signal (which wasn't tested), just whether it *was* a signal or was just noise. The correlation between "believers" and false signal detection might be more related to the GIGO issue rather than an assumed increase in pattern detection ability.

6. "...too short to use the part of the brain necessary for rational thought." I wonder what that phrase means.

7. From AWT perspective, modern physics is dual to philosophy. While philosophers cannot see quantitative relations even at the case, their derivation is quite trivial and straightforward, formally thinking physicists often cannot see qualitative relations between phenomena - even at the case, such intuitive understanding would be quite trivial.
Because we are seeing objects as pinpoint particles from sufficient distance in space-time, Aether theory considers most distant (i.e. "fundamental") reality composed of inertial points, i.e. similar to dense gas, which is forming foamy density fluctuations. Philosophers tend to see the chaotic portion of reality, where energy spreads via longitudinal waves, whereas physicists are looking for "laws", i.e. the density fluctuations themselves, where energy spreads in atemporal way of transversal waves. It means, physicists tend to see gradients and patterns even at the case, when these patterns are of limited scope in space-time, and tend to extrapolate these patterns outside of their applicability scope - as Bee detected correctly. Lubos Motl is a particularly good case to demonstrate such bias, because he is a loud and strictly formally thinking person. Bee is woman and thinking of women is more holistic & plural, which is the reason, why women aren't good in math in general. Nevertheless she's still biased by her profession, too. I don't think, any real physicist can detect bias of his profession exactly, just because (s)he is an immanent part of it.

8. For what it's worth, I saw nothing that I could identify in the CMB. The study you cite is cute, but as with most psychological studies, it doesn't pay to try to milk the data for more than is actually there. Thinking you detect a signal and being willing to act on a signal are not the same thing, although in this simplistic, no-risk situation, they are made to appear to be. And science isn't just about how many times you say 'ooh!' in response to what you think is a signal. Science is very much about having that 'signal' validated by others using independent means. I'm really not sure who or what you are trying to jab with this post, other than the poke at ESP. And I'm seconding Austin with respect to pattern recognition. :)

9. /*..Extreme examples for skeptics in science are proponents of the multiverse..*/ From local perspective of CMB Universe appears like fractal foam of density fluctuations, where positive curvature is nearly balanced by this negative one. The energy/information is spreading through this foam in circles or loops similar to Mobius strip due to the dispersion, and a subtle portion of every transversal wave is returning back to the observer in the form of subtle gravitational, i.e. longitudinal waves. We should realize, there is absolutely no metaphysics in such perspective, as it's all just a consequence of emergent geometry. But this dispersion results in various supersymmetry phenomena, where strictly formally thinking people are often adhering to vague concepts and vice-versa. For example, many philosophers are obsessed by searching for universal hidden law of Nature or simply God, which drives everything. Whereas many formally thinking people are proposing multiverse concept often. We can find many examples of supersymmetry in behavior of dogmatic people, as they're often taking opinions, which are in direct contradiction to their behavior. We are often talking about inconsistency in thinking in this connection, but it's just a manifestation of dual nature of information spreading inside of random systems.

10. Supersymmetry in thinking could be perceived as a sort of mental correction of biased perceiving of reality, although in unconscious, i.e. intuitive way. But there is a dual result of dispersion, which leads into mental singularities, i.e. black holes in causal space-time.
The strictly formally thinking people often tend to follow not just vague and inconsistent opinions, but they're often of "too consistent" opinions, which leads them into dogmatic, self-confirmatory thinking. The picture of energy spreading through metamaterial foam illustrates this duality in thinking well: a portion of energy always gets dispersed into the neighborhood, another portion of energy always ends in a singularity. Unbiasedly thinking people never get either into schematic, fundamentalistic thinking, or into apparently logically inconsistent opinions, which contradict their behavior. Their way of thinking is atemporal, which means it follows the "photon sphere" of causal space-time. From this perspective, the people dedicated deeply to their ideas, like Hitler or Lenin, weren't evils by their nature, they were just "too consequential" in their thinking about "socially righteous" society. The most dangerous people aren't opportunists, but blindly thinking fanatics. The purpose of such rationalization isn't to excuse their behavior - but to understand its emergence and to avoid it better in future. Their neural wave packets spread in transversal waves preferably, which makes them often ingenious in logical, consequential way of thinking. But at the moment, when energy density of society goes down during economic or social crisis, society is behaving like boson condensate or vacuum, where longitudinal waves are weak - and such schematically thinking fanatics can become quite influential.

11. /*..what is a natural from the numbers to what is a natural probability distribution..*/ This is a good point, but in AWT the most natural is the probability distribution in ideal dense Boltzmann gas. I don't know, how such probability appears and if it could be replaced by Boltzmann distribution - but it could be simulated by particle collisions (i.e. causal events in space-time) inside of very dense gas, which makes it predictable and testable.

12. /*.. the effectiveness of the human brain to describe Nature might be unreasonable (or at least unexplained..*/ It's because it's a product of long-term adaptation: the universe is a fractal foam, so that human brain maintains a fractal foam of solitons to predict its behavior as well as possible. Therefore both the character and the wavelength of brain waves correspond to the CMB wavelength (or diameter of black hole model of observable Universe). From perspective of AWT or Boltzmann brain, Universe appears like random clouds or Perlin noise. A very subtle portion of these fluctuations would interact with the rest of noise in atemporal way, i.e. via transversal waves preferably. This makes anthropic principle a tautology: deep sea sharks are so perfectly adapted to the bottom of oceans from exsintric perspective, they could perceive their environment as perfectly adapted to sharks from insintric perspective of these sharks. These two perspectives are virtually indistinguishable from each other from sufficiently general perspective. In CMB noise we can see the Universe both from inside via microwave photons and from outside via gravitational waves or gravitons. We can talk about black hole geometry in this connection. The effectiveness of the human brain to describe Nature might be unreasonable (or at least unexplained) - but basically it's just a consequence of energy spreading in chaotic particle environment, which has its analogies even at the water surface.
13. The reason, why contemporary physics cannot get such trivial connections, is its adherence to strictly causal, i.e. insintric perspective. Its blind refusal of Aether concept is rather a consequence than the reason of this biased stance. We know, mainstream physics has developed into duality of general relativity and quantum mechanics, but its general way of thinking still remains strictly causal, i.e. relativistic by its very nature. Their adherence to formal models just deepens such bias (many things, which cannot be derived can still be simulated by particle models, for example). For this reason, physicists cannot imagine the things from their (slightly) more general exsintric perspective due to their adherence to (misunderstood) Popper's methodology, because the exsintric perspective is unavailable for experimental validation by its very definition - so it's virtually unfalsifiable from this perspective. We cannot travel outside of our Universe to make sure, how it appears - which makes it impossible for physicists to think about it from a more general perspective.

14. Low Math, Meekly Interacting 10:52 PM, January 06, 2010

Of course we're prone to bias. That's why science works better than faith or philosophy: Nature doesn't care what we want. I don't think bias is bad per se, though. It's difficult to make progress without a preconceived notion of what the goal might be. Even if that notion is completely wrong, at least picking an angle of attack and following it will eventually lead one to recognize their error and readjust, hopefully. Without some bias, we flail around at random. It's when we can't temper our biases with observation and experiment that science really runs into trouble. Dopamine is implicated in motivation, drive, the reward mechanism we inherited from our hunter-gatherer ancestors. It's good to love the chase; it keeps us fed when we're hungry, even if we can't see the food yet. Mice deprived of dopamine in certain brain regions literally starve to death for want of any desire to get up and eat. And no genius accomplishes anything without drive. So let there be bias. But let there be evidence, too, and a hunger to find it.

15. Hi Bee, "This post is supposed to make you think about what you think about." Well, gauging from the responses thus far, all it's managed is to have many remind others as to how they are supposed to think, rather than give reason as to why. To me that simply serves to demonstrate that there are more people who are convinced the world should be as they think it should be, as opposed to those concerned with how best to learn to discover the way it presents itself as being. So these wonderings about how one is best able to judge signal from noise are just the modern way of asking how one is able to find what is truth as opposed to what are merely the shadows. That would have the sceptics on dopamine be like the freed prisoner when first returned to the darkness of Plato's cave to be asked again to measure the shadows, while the believers on dopamine would be how that same prisoner found himself when first freed to the upper world. So what then would Plato have said is the best way to judge signal from noise? To do this one has to introspect themselves in relation to the world, before one can excogitate about it, rather than consider only what one can imagine is how the world necessarily must be, for it then is only a projection of self and thus merely a shadow.
So all the talk of the effect of observation on reality, or of our world being the way it is as to accommodate our existence, seems to be just what those prisoners in Plato's cave must have thought, and for the same reason. So I apologise if this seems nothing more than philosophy, yet is that not what we are asked to consider here, as what constitutes being good natural philosophy?

-Plato- Allegory of the Cave

16. It's very hard to guess what "bias" is supposed to mean in these contexts. Our brains like to keep things simple, to find economical descriptions of reality. With the help of math, though, those descriptions become florid indeed. Whatever the biases of the human brain, we know that (some) humans are damn good at sniffing out the laws of nature, because they have found so many of them. Did our prejudices about space and time retard relativity, or our prejudices about causality retard quantum mechanics? Maybe a little, but not for long. Neither could plausibly have been discovered 70 years earlier than they were. Engineers are very familiar with the problem of detecting a signal in noise. The trick is to steer an optimal route between missed signals and false alarms. Your experiment suggests that dopamine moves the needle in favor of higher tolerance for false alarms than missed signals.

17. Testable predictions and experimental testing are the only known way to verify which patterns/ideas are useful and which are "robust" and "compelling" but not useful in understanding nature. One in a million can reliably use intuition as a guide in science.

18. CIP: It means 140 msec. The paper didn't indeed say why 140 msec, but I guess the reason is roughly what I wrote. If you have time to actually "read" rather than "recognize" the word, you'd just test for illiteracy. Best,

19. This is why I read this blog. Happy New Year, Bee!

20. Austin, Anonymous: With "pattern recognition" I was simply referring to finding the face/word in random noise. You seem to refer to pattern recognition as patterns in a time series instead; sorry, I should have been clearer on that. However, you might find the introduction of this paper interesting, which more generally is about the issue of mistakenly assigning meaning to the meaningless rspt. causal connections where there are none. It's very readable. This paper (it seems to be an introduction to a special issue) also mentions the following: "The meaningfulness of a coincidence is in the brain of the beholder, and while ‘‘meaningless coincidences’’ do not invite explanatory elaborations, those considered meaningful have often lured intelligent people into a search for underlying rules and laws (Kammerer, 1919, for a case study)." Seems like there hasn't been much research on that though. Best,

21. Dear Arun: I wasn't so much thinking about particle physics (except possibly for the 1.5 sigma detections) but more about the attempt to go beyond that. Best,

22. Hi Len, I agree with what you say. However, it has been shown in other contexts that knowing about a bias can serve as a corrective instance. I.e., just telling people to be rational has the effect of them indeed being more rational. Best,

23. Neil: There's a whole field of mathematics, called stochastics, dedicated to randomness. It deals with variables that have no certain value; that's the whole point. I thus don't know in which way you think "math is deterministic" (deterministic is a statement about a time evolution). In any case, I believe Tegmark favors the many worlds interpretation, so no collapse. Best,
24. /* which way you think "math is deterministic"..*/ Math is atemporal, which basically means, what you get is always what you put in - and the result of derivation doesn't change with time. Which is good for theorists - but it makes math a nonrealistic representation of dynamical reality.

25. Zephir: That a derivation is atemporal does not prevent maths from describing something as a function of a parameter rspt. as a function of coordinates on a manifold. Best,

26. I know, but this function is still fixed in time. Instead of this, our reality is more close to dynamic particle simulation. We should listen to great men of fictional history and their moms.

27. Zephir: That a function (rather than its values) is "fixed in time" is an ill-defined statement. The function is a map from one space to another space. To speak of something like constancy (being "fixed") with a parameter you first need to explain what you mean with that. Best,

28. The only way you can deal with bias is to find a good reason for every assertion you make and to provide a consistent, well defined theoretical explanation based on the evidence and on the accumulated knowledge in your area. That's the best thing you can do I guess, and your assertion will be debated. The diversity and pluralism of educated opinions is the best chance we have to filter out any bias. The fact that you've raised the question of bias with your post is living proof of that.

29. Hi Bee, Just as a straightforward question from a layperson to a professional researcher in respect to what underlies this post: that is to ask if you consider physics turning ever closer to becoming the study of natural phenomena by those influenced primarily by their beliefs, rather than by their reason as grounded in doubt? As a follow up question, if you then consider this to be true, what measures would you find need to be taken to correct this, as to have physics better serve its intended purpose as it relates to discovering how the world works as it does?

30. Giotis: Yes, that's why I've raised the question. One should however distinguish between cognitive and social bias. Diversity and pluralism of opinions might work well to counteract social bias, but to understand and address cognitive bias one also needs to know better what that bias might look like. Plurality might not do. Best,

31. Hi Phil, Here as in most aspects of life it's a matter of balance. Neither doubt nor belief alone will do. I don't know if there's a trend towards more belief today than at other times in the history of science, and I wouldn't know how to quantify that anyway. What I do see however is a certain sloppiness in argumentation, possibly based on the last century's successes, and a widespread self-confidence that one "knows" (rather than successfully explains), which I find very unhealthy. I personally keep it with Socrates: "The only real wisdom is knowing you know nothing." This is why more often than not my writing comes in the form of a question rather than an answer. Not sure that answers your question, but for what I think should be done: keep asking. Best,

32. Hi Phil, Regarding your earlier comment, yes, one could say some introspection every now and then could not harm. Maybe I'm just nostalgic, but science has had a long tradition of careful thought, discussion and argumentation that I feel today is very insufficiently communicated and lived. Best,

33. This comment has been removed by the author.
34. Hi Bee, Well how could I argue with you pointing to Socrates for inspiration, as his is the seed of this aspect of doubt as it relates to science? The only thing I would add is that Plato only expanded as to remind we are all prisoners and are better to be constantly reminded that we are; which of course is what you propose as the only remedy for bias. So would you not agree that the best sages of science usually are the ones that hold fast to this vision, and that how they came to their conclusions are perhaps the better lessons, rather than what they actually have us come to know.

-Isaac Newton- Principia Mathematica

Oh yes, this has me remember how I was surprised that Stefan a few days past did not with a post remind us of the birthday of this important sage of science :-)

35. Hi Phil, Well, reporting on dead scientists' birthdays gets somewhat dull after a while. For what I am concerned, what makes a good scientist and what doesn't is whether his or her work is successful. This might to some extent be a matter of luck or being at the right place at the right time. But there are certainly things one can do to enhance the likelihood of success, a good education ahead of all. What other traits are useful to success depends on the research you're conducting, so your question is far too general for an all-encompassing reply. We previously discussed the four stages of science that Shneider suggested, and while I have some reservations on the details of his paper I think he's making a good point there. The trait you mention, and what I was also concerned with, I'd think is instrumental for what Shneider calls 1st stage science. Best,

36. Aye, yai, yai. Once again Bee, you have been singled out for criticism at The Reference Frame, in particular this blarticle. Click here for the review in question by Lubos. If it's not too much trouble, a review by you of Lubos' review would be appreciated. Based on our previous discussion it will not be published there, or more to the point you will not attempt to do so based on previous experience with TRF, therefore we humbly beseech thee to respond here under the reply section of this very blarticle that inspired Lubos to generate so many, many, very, very many words. (And for an added bonus, he trashes Sean Carroll's new book as well) Thanks in advance.

37. Okay, what about the Ulam's Spiral? Or, what about Pascal's triangle? Have you noticed any patterns? This goes to the question then of what is invented versus what is discovered? As to Wmap, don't you see this? :)

38. Yes but the fact that you've raised the question of possible bias proves that humans (due to the pluralism of opinions) have the capability to take the factor of cognitive bias under consideration and maybe even attempt to take alternative roads due to that. You are part of the human race, aren't you? So this proves my point :-)

39. Steven: Lubos' "criticism" is as usual a big joke. It consists of claiming I said things I didn't say and then making fun of them. It's terribly dumb and in its repetition also unoriginal. Just some examples:

- I meanwhile explicitly stated two times that I do not think arguments or naturalness "have no room in physics" as Lubos claims. He is either indeed unable to grasp the simplest sentences I write or he pretends to be. In the above post I wrote "I do think naturalness is a useful guide." How can one possibly misunderstand this sentence if one isn't illiterate or braindead?

- Lubos' summary of my summary of Brugger's paper is extremely vague and misleading.
E.g., he writes "Skeptics converged closer to believers when they were "treated" by levodopa" but one doesn't really know what converged towards what. As I said, as far as the bias is concerned they both converge towards the mean. This also means they converged to each other, but it isn't the same.

- Lubos says that "The biases are the reasons why the people are overly believing or why they excessively deny what can be seen. Sabine Hossenfelder doesn't like this obvious explanation - that the author of the paper has offered, too." First, in fact, the authors were very accurate in their statements. What their research has shown is a correlation, not a causation. Second, I certainly haven't "denied" this possible explanation. That this is not only a correlation but also a causation is exactly why I have asked whether physics is cognitively biased, so what's his point?

And so on and so forth. It is really too tiring and entirely fruitless to comment on all his mistakes. Note also that he had nothing to say to my criticism of his earlier article. There was a time when I was thinking I should tell him when he makes a mistake, but I had to notice that he is not even remotely interested in having a constructive exchange. He simply doesn't like me and the essence of his writing is to invent reasons why I'm dumb and hope others are stupid enough to believe him. It's a behavior not appropriate for a decent scientist. Best,

40. Hi Giotis, Yes, sure, I agree that we should be able to address and understand cognitive bias in science and that this starts with awareness that is easier to be found in a body that is pluralistic. What I was saying is that relying on plurality might bring up the question but not be the solution. (Much like brainstorming might bring up ideas but not their realization). Btw: The package is on the way. Please send us a short note when it arrives just so we know it didn't get lost.

41. Typo: Should have been "arguments of naturalness" not "arguments or naturalness"

42. Bee, what I mean by "deterministic" math is that the math process can't actually *produce* the random results. Just saying "this variable has no specific value" etc. is "cheating" (in the sense philosophers use it), because you have to "put in the values by hand." Such math either produces "results" which are the probability distributions - not actual sequences of results - or in actual application, the user "cheats" by using some outside source of randomness or pseudo-randomness like digits of roots. (Such sequences are themselves of course wholly determined by the process - they just have the right mix that is not predictable to anyone not knowing what they came from. In that sense, they merely appear "random.") I think most philosophers of the foundations of mathematics would agree with me. As for MWI, I still ask: why doesn't the initial beam splitter of an MZI split the wave into two worlds, thus preventing the interference at all?

43. Hi Bee, I actually read a book by Julian Baggini called 'A Very Short Introduction to Atheism.' Baggini writes about evidence vs. the supernatural and about naturalism. Where do we find evidence? We all know: only in an experiment. But as you said, it must be a good experiment. That means, as you too said, it must be based on a 'good' initialization. For example, for dark matter and dark energy a detector must be found. The correct detector is needed to be found. What is the correct detector in the case of dark matter or dark energy? Best, Kay
44. Neil: I don't know what you mean by "math process producing a result." Stochastic processes produce results. The results just are probabilistic, which is the whole point. There is no "result" beyond that. I'm not "putting in values by hand," the only information there is is the distribution of the values. You are stuck on the quite common idea that the result actually "has" a value, and then you don't see how math gives you this value. Best,

45. Sure Bee, I'll do that. Thanks.

46. The CMB is a remarkably coincident map of the Earth: Europe, Asia, and Africa to the right; the Americas to the left. Is physical theory bent by pareidolia? Physics is obsessed with symmetries: S(U(2)×U(3)) or U(1)×SU(2)×SU(3) for the Standard Model, then SUSY and SUGRA. String theory is born of fundamental symmetries, then whacked to lower symmetries toward observables (never quite arriving). Umpolung! Remove symmetries and test (not talk) physics for flaws. Chemistry (pharma!) is explicit: Does the vacuum differentially interact with massed chirality? pdf pp. 25-27, calculation of the chiral case

1) Two solid single crystal spheres of quartz in enantiomorphic space groups P3(1)21 and P3(2)21 are plated with superconductor, cooled, and Meissner effect levitated in hard vacuum behind the usual shieldings. If they spontaneously reproducibly spin in opposite directions, there's your vacuum background.

2) Teleparallel gravitation in Weitzenböck space specifically allows Equivalence Principle violation by opposite parity mass distributions, falsifying metric gravitation in pseudo-Riemannian space. A parity Eotvos experiment is trivial to perform, again using single crystals of space groups P3(1)21 and P3(2)21 quartz. Glycine gamma-polymorph in enantiomorphic space groups P3(1) and P3(2) is a lower symmetry case and is charge-polarized, with 1.6 times the atom packing density of quartz.

Theoretic grandstanding has produced nothing tangible after 25 years of celebrated pontification. Gravitation theories are geometries arising from postulated "beautiful" symmetries. They are vulnerable to geometric falsification (e.g., Euclid vs. elliptic and hyperbolic triangles). Somebody should look.

47. Bee, what I mean is, the mathematical machinery can't produce the actual random results directly. That means, a sequence like 4,1,3,3,0,6, ... or something else instead. It just treats the randomness as an abstraction. If you can find a way for the *operation* to produce an actual sequence of random numbers or etc., please explain and show the results. REM the same operation must produce different sequences other times it is "run" or it isn't really random. (I don't think you can, since any known "operation" will produce the same result each time - again, if you don't "cheat" by pulling results from outside. Hence, taking sqrt 2 provides a specific sequence, and it will every time you do it. Even if you said, it can be either negative or positive if you consider x^2 = 2, *you* are still going to decide which to show each time. Otherwise, it is just the set of solutions. In a random variable, it represents a class of outputs - that is not the same as having a mechanism to produce varying results each time. Don't you think, if a math process could do that, chip mfrs would use that instead of either seeded pseudo-random generators, or an actual physical process?
If you are thinking in terms of practical use, all I can say is: I mean the logical definition that a worker in FOM would use, and I think they agree with me with few exceptions. Please think it through carefully, tx.

48. Neil: I understand what you're saying but you don't understand what I'm saying. You are implicitly assuming reality "is" something more than a process that is (to some extent) "really" probabilistic. You're thinking it instead "really is" the sequence, and the sequence is not the random variable. That is your point, but it is a circular argument: you think reality can't be probabilistic because a probabilistic distribution is not real. Define "not real," see the problem? Best,

49. Bee, Giotis: Specifically with regard to QFT - how well-defined does one have to be? Are we well-defined enough?

50. One can never be well-defined enough. The pitfalls in physics as in economics and biology are the hidden assumptions people forget about because they are either "obvious" (cognitive bias) or "everybody makes them" (social bias). Best,

51. Bee, I mean very carefully what I said about the specific point I made: that *math* can't produce such "really random" results, but only describe them in the abstract. But if we were talking at cross purposes, then we could both be right about our separate points. As for yours: I am assuming nothing about the universe or what it has to be like. But if we appreciate the first point above, and then look at the universe: we find "random" results supposedly coming out. The universe does produce actual sequences and events, not (unless you dodge via MWI) a mere abstraction of a space of probable outcomes. If actual outcomes, sample sequences which are the true 'data' from experiments, are genuinely "random" in the manner I described, then: (1) The universe produces "random" sequences upon demand. (2) They can't - as particulars - be produced by a mathematical process. (3) The universe is therefore not "just math", and MUH is invalid. That is not a circular argument. It is a valid course of deduction from a starting assumption (about math, supported by the consensus of the FOM community AFAIK), which is compared to the apparent behavior of the universe, with a disjoint deduced thereby. As for what "real" means, who knows for sure? But we do know how math works, we know how nature works, and it cannot IMHO be entirely the same.

52. Neil: But I said in the very beginning it's a many worlds picture. The MUH doesn't only mean all that you think is "real" is mathematics, but all that is math is real. Best,

53. Neil: You're right, I didn't say that, I just thought I said it. Sorry for the misunderstanding. Best,

54. Well, it depends. The simplest example is the divergence of the vacuum energy. You just subtract it in QFT, saying that only energy differences matter if gravity is not considered and QFT is not a complete theory anyway. Are you satisfied with the explanation? Some people are not very happy with all these divergences and their handling with the renormalization procedure. Also, perturbative QFT misses a number of phenomena. So somebody could say that it is not well defined, or is well defined in a certain regime under certain preconditions. The main issue though is that you'll always find people who challenge the existing knowledge, ask questions and doubt the truth of the given explanations if they are not well defined (as Bee does in her post).
That's why I talked about pluralism, diversity of opinions, open dialogue and open/free access to knowledge, as a remedy even for the cognitive bias. And I'm not talking about physics or science only, but generally. 55. Neil: Maybe what I said becomes clearer this way. Your problem is that stochastic math doesn't offer a "process to produce" a sequence. Since the sequence is what you believe is real, reality can't be all described by math. What I'm asking is: how do you know that only the sequence is "real" and not all the possible sequences the random variable can produce? I'm saying it's circular because you're explaining the maths isn't "real" because "reality" isn't the math, and I'm asking how you know the latter. (Besides, just to be clear on this, I am neither a fan of MUH nor MWI.) Best, 56. Neil, have you read some of the work of Gregory Chaitin, who believes randomness lies at the heart of mathematics? Good article here: "My Omega number possesses infinite complexity and therefore cannot be explained by any finite mathematical theory. This shows that in a sense there is randomness in pure mathematics." I think as a clear example, the distribution of prime numbers appears to be fundamentally random - it cannot be predicted from any algorithm. But the positions are clearly defined in mathematics. So that's fundamental randomness right at the heart of maths. 57. While I think it is very important to examine the question of bias, I also think it is a very sticky wicket. Observing signals among noise is an extremely individualistic thing. It is a fact that some gifted individuals can pick signals out of the noise but can't explain how they do it. Or they explain it and it isn't rational to the rest of us. For instance, the brains of many idiot savants (and also some normally functioning individuals, which is much rarer) can calculate numbers in their head using shapes and colors that they visualize. Others can memorize entire phonebooks using similar methods. Visualization cues are often key to these abilities. To most of us it would seem like very good intuition because most brains don't work like that, but I think that is a mistake. I certainly think there are rare individuals who can do similar things in other fields of study. But scientists are often too biased in their reductionist philosophy to accept it. They assume that because a particular individual's brain doesn't work the way theirs does, any explanation for how the "calculation" was done is that it was just good intuition. That conclusion itself is an overly reductionist conclusion. 58. Whew, what a metaphysical morass. Well, about what MUH people think is true: it is rather clear, they say that all mathematical descriptions are equally "real" in the way our world is, but furthermore there is no other way to be "real" (i.e., modal realism). So there isn't any valid: "hey, we are in a materially real world, but that conceptual alteration where things are a little different 'does not really exist' except as an unactualized abstraction." MT et al. would say there is no distinction between unactualized and actualized (as a "material" world) abstractions. Poor Madonna, was she wrong? But I don't agree anyway. BTW, MUH doesn't really prove MWI unless you can connect all the histories to get retrodiction of the quantum probabilities. Bee, Andrew: Now, about math: the stochastic variable represents the set of outcomes. Why do I know only the particular outcome is real? Well, in an actual experiment that's what you get.
How can I make that more clear? Of course the math is "real", but so is the outcome in a real universe. There is a mismatch. Please don't go around the issue of which is real. Both are real in their own ways, they just can't be equivalent. You can't construct a mathematical engine to produce such output. I don't think Chaitin's number can produce *different* results each time one uses the process. Like I said, I want to see such output produced. It is not the consensus to disbelieve in the deterministic nature of math. As for primes, that is a common misunderstanding regarding pseudo-random sequences. The sequence of primes is *fixed*, that is what matters, regardless of what it looks like. If you calculate the primes, you get the *same* sequence each time, OK?! But a quantum process produces one sequence one time, another sequence another (in "our world" at least, and let them prove anything more). Folks, I shouldn't have to slog through all this. Check some texts on the foundations of math, I doubt many disagree with me. Bee - I can't get email notify any more. 59. Hi Neil, yes, but that's not the definition of "random" - that it "produces a different number each time". If I produce an algorithm that produces "1", "2", "3" etc. then it is producing "a different number each time" but that is clearly not random. No, the definition of a random sequence is one which cannot be algorithmically compressed to something simpler (e.g., the sequence 1,2,3 can clearly be compressed down to a much simpler algorithm). I can assure you, the distribution of the primes (or the decimals of pi, for example) is truly random in that it cannot be further compressed. Random quantum behaviour would be described by such a truly random sequence in that the behaviour cannot be compressed to a simpler algorithm (i.e., a simpler deterministic algorithm). Neil: "Folks, I shouldn't have to slog through all this. Check some texts on the foundations of math, I doubt many disagree with me." I actually think most would disagree, Neil. See more on algorithmically random sequence 60. Bee: Of course we are not unbiased. The brain does Bayesian inference (whether consciously or not) and Bayesian inference depends in part on a prior estimate of the probability distribution over the possible observed data. This prior distribution unavoidably introduces bias into cognition. Since this prior distribution is encoded in one's current brain state at the moment one begins to process a newly observed datum, no two people will bring the exact same bias to any given inference. This is as true of low-level perceptual inference of the kind studied by Brugger as it is of high-level abstract inductive inference of the kind that gives rise to scientific theories. Equally unavoidably, we are predisposed by the structures of our brains to describe the world in terms of certain archetypical symbols, which you may think of as eigenvectors of the brain state. The structure of each brain is determined by a complex interplay between genetic factors and the entire history of that brain from the moment of conception. Thus, there are bound to be species-wide biases as well as cultural and individual predispositions in the way we describe what we see, the questions we can ask about it, and the answers we are able to accept. The only remedy for such biases is the scientific method, practiced with complete intellectual honesty and total disregard for accepted doctrine and dogma -- to the extent that this is humanly possible.
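The prior-dependence just described can be made concrete with a small sketch (Python; all priors and data invented for illustration): two observers update on identical data but start from different Beta priors, and their posteriors differ - the "bias" the prior carries into the inference.

heads, tails = 7, 3  # the shared observed data

# A Beta(a, b) prior updated on binomial data gives a Beta(a + heads, b + tails) posterior.
for name, a, b in [("flat prior", 1, 1), ("strong prior", 20, 5)]:
    post_mean = (a + heads) / (a + b + heads + tails)
    print(name, round(post_mean, 3))

# Same data, different conclusions: the prior does real work, which is the
# sense in which Bayesian cognition is unavoidably biased.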
Unfortunately, in recent times, this process is becoming increasingly hobbled by a number of destructive trends. Firstly, we have allowed indoctrination to become the primary goal of our education system. Where once it was considered self-evident that the purpose of education is "to change an empty mind into an open one," educators now claim explicitly that the most important role of education is "to inculcate the right attitude towards society." Secondly, the unavoidable imperfections of the peer review process have been co-opted by political special-interest groups as well as the personal fiefdoms and in-groups of influential scientists, so the very process that is supposed to guard against bias is now perpetuating it. This can be seen in every modern science; specific recent examples include psychology, sociology, anthropology, archeology, climatology, physics and mathematics. Thirdly, widespread misunderstanding of the content of quantum theory has led many to doubt that "objective reality" even exists. This, in turn, is used by so-called "philosophers" of the post-modern persuasion to call the very idea of "rational thought" into question. Well, if objective reality and rational thought are disallowed, then only blind superstition and ideological conformity are left. Is it any wonder, then, that progress in science (as distinct from technology) is grinding to a halt? 61. Bee, you ask exactly the right question. If I may paraphrase it thus: "What cognitive or social biases have (become embedded in and) impeded Science from developing a truly compelling and comprehensive Quantum Gravity unification cosmology & philosophy?" (say provisionally, cQGc). In a soon-to-be-released monograph, 3 such impediments and biases with far-ranging theoretical consequences are identified. In appreciation of this and many of your previous blog postings, and since you ask, I feel compelled to answer your question in some detail with this sneak preview of some of the introduction from that monograph, edited only slightly to accommodate the context of this post. "... however our senses, which can fall victim to optical illusions and other cognitive biases, only generate the rawest form of data for Science, which applies to these measurement, rigor and axiomatic philosophical principles to weed out such biases and generate the positivistic consensus reality Science seeks to fully describe and explain. Despite this ideal, a great many scientists themselves (and their theories) still fall victim to the incorrect cognitive bias that our consensus reality is continuous rather than being discrete and positivistic, and there is widespread subscription to the mistaken idea that Science is uncovering reality as it 'really is'. This is to mistake the map for the territory it depicts. In a May 2009 essay for Physics Today, David Mermin reminds us of the importance of not falling victim to this mistaken thinking. This failure in many to respect the positivistic rudder in Science has been with us since the days of the Copenhagen School and the Bohr/Einstein debates and is the first of 3 major impediments to discovering a cQGc.
The deep divide and raging debate (indeed crisis) which philosophically divides the theoretical physics community regarding the invalidity of mistaken notions of ManyWorlds, MultiVerses and Anthropic rationalizations is not just about the absence of some sort of Popperian critical tests of such models but rather, their invalidity, which so many fail to accept, is based on the blatant violation of intrinsic QM positivism these ideas embody. .../ cont. in Pt. 2 62. ... Part 2 The 2nd impediment has been whimsical or careless nomenclature and/or careless use of language, which has resulted in sloppy philosophizing and the embedding into our inquiries of certain misapprehensions regarding precisely what it is we seek to explain. So for example, none of the observational evidence in support of the big bang in any way supports the assertion that this was the birth of the Universe; rather, all we can infer is that the big bang was the 'birth' or phase change of SpaceTime, a subset of Universe, from a state of near SpaceTime-lessness to what we observe today. Philosophically, how can the Universe in its totality go from a timeless state of (presumably) perfect stasis (or non-existence) to a timeful state as we observe today? Note how this simple clarification immediately resolves 2 deep questions: creation ex nihilo and "Why is there something rather than nothing?" The latter is a positivistic non-sequitur, as there is no evidence whatsoever that the Universe was ever in a state of non-existence, and Science, being positivistic, need not explain those things which never occur, only those which have occurred or are allowed to occur. The 3rd impediment has been misuse or runaway abuse of Newton's Hypothetico-Deductive (HD) method where, for example, we begin with, say, an Inflation Conjecture to HD-resolve certain issues, but before very long we have Eternal Inflation and then we have baby universes popping off everywhere, in abject violation of positivism, not to mention SpaceTime Invariance. Similarly, the HD proposal of a string as the fundamental entity of our consensus reality, to better interpret a dataset formerly known as the scattering matrix, which then becomes String Theory, which then becomes Superstring Theory, which then becomes matrix theory, which then becomes M-Theory, perfectly forgets that searching for a fundamental object of our consensus reality is like looking for the most fundamental word in the dictionary. Our consensus reality is intrinsically relational, and this fact is the lesson we should take from Goedel's Incompleteness Theorem (GI). So, the mistake here is to take or overly rely on Conjectures as established results and build further HD conjectures on top as also established. In passing, I would further observe that a string can only support a vibratory state (or wave mechanics) by remembering that such a string must have tension, a property which seems to me is conceptually lost when one connects the ends of the string to inadmissibly conjure up the first loop to force-fit the consequences of one's initial, flawed, HD conjecture. The invocation of convenient quantum fluctuations to force-fit Inflation in the face of CMB anisotropies is yet another example of such erroneous reasoning. Science is the formal system which can never succeed, in principle, in bootstrapping itself to a generally covariant absolute statement of Truth like "This is the Universe as it really is".
(The URL under my name for this comment will take you to a talk which strongly suggests that even Stephen Hawking subscribes to the concept of a reality as it 'really is'.) ... / cont. Pt. 3 63. ... Part 3 So, even a derivation of a cQGc from first principles, which would be a proof in any other context, remains undecidably True, while at the same time we will know it to be provisionally true (lower case t) because of its comprehensiveness and the absence of a counter-example. GI is actually the only legitimate anthropic principle we may recognize in Science and arises from the fact that all our formal systems (languages, Science, etc.) are all arbitrary conventional human inventions which can only self-consistently describe the consensus reality we positivistically observe and are able to measure or infer, consistent with our nature as an inextricable subsystem of that consensus reality. My personal mnemonic for GI is "More truth than Proof". So Bee, I hope this goes some way to answering your question, and while I feel sure none of it comes as any surprise to you (though other aspects of the monograph might when you someday read it), I hope this response helps and accurately clarifies some things for your readership in answer to your question. Thanks again, 64. Bee, Neil, and Andrew: Regarding your ongoing exchange, I would like to emphasise that there is no point in trying to distinguish between "truly random" and "pseudo-random." Any process which takes place in finite time can only depend on a finite amount of information, and it takes infinite information to distinguish between "truly random" and "pseudo-random." Chaitin's criterion regarding where to stop and declare that we are "close enough to truly random for practical purposes" is as good as any other -- perhaps better than most. In addition, probability distributions merely enumerate possibilities. Therefore, the distributions that follow from our mathematical models apply only to the models, and not to the real world. We may, for example, make an idealized model of coin tossing, which is governed by a binomial distribution. But that distribution only enumerates the possibilities inherent in the idealized model. In the real world, the odds are not 50/50; the dynamics depends very sensitively on initial conditions, and there is no limit to the number of factors we may choose to take into account or neglect as extraneous. Thus our choice of a probability distribution describes our state of knowledge about coin tossing. In respect of phenomena in the real world, we may choose to treat them analytically as though they are governed by some particular probability distribution. But in so doing, it would be a mistake to ascribe objective reality to that distribution. The "true distribution" is as unknowable as the "true value" of a measurement. The best we can do is to approximate these things with varying degrees of accuracy. Hopefully, our accuracy improves as we learn more about the real world. Of course, it is also a mistake to claim that these values and distributions don't exist just because they are unknowable. The very fact that these things can be repeatably approximated shows that they are indeed objectively real. Of that we can be certain, despite being equally certain that we can never know them with perfect precision. 65. Canadian_Phil: I would remind you that reality needs no help from you or your putative "consensus" to be what it is.
If our consensus is not converging on an ever-more-accurate approximation of an objective reality that exists independent of any of us, then we are wasting our time with solipsistic nonsense. 66. Well, things are made more difficult by various senses of "random" that are used in various contexts. Yes, there is such a thing as a 'random' sequence per se. BTW, it should have been clear that I meant a process that produces a different sequence of numbers each time it is run. In other words, its *action* is random. A mathematical process cannot do that. So even if there are other ways to be "random", my essential point is correct: the universe cannot be "made from math" because math is deterministic. That is the key point, "deterministic", more than the precise definition of "randomness", which also gets hung up on pseudo-randomness etc. The digits of pi may be "random" in the sense of appearances, but their order is determined by the definition, and it will be the same time after time. That makes those digits "predictable." That is equivalent to the physical point: determinism v. (claimed) inherent unpredictability. I also still maintain that the most cogent thinkers in foundations of mathematics agree with me in the context I make. 67. (Remember also that in the sense used to claim that certain phenomena are "truly random", that is meant to imply that there is nothing we can know that would show us reliably what would happen next. Sure, if I just look at a sequence of digits it may "appear" random and pass various tests, as the definitions admit. But once I found out that they were generated by e.g. the deterministic mathematics behind deriving a root, then I would know what was coming next etc. Andrew - since you are interested in QM issues, pls. take a look at my own blog post on decoherence. A bit clunky now, but it explains how we could experimentally recover information that conventional assumptions would say was lost.) 68. /*...Al Gore is the antichrist...*/ LOL, how did you come into it? 69. Regarding arguments about infinite complexity, I'd like to make a small correction. The information content of pi can be contained in a finite algorithm, so it contains only a finite amount of information. I think there are similar algorithms for generating prime numbers as well? 70. I see now that 'anonymous' already made this point much better than I did! 71. If someone were to make an unfortunate comment like: "What I was aiming at is that unlike all other systems the universe is perfectly isolated." Someone else might respond: "What "universe" is she talking about? The local observable universe? The entire Universe [rather poorly sampled!]? We have so little hard evidence in cosmology that it is ill-advised for us to make such sweeping and absolute statements about something we know very little about. Then again, cosmologists and theoretical physicists are: "often wrong, but never in doubt". Blind leading the blindly credulous into benightedness?" 72. Ulrich: I was using the word "universe" in the old-fashioned sense to mean "all there is." I have expressed several times (most recently in a comment at CV) that already the word "multiverse" is meaningless since the universe is already everything. But words that become common use are not always good choices. Besides this, I would recommend that instead of posting as "Anonymous" you check the box Name/URL below the comment window and enter a name. You don't have to enter a URL.
That's because our comment sections get easily confusing if there are several anonymouses. Best, 73. Zephir: Read it on a blog. You find plenty of numerology regarding Al Gore's evilness if you Google "Al Gore Antichrist 666." 74. Neil: You cannot. That's why it's circular. It doesn't matter whether you call it "real" or "actual," you have some idea of what it is that you cannot define. (This is not your fault, it's not possible.) Let me repeat what I said earlier. In which sense are the other outcomes "not real?" How do you know that? It occurred to me yesterday that this is a way too complicated route to see why MUH is not "invalid" for the reasons you mention. (What I wrote in my post is not that MUH cannot be, but that Tegmark's claim it can be derived rather than assumed is false. It's such sloppiness in argumentation that I was complaining to Phil about.) Forget about your "sequence" with which you have a problem, and take your own reality at a time t_0. Let's call this Neil(t_0). I leave it to you whether you want Neil just to be your brain or include your body, clothes, girlfriend, doesn't matter. Point is, MUH says you're a mathematical structure and all mathematical structures are equally real somewhere in the level 4 multiverse (or whatever he calls it). Now note that by assuming this you have assumed away any problem of the sort you're mentioning. You do not need to produce your past or future and some sensible sequence; all you really need is Neil(t_0) who BELIEVES he has a past. And that you have already by assumption. (Come to think of it, somehow this smells Barbourian to me.) This of course doesn't explain anything, which is exactly why I find it pointless. Best, 75. Anonymous (6:54 PM, January 07, 2010), First, the same recommendation to you as to Ulrich: Please choose Name/URL below the comment window and enter a name (or at least a number) because the comment sections get easily confusing with various anonymouses. (If I could I would disable anonymous comments, but I can only do so when I also disable the pseudonymous ones, thus I unfortunately keep repeating this over and over again.) I agree with you on the first and second point. I don't know what to make of the third, and given that I've never heard of it despite having spent more than a decade in fundamental research, I doubt that there are many of my colleagues who believe "rational thought is disallowed," and thus there cannot be much to the problem you think it is. Best, 76. Hi Canadian Phil, "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." -Niels Bohr. After reading through your long treatise it appears to boil down to having the above statement of Bohr be just generalized to all of physics. I would say that your thinking and that of Mermin's echoes the same sentiment, which I would contend is more indicative of what the problem is in modern physics, rather than what should be considered as a remedy. So if I were to pick someone who stood for the counter of your position it would be J.S. Bell, as he so often reminded that much of what we consider as truth is not forced upon us by what the experiments tell us, yet rather comes directly from deliberate theoretical choice. The type of theoretical choices he was referencing being the ones resultantly formed of the sort of scientific ambiguity and sloppiness which are exactly the type you support. "Even now the de Broglie - Bohm picture is generally ignored, and not taught to students. I think this is a great loss. For that picture exercises the mind in a very salutary way."
-J.S. Bell, introductory remarks at the Naples-Amalfi meeting, May 7, 1984. "Why is the pilot wave picture ignored in textbooks? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism are not forced on us by experimental facts, but by deliberate theoretical choice?" -J.S. Bell, "On the impossible pilot wave", Foundations of Physics, 12 (1982) pp. 989-99. P.S. I must apologize for my two previous erasures, yet this too was simply to rid my own thoughts of the ills Bell complained about :-) 77. Phil & Phil: We discussed Mermin's pamphlet here, please stick to the topic. Best, PS: Canadian Phil, I'm afraid the other Phil is also Canadian. 78. Janne: "Regarding arguments about infinite complexity, I'd like to make a small correction. The information content of pi can be contained in a finite algorithm so it contains only a finite amount of information." Yes, you're quite right. I realised after I wrote it, but I hoped no one would notice! The decimals of pi are certainly not random, as they can be produced by a very simple algorithm. The distribution of the primes is a different thing altogether, which I believe is genuinely random (i.e., cannot be produced by a simpler algorithm). At least, they are random if someone can prove the Riemann Hypothesis - there's a great article: The Music of the Primes. Neil, I think your criticism of the MUH is not so much based on randomness at all, but more the idea that ANY mathematical structure is unvarying with respect to time and so cannot represent the universe. However, this isn't a valid criticism of Tegmark's idea, as he proposed a block universe mathematical structure in his original paper which would, of course, be unvarying with time but would appear to change with time for any observer inside the universe. Here is an extract from Tegmark's paper: "We need to distinguish between two different ways of viewing the external physical reality: the outside view or bird perspective of a mathematician studying the mathematical structure and the inside view or frog perspective of an observer living in it. A first subtlety in relating the two perspectives involves time. Recall that a mathematical structure is an abstract, immutable entity existing outside of space and time. If history were a movie, the structure would therefore correspond not to a single frame of it but to the entire videotape." So the entire mathematical structure might be fixed and immutable, but to the frog everything still appears to be moving in time. I don't think it's possible to simply criticise Tegmark's work on that basis - he did his job very well. It's a superb paper, really all-encompassing, well worth putting aside a day to read it. But I don't think his conclusion is right (hardly anyone does, it appears). (Good luck with your work on decoherence, Neil. I was interested a while back but I've had my fill of it.) 79. Hi Bee, As I would say that what Bell was referring to has directly to do with what is asked here, as to whether physics is cognitively biased, I wonder how that has my remarks as being off topic? Perhaps you feel that my contention was meant as support for a particular theory, which if that be the case I can assure you it certainly is not, as I don't have a particular theory I favour.
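(On the primes exchange in comments 69 and 78, a concrete illustration, Python and purely for reference: the full sequence of primes is regenerated exactly by a few lines of deterministic code, so in the algorithmic (Kolmogorov) sense it is highly compressible, however statistically irregular its distribution may look.)

import math

def primes_up_to(n):
    # Sieve of Eratosthenes: a complete, deterministic description of the primes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

print(primes_up_to(50))  # the same list on every run: [2, 3, 5, 7, 11, ...]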
Actually all I was asking to be considered is the contention of Bell that vagueness, ambiguity and sloppiness are primarily what stands as the noise which currently prevents it from being able to discover what nature is, rather than only what we might be able to say about it. 80. Phil: I was just saying for prevention: if you want to discuss Mermin's essay, please don't do it here, since Canadian Phil doesn't seem to know we previously discussed it. Best, 81. Hi Bee, I see your point, as perhaps this post is more meant to ponder the cause(s) of bias, rather than what any particular one might be. Still though, as in medicine, it is hard to discover the mechanism of disease without first examining its symptoms. That would be as science would have us look to experiment to consider what begs explanation. Then of course with the aid of this examination to find if any of the explanations offered are correct, only if it can further have us understand the mechanism as to be able to predict further what this would demand. In the case of medicine this is confirmed when such understanding has rendered a cure that exceeds those found sometimes as only resultant of a belief one has, rather than being able to demonstrate one has an understanding as to how that suggests reason as to why. So I see this whole thing that's called science as a continuing process to delve ever deeper to discover the underlying mechanisms of the world, rather than have it become something that prevents us from finding them. I'm thus reminded of Newton's statement that he could offer no explanation of gravity, yet was only able to predict its actions and that should be enough; and yet Einstein was not intimidated into accepting such a limitation and resultantly was able to come up with a mechanism which has proven us able to understand more than Newton thought as being relevant or to have utility. So simply put, as I see it a person of science is not one who at any point is able to accept the answer for how or why as simply because, as if they do that forms to be the greatest bias which prevents its success. 82. Bee, with all due respect you are making the wrong choice about who has the burden of proof about our world and the various unique "outcomes" we observe, v. the idea that there are more of them. Let's say we do an actual quantum experiment (like a MZ interferometer with a phase difference) and get the sequence 1 0 0 1 1 1 0 1 0 1 0 0 1 1 ... That is an "actual result" that is not AFAWK computable from some particular algorithm. It is not like the digits of pi: they are logically necessary (and hence, deterministically reproducible) consequences of a particular mathematical operation. It is not my job to "prove", or even have the burden of argument, that all other possible sequences of hits from the MZI "exist" somewhere as other than raw abstractions, like "all possible chess games." The burden of proof is on you and anyone who believes in physobabble concepts like MWI. Until that is demonstrated or at least solidly supported, I have the right to claim the upper hand (not "certainty"; but so what) about there being a distinction between "natural" QM process outcomes, and the logically necessary and fixed results of mathematical operations. 83. (I mean it's not my job to prove they aren't there.) 84. Dyson, one of the most highly-regarded scientists of his time, poignantly informed the young man that his findings on the distribution of prime numbers corresponded with the spacing and distribution of energy levels of a higher-ordered quantum state.
Mathematics Problem That Remains Elusive — And Beautiful By Raymond Petersen Robert Sacks devised the Sacks spiral, a variant of the Ulam spiral, in 1994. It differs from Ulam's in three ways: it places points on an Archimedean spiral rather than the square spiral used by Ulam, it places zero in the center of the spiral, and it makes a full rotation for each perfect square while the Ulam spiral places two squares per rotation. Certain curves originating from the origin appear to be unusually dense in prime numbers; one such curve, for instance, contains the numbers of the form n² + n + 41, a famous prime-rich polynomial discovered by Leonhard Euler in 1774. The extent to which the number spiral's curves are predictive of large primes and composites remains unknown. A closely related spiral, described by Hahn (2008), places each integer at a distance from the origin equal to its square root, at a unit distance from the previous integer. It also approximates an Archimedean spiral, but it makes less than one rotation for every three squares. It seems such randomness has some order to it?:) 85. Ok, but where do you get the justification for the received wisdom that "the universe is perfectly isolated" in any meaningful physical sense? Why are not scientists more careful and humble in their intuitive beliefs? 86. Bee: Thanks for explaining how to post under a pseudonym. I am the Anonymous from 6:54 PM, 8:05 PM, and 8:09 PM on January 07. The "widespread misunderstanding of the content of quantum theory" I was referring to includes, inter alia, the notion that a quantum system has no properties until they are brought into existence by the observer through an act of measurement. This sort of nonsense not only retards the progress of physics, but gives rise to all manner of pernicious superstition and mystical hocus-pocus, wrapped in a false mantle of scientific objectivity. In my view, enormous damage has been done, not only to physics, but to all of science – and indeed to the very concept of objective rationality – by those who mistakenly read an ontological content into the famous statement of Niels Bohr, quoted by Phil Warnell above. Let me repeat it here for convenience: "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." This is an explicit warning not to ascribe the "weirdness" of the quantum formalism to the real physical world, but since the day the words were uttered, there has been an apparently irresistible urge to do the exact opposite. Bohr was not alone in suffering such misinterpretation. Schrödinger originally introduced us to his cat as a caution against ascribing physical reality to the superposition of states, yet Schrödinger's cat was made famous by others who deviously used it to support precisely what Schrödinger argued against. And Bell's theorem is ubiquitously used in support of spooky claims about quantum measurement, effectively drowning out Bell's own opinion of hidden-variable theories, as made clear by another quote from page 997 of the article quoted by Phil Warnell: "What is proved by impossibility proofs is lack of imagination." Of course, those who indulge in mystical interpretations of quantum mechanics do not believe they are disallowing rational thought; they think they are being deep. But their stance is nonetheless profoundly anti-rational; it leaks out of physics into metaphysics and philosophy, and from there, into the rest of post-modern thought.
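(Returning to the Sacks spiral in the excerpt of comment 84: its construction is compact enough to state in a few lines. Python; the parametrization r = sqrt(n), theta = 2*pi*sqrt(n) is inferred from the description given there - one full turn per perfect square - not taken from Sacks' own write-up.)

import math

def sacks_point(n):
    r = math.sqrt(n)
    theta = 2 * math.pi * r  # perfect squares land at angle zero, one full turn apart
    return (r * math.cos(theta), r * math.sin(theta))

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, math.isqrt(n) + 1))

# Marking only the primes is what makes the prime-dense curves
# (such as n*n + n + 41) visible to the eye.
points = [(n, sacks_point(n)) for n in range(2, 200) if is_prime(n)]
print(points[:3])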
It lends credence to such notions as "the quantum law of attraction" (otherwise known by Oprah fans as "the secret"), not to mention the idea that reality is a matter of consensus. The first is a thinly veiled return to sympathetic magic, and the second is a kind of quantum solipsism that results from treating the "intersubjective rationality" of Jürgen Habermas as legitimate epistemology, instead of recognizing it as a degenerative disease of the rational faculty. 87. Ulrich: "Everything there is" is perfectly isolated from everything else, since it's damned hard to interact with nothing. Best, 88. Neil: I already said above I don't believe in MWI. Unfortunately, since you are the one claiming you have a "proof" that MUH can't describe reality, it's on you to formulate your proof in well-defined terms, which you fail to do. Your three-step procedure makes use of the notion of a "production" which is undefined, and your other arguments continue to assume a notion of what is "not real" that makes your argument circular. Look, read my last comment addressed to you and you'll notice that you can stand on your feet and wiggle your toes, but there is no way to prove what you want to prove without having to assume some particular notion of reality already. Andrew got it exactly right: your problem is that you believe there has to be some actual time-sequence, some "production" (what is a production if not a time-sequence?). I'm telling you you don't need a time-sequence. You don't need, in fact, any sort of sequence or even ordering. All you need to capture your reality in maths is one timeless instant of Neil(now). That's not how you might think about reality, but there's no way to prove that's not your reality. Best, 89. I wonder how one could have ever been led through to the "entanglement processes" without ever first going through Bell? I mean sure, at first it was about Einstein and spooky, and now it's not such a subject to think it has through time become entwined with something metaphysical and irrelevant (thought experiments about elephants) because one would like to twist the reality according to? I mean, what were Penrose and Susskind thinking?:) Poetically, it has cast a segment of the population toward connotations of "blind men." Makes one think their house is somehow "more appealing" as a totally subjective remark. So indeed one has to be careful how we cast aspersions upon the "rest of society" while we think we are safe in our "own interactions" to think we are totally within the white garment of science. I hear you.:) 90. Plato: Yes, that's a perfect example of the sort of drivel that results when you think a probability is a property of a particle. It makes smart guys say dumb things... 91. Ain Soph, Your choice of a handle reminded me of a term that just came to me as if I had heard it before, but the spelling was different. Is there any correlation? 92. Hi Ain Soph, I must say I was intrigued by what you said last as to where our prejudices and preconceptions can lead us, even though they may appear as sound science. I would for the most part agree with what you said in such regard, except for the role of Bohr and what his intentions were as driven by his own philosophical and metaphysical center. The evidence for my contention goes back to the very beginnings of the Copenhagen interpretation's creation and the sheer force of will Bohr had to exert in having it become as ambiguous and sloppy as many find it now.
That would be when Heisenberg first arrived at the necessity for uncertainty with his principle, and with his microscope example attempted to lend physical meaning to it all. Bohr of course staunchly opposed such an attempt and argued with Heisenberg, even when he was taken to bed in sickness, until he finally relented and altered his view to match that of Bohr's. So my way of reading this, coupled with the content of his rebuttal of EPR, has given me reason to find that while Bohr may not, as Einstein, have been guilty at times of telling nature how it should be, he was guilty of having the audacity of insisting what nature would allow us to ultimately know. I've then long asked which is the greater transgression as to enabling physics to progress: being convinced nature has certain limits in regards to what's reasonable, or rather that the only limiting quality it has is in restricting having anyone able to find the reason in them. So in light of this I don't know what your answer would be, yet I consider the second as being the most unscientific and thus harmful of the two biases; as the first can be falsified by experiment, while the latter prevents one from even bothering to make an attempt. Fortunately for science there always have been, and I hope always will be, those like Einstein, Bohm and Bell, who refuse to be so intimidated as to feel restricted to look. 93. Bee: Precisely. How interesting that you should recognize the reference... 94. Plato: My last post should have been addressed to you, not Bee. 95. Phil: I get the impression that you've spent quite a bit more time studying the history of the subject than I, so I will defer to your greater knowledge of it. It seems, then, that I have always given Bohr the benefit of more doubt than there actually is. The quotation we have both commented on actually doesn't appear in print anywhere under the by-line of Niels Bohr. It was attributed to him by Aage Petersen in an article that appeared in Bull. Atom. Sci. 19:7 in 1963, a year after Bohr's death. I had always thought that Petersen rather overstated the case – especially in the third sentence – and that Bohr's own stance must have been more sane. But perhaps not. Another who gleefully conflated mysticism and quantum mechanics was J. R. Oppenheimer. For example, his 1953 Reith Lectures left his listeners to ponder such ersatz profundities as the following: "If we ask whether the electron is at rest, we must say no; if we ask whether it is in motion, we must say no. The Buddha has given such answers when interrogated as to the conditions of a man's self after his death; but they are not familiar answers for the tradition of seventeenth- and eighteenth-century science." Disturbingly, this strikes me not so much as cognitive bias as deliberate obfuscation. True things are said in a way that invites the listener to jump to false conclusions. 96. "Ulrich: "Everything there is" is perfectly isolated from everything else, since it's damned hard to interact with nothing. Best, B." If you give it a little more thought, you may be forced to concede that the "perfectly isolated" assumption lacks any rigorous scientific meaning. Certainly no empirical proof in sight. By the way, you and your colleagues: (1) Do not know what the dark matter is [and that's = or > than 90% of your "everything"]. (2) Do not know what physical processes give rise to "dark energy" phenomena. (3) Do not have an empirical clue about the size of the Universe.
(4) Do not have more than description and arm-waving when it comes to explaining the existence and unique properties of galaxies. Wake up! Stop swaggering around like arrogant twits, pretending to a comprehensive knowledge that you most certainly do not possess. Einstein spoke the truth when he said: "All our science when measured against reality [read nature] is primitive and childish, and yet it is the most precious thing we have." THAT is the right attitude, and it is a two-part attitude, and both parts are mandatory for all scientists. Real change is on its way, 97. Ulrich: It's not an assumption. The universe is a thermodynamically perfectly isolated system according to all definitions that I can think of. If you claim it is not, please explain in which way it is not isolated. As for the rest of your comment, yes, these are presently open questions in physics. I have never "pretended" I know the answer, so what's your point? Besides this, your comments are not only insulting, they are also off-topic. Please re-read our comment rules. Thanks, 98. Ulrich: Real change is on its way? What – you're going to learn some manners? 99. The very name "Ain Soph" suggests a lack of manners. 100. Hi Arun, I find Ain Soph to be quite a respectful name, as it serves as a reminder that when it comes to science, since its central premise denies ever considering there be made allowable such a privileged position, it is then able to deny there be reason to look away from finding explanation, for as Newton reminded in respect to any such propositions: 101. Hi Ain Soph, Well I don't know which of us is more studied when it comes to the history of the foundations, as it appears you've looked at it pretty closely. My only objection being it seems as of late there appears to be a little rewriting of it as to give Bohr a pass on what his role in all this was and what camp he represented, as to have him thought of as misunderstood rather than as its primary advocate. Of course we don't have any of them with us here today to ask directly, yet still I think things are made pretty clear between what they left of their thoughts and the legacy made evident in the general attitudes of the scientists of the following generation. My thoughts are this obfuscation, as you call it, has simply reincarnated itself in things like many universes, many worlds, all-is-math and so many of the other approaches in which the central premise of each is to have made unapproachable exactly what needs to be approached. I must say your moniker is an excellent symbol as to what all these amount to when it comes to natural philosophy. So as such I would agree that anytime things in science are devised which prevent one from being able to ask a question meant to enable one to find the solution to something that begs explanation, that's the time to no longer have it considered as science, since it's lost its reason to be. That's to say there is no harm in having biases as long as the method assures these can be exposed for what they are, with allowing them to be proven to be wrong as they apply to nature. 102. Hi Bee, I think this whole question of biases comes down to considering one thing, that being the responsibility of physics is to have recognized and give explanation to nature's biases, rather than being able to justify our own. So yes, it does all depend on biases, with reality itself having the only ones that are relevant. 103. Phil, It's fine. I have not been able to decipher what you and Ain Soph are saying anyway.
104. Hi Ain Soph, It is not by relation that I can say I am Jewish... but that I understood that different parts of society have their relation to religion, and you piqued my interest by "the spelling" and how it sounded to me. It bothered me as to where I had seen it. As in science, I do not like to see such divisions, based on a perceived notion that has been limited by our own choosing "to identify with" whatever part of science that seems to bother people about other parts of science, as if religion then should come between us. You determined your position long before you chose the name. The name only added to it as if by some exclamation point. Not only do I hear you but I see you too.:) 105. Plato: Yes, the spelling is the irony. But Copenhagen is not Cefalu. Or is it? 106. Oops, my bad! Let me just reiterate: and leave it at that. Almost. Science, unlike cast-in-stone religions, is self-correcting. It may be a slow process, but I should trust the process. Come on you grumbler [not to mention sock puppet], have a little faith! 107. Phil: Oh, great. Whenever the revisionists go to work on a discipline, expect trouble! If the Copenhagen Orthodox Congregation, the Bohmians, the Consistent Historians, the Einselectionists, the Spontaneous Collapsicans, and the Everettistas can't communicate now, just wait until revisionism has cut every last bit of common ground out from under them! 108. Ulrich: Grumbler? Sock Puppet? What is it with you? You can't maintain a civil tone from one end of a 100-word post to the other? Clean up your act, Ulrich, or I will just ignore you. 109. Hi Arun, "It's fine. I have not been able to decipher what you and Ain Soph are saying anyway." Now I feel that I've contributed to the confusion, rather than having things made a little clearer, which is probably my fault. To have it simply put, Bell's main contention and complaint was that things like superposition - which lead to taking too seriously things such as the collapse of the wave function and the measurement problem more generally - are the result of particular theoretical choices, rather than what's mandated by experiment. So Bell's fear, if you would have it called that, is the impediment such concepts pose as physics attempts to move forward to develop an even deeper understanding. That's why he used concepts such as 'beables' in place of things like 'observables', for instance, in an attempt to avoid such prejudices and preconceptions. 110. Hi Ain Soph, I actually don't have much concern that the historical revisionists will be able to increase the confusion any greater than it already is. My only concern when deeper theories are being considered is that the researchers are clear as to what begs explanation and what really doesn't. That's to have them able to distinguish which concepts they use are the result of only particular theoretical choice and which ones are required solely by what experiments have as necessary. That's simply to have recognized what serves to increase understanding, versus what only serves as an impediment in such regard. 111. Phil: It's true that it would be hard to increase the confusion beyond its current level. But historical revisionism could make things worse by erasing the "trail of crumbs" that marks how we got here. Personally, when I'm confused, I often find the only remedy is to backtrack to a place where I wasn't confused and start over from there. Quantum mechanics is, these days, presented to students as a formal axiomatic system.
As such, it is internally consistent, and consistent also with a wide variety of experimental results. But it is inadequate as a physical theory. So some of the axioms need to be adjusted, but which ones? And in what way? The axiomatic system itself gives us no help in that regard, and simply trying random alternatives is an exercise in futility. The more familiar we are with the existing system, the harder it is to think of sensible alternatives, and the very success of the current theory guarantees that any alternative we try will almost certainly be worse. Indeed, the literature of the past century is littered with such attempts, including some truly astounding combinations of formal virtuosity and physical vacuity. So, to have any hope of progress, I think we must trace back over the history of the formulation of the present theory, and reconsider why this particular set of axioms was chosen, what alternatives were considered, why they were rejected, and by whom. We need to reconsider which choices were made for good reason after sober debate, which ones were tacitly absorbed without due consideration because they were part of "what everybody knew" at the time, and which ones were adopted as a result of deferring to the vigorous urgings of charismatic individuals. We are, as you say, badly lost. But let that trail of crumbs be erased, and we may well find ourselves hopelessly lost. And that is why historical revisionism is so dangerous. 112. Phil: "... he used concepts such as 'beables' in place of things like 'observables' for instance in an attempt to avoid such prejudices and preconceptions." I must confess that I cringe every time I read a paper about beables. Yes, the term avoids prejudices and preconceptions, but it is also completely devoid of valid physical insight. Thus it throws the baby out with the bathwater. For me, it makes thinking about the underlying physics even harder and actually strengthens the stranglehold of the formal axiomatic system we are trying to escape. 113. Ain Soph: But Copenhagen is not Cefalu. Or is it? Oh please, the understanding about what amounts to today's methods has been the journey "through the historical past", and the lineage of teacher and students has not changed in its methodology. Some of the younger class of scientists would like to detach themselves from the old boys and tradition. Spread their wings. Women too, cast to a system that they too want to break free of. So now, such a glorious image to have painted: a crippled old one to extreme who is picking at brick and mortar. How nice. Historical revisionist? They needed no help from me. These things are in minds that I have no control over, so how shall I look to them but as revisionists of the way the world works now. Even 't Hooft himself:) 114. Ain Soph: I'm not really sure what point you're trying to make. If anything, then the common present-day interpretation of quantum mechanics is an overcompensation for a suspected possible bias: we're naturally tending towards a realist interpretation, thus students are taught to abandon their intuitions. If this isn't accompanied by sufficient reflection, I'm afraid though it just backlashes. Btw, thanks for Bell's impossibility quote! I should have used that for my fqxi essay! Best, 115. Hi Ain Soph, As to the revisionists, it's true that they might cause some reason for concern.
However, there is the other side of the coin, where those like Guido Bacciagaluppi & Antony Valentini are telling the story from the opposite perspective of the prevailing paradigm, and so I suspect the crumbs will always remain to be followed. I am surprised you're not a 'beables' appreciator, for it was Bell's way of emphasizing that QM had to be stripped bare first of such notions before it had any chance of being reconstructed in such a way that it would serve to be a consistent theory that can take one to the experimental results without interjecting provisos that don't stem from the formalism. This has me mindful of a pdf I have of a handwritten note Bell handed to a colleague during a conference he attended years ago, which listed the words he thought should be forbidden to be used in any serious conversation regarding the subject: "system, apparatus, microscopic, macroscopic, reversible, irreversible, observable, measurement, for all practical purposes". So I don't know exactly how you feel about it, yet to me this appears as a good place to start. 116. Plato: For the past few hundred years, Western civilization has enjoyed an increasingly secular and rational world view – a view which practiced science as natural philosophy and revered knowledge as an end in itself. The result was an unprecedented proliferation of freedom and prosperity throughout the Western world. But that period peaked around the turn of the last century, and has been in decline for almost a hundred years. Now we value science primarily for the technology we can derive from it. And the love of knowledge is being pushed aside by a resurgence of mysticism and virulent anti-rationalism. This is not just young Turks making their mark. This is barbarian hordes at the gates. And yes, we are now witnessing a return to a preoccupation with gods and goddesses and magic, just like the last time a world-dominating civilization went into decline. The result was a thousand years of ignorance and serfdom. This time, the result may be less pleasant. 117. Bee: "If this isn't accompanied by sufficient reflection I'm afraid though it just backlashes." Yes. Exactly my point. Students today are not encouraged to reflect and develop insight. They are encouraged to memorize formal axioms and practice with them until they can produce detailed calculations of already known phenomena. They are thereby trained to use quantum mechanics in the development of new technologies, but they are not educated in a way that would allow them to move beyond the accepted axiomatic system in any principled way. Bourbakism and the Delphi method ensure that the questions and beliefs of the vast majority remain well within the approved limits. 118. Phil (and Bee - this further amplifies my reply to you): While I agree with the intent of banishing misconceptions and preconceptions, I disagree with the method of inventing semantically sterile new terminology. For example, in moving from Euclidean to hyperbolic geometry, one can simply amend the parallel postulate, claim that geometric insight is therefore of no further use, and deduce the theorems of hyperbolic geometry by the sterile, rote application of axioms. Or one can draw a picture of a hyperbolic surface, and enlist one's geometric insight to understand how geometry changes when the parallel postulate is amended. One ends up proving the same theorems, but one gets to them much faster, and much more surely. And one understands them much better.
In short, I think teaching students to abandon their intuitions does more harm than good. Having abandoned them, what choice remains to them but to mimic the cognitive biases of their instructor? 119. This comment has been removed by the author. 120. Hi Ain Soph, You talk about amending axioms instead of eliminating them from being ones; this is exactly how the type of ambiguity and sloppiness that Bell complained about arose in the first place. What an axiom or postulate represents in math or theory is a self-evident truth, which either is to be considered so or not. What would it mean to amend an axiom - could that mean, for instance, that the fifth postulate holds every day except Tuesdays? No, I'm sorry, that's the type of muddle-headed thinking that has had QM become what it is, with all the ad hoc rules and decisions as to how and when they are to apply. The fact is, in deductive (or inductive) reasoning a postulate is or it isn't, with no exceptions allowed, otherwise it has lost all its ability to be considered as logic. This then is exactly what a 'beable' is, as being something that you consider as a postulate (prerequisite) or not; it either is or it isn't. What then falls out is then consistent with nature as it presents, or it doesn't. So that's why, for instance, Bell liked the pilot wave explanation, since when asked "is it particle or wave", such a restriction of premise didn't satisfy what nature demonstrated as being both particle and wave. Therefore the concept of 'beables' is not to have what is possible ignored, yet quite the opposite. So where, for instance, the pilot wave picture is referred to as being a hidden variables theory, Bell would counter that standard QM is a denied variables theory. This is to find that it makes no sense to have axioms amended; they either are or they're not, otherwise it just isn't a method of reason. So what's asked for is not that intuitions be ignored, rather that when such intuitions are incorporated into theory there be a way to assess their validity, where nature is the arbitrator of what is truth and not the theorist. 121. Phil: Sorry, I should have been more clear. Essentially, the parallel postulate holds that the sum of the internal angles of any triangle is equal to 180 degrees. If we amend that to read "less than" then we get hyperbolic geometry. (And "greater than" gives us elliptic geometry.) 122. Hi Ain Soph, What you are talking about is not amending a postulate, yet rather to define or set parameters where there isn't one. What the fifth postulate is doesn't allow for what you propose in either case, so therefore it must be eliminated to even have it considered. That's like people believing Einstein set the speed of light as a limit, rather than him realizing this speed was a limit resultant of being a logical consequence of his actual premises; which are that there is no preferred frame of reference, such that whenever anyone is arbitrarily chosen the laws of nature will present as the same. The speed of light then being a limit falls out of these axioms and is not needed in addition to have it be. That is, it's not an axiom yet rather a direct consequence of them. So then, if you want things to always be hyperbolic or elliptic geometrically, that would require an axiom and not a parameter to mandate it be so. Whether it is less or greater than 180 degrees holds no significance where such parameters are just special cases, as is the one it is being compared with, itself also just a special case where no such axiom exists to have it be so.
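For reference on the exchange in comments 121 and 122, the standard statement of the geometric fact at issue: for a geodesic triangle of area A on a surface of constant curvature K, the Gauss-Bonnet theorem gives

\alpha + \beta + \gamma = \pi + K A,

so K = 0 (Euclidean) returns the familiar 180 degrees, K < 0 (hyperbolic) gives an angle sum below 180 degrees, and K > 0 (elliptic) gives one above it - the curvature acts as a consequence-generating premise rather than a case-by-case parameter, much in the spirit comment 122 urges.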
So for me a true explanation is found when things are no longer simply parameters, but rather consequences of premises (or axioms). Of course one could insist all such things are indeed arbitrarily chosen, which on the surface sounds reasonable, yet it still begs the question of how these parameters hold at all, to present a reality that has them as fixed. So to my way of thinking, being consistent with Bell’s, to be a scientist is to find the world a construct mandated by logic, and to think otherwise just isn’t science. This I would call the first axiom of science, with that of Descartes being the second, which gives us – and not reality – reason to think we might discover what, how, and why it is as it is. 123. Ain Sof: Now we value science primarily for the technology we can derive from it. No, as I see it, you are the harbinger of that misfortune. What can possibly be derived from developing measures that extend our views of the universe? "Only" human satisfaction? Shall we leave these things unquestionable then and satisfactory, as to the progression you have seen civilization make up to this point? You intertwine the responsibility of, and confuse your own self as to, what is taking place in society – could it not possibly be taking place within your own mind? :) Yet you have "become it" and diagnosed the projection incorrectly, from my point of view. :) You could not possibly be wrong? :) 124. Phil, He is right in relation to this geometric sense and recognizes this to be part of the assessment of what exists naturally. Gauss was able to provide such examples with a mountain view as a move toward geometrically fourth-dimensional thinking. As to lineage, without Gauss and Riemann, Einstein would not have made sense geometrically. This is what Grossmann did for Einstein by introduction. Wheeler, a Kip Thorne. 125. In context, Phil: Emphasis mine. Businessmen value Science for the technology it can produce. Governments, sometimes. There is of course this little thing called "National Defense," such that even if a country is not on a war footing, it at least seeks the technology that puts it on an even footing with other governments that may put the war foot on them. USSR vs. USA in the 20th century, Iran vs. Israel and the West today, and there are many other examples throughout history. But we knew that. I'm just reminding. I believe Ain Soph was railing against the politico-economic "human" system that places Engineering above Science, and I hope I've explained why. I don't see where Ain Soph was being the harbinger of that reality; rather, he was pointing it out. Governments also support Theory, and that's key. Questions regarding how many theorists are actually needed notwithstanding, we do need them. Businesses know this, and cull theorists only when they are on the edge of a breakthrough. They haven't the time to waste on things that will pay off 10-20 years down the road. They want applications, now. Yesterday would be better. Two examples: Bell Labs up through the mid-1990s, and Intel. Intel used quantum physics as it was known, specifically surface physics. Bell Labs on the other hand had no reason to work on "pure research," yet they did. But Bell Labs was part of the communications monopoly AT&T, which had more money than God, AND the US Government poured lots of money into Bell Labs as well, to the point you couldn't tell where AT&T ended and the government began, so the Labs were an example of, yes, government funding, at least partially.
Enter our new age, where rich folks like Branson and Lazaridis etc. are picking up the slack. There has been a shift in funding sources, especially with governments hard pressed to meet budgets, and when that happens, Theory always takes a hit. 126. It's late, and also I don't think Bee really gets my point (did anyone else?) about randomness in nature vs. the deterministic nature of math. But for the record some clarification is needed. First, it's not really a matter of my having or claiming to have a disproof of MWI. But it is accepted logical practice that the one postulating something more than we know or "have" has the burden of proof. Also, I am saying that *if* the world is not MWI then it cannot be represented by deterministic math – which is different from saying "it is not MWI and thus cannot be represented by math." (Bee, either you need to sharpen up your logical analysis of semantics, or I wasn't clear enough.) Furthermore, I don't "believe" that there has to be some actual sequence; I am saying that such specific sequences are what we actually find. But now I see the source of much confusion on the part of you and Andrew T: you thought I was conflating the idea that "real flowing time" couldn't be mathematically modeled with the other idea that a mathematical process can produce other than the specific sequence it logically "has to," such as digits of roots. But that isn't what I meant. It doesn't matter whether time actually flows or not, or if we live in a block universe. The issue is, the sequence produced in, say, a run of quantum-random processes is thought to be literally random and undetermined. That means it was not logically mandated in advance by some specific choice such as "take the digits of the cube root of 23." Sure, some controlling authority could pick a different seed each time for every quantum experiment, but "who would do that"? But if there isn't such a game-changer, then every experiment on a given particle would yield the same results each time. That is the point, and it is supported by the best thinking in foundations. Above all, try to get someone's point. 127. (OK, I still may have confused the issue about "time" by saying "in advance." The point is: even in a block universe with no "real time," the various sequences of e.g. hits in a quantum experiment would have to be generated by separate, different math processes. That sequence means the ordered set of numbers, whether inside "real time" or just a list referring to ordering in a block of space-time. So one run would need to take e.g. the sqrt of 5, another the cube root of 70, another 11 + pi, etc. Something would have to pick out various generators to get the varying results. If that isn't finally clear to anyone still dodging and weaving on this, you can't be helped. 128. There were a few other, unique touches I had some fun with. Legend has it that glowering over the entrance to Plato's Academy was the phrase, "Let none ignorant of geometry enter here." Tipping our metaphorical hat to the rigour of the Ancient Greeks while simultaneously invoking our outreach mandate, I contacted a classicist so that I could eventually inscribe a Greek translation of "Let no one uninterested in Geometry enter here" over both the north and south doors of the building.
It's possible that some wilful geometrical ignoramuses could penetrate the facility through another entrance, of course, but they'd have to go to a fair amount of trouble to do so. – First Principles by Howard Burton, page 244, para. 2, and page 245. See: "Let no one destitute of geometry enter my doors." Even Howard was seeking to define this attribute in relation to the development of the institute. Not a lot of people understand the implication of this over the doorway of this "new institution called Perimeter Institute," but yes, if one were hell-bent toward research money for militarization, then indeed such technologies could or might seem as to the status of our civilization. Part of this institution called PI, I believe – and this is what Ain Sof is clarifying to Phil – is an important correlation alongside all the physics, and does not constitute the idea of militarization, but research about the current state of the industry. That is a "cold war residue" that, without the issue being prevalent, has now been transferred to men in caves. Fear-driven. The larger part of society does not think this way? So in essence you can now see Ain Sof's bias. :) 129. Neil: I apologize in case I mistakenly mangled your statement about MWI; that was not my intention. About the burden of proof: you're the one criticizing somebody else's work, so it's on you to clarify your criticism. I think you have meanwhile noticed what the problem is with your most recent statement. I think I said everything I had to say and have nothing to add. You are still using undefined expressions like "process" and "picking" and "generation of sequences." Let me just repeat that there is no need to "generate" a sequence, "pick" specific numbers, or anything of that sort. It seems however this exchange is not moving forward. Best, 130. This comment has been removed by the author. 131. (Note: all of the following is relevant to the subject of cognitive bias in physics, being concerned with the validity of our models and the use of math as a cognitive model of the world.) Bee: thanks for admitting some confusion, and it may be a dead end, but I feel a need to defend some of my framings of the issue. I don't know why you have so much trouble with my terms. We have actual experiments which produce sequences which appear "random," and which are not known to be determined by the initial state. That is already a given. I was just saying, as analysis, that a set of sequences that are not all the same as each other cannot be generated by a uniform mathematical process (like the exact same "program" inside each muon or polarizing filter). If there was the same math operation or algorithm there each time, it would have to produce the same result each "time" or instance. Find someone credible who doesn't agree in the terms as framed, and I'll take it seriously. Steven C: Your post is gone (well, it's in my email box – and I handle use of author-deleted comments with great care), but I want you to see this anyway: As for this particular squabble, it isn't any more my stubbornness than that of anyone else who disagrees and keeps on posting. I had to, since I was often misunderstood and am expressing (believe or like it or not) the consensus position in the foundations of mathematics and physics. Read up on foundations regarding determinism and logical necessity vs. true randomness. Your agreeing with Orzel in that infamous thread at Uncertain Principles doesn't mean his defense of the decoherence interpretation was valid.
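[Editorial aside, hypothetical toy code rather than anyone's actual calculation: the phase-averaging move disputed in the next paragraphs can be written out in a few lines. Averaging a two-path pure state over a random relative phase yields a density matrix whose off-diagonal terms vanish – something that looks like a 50/50 classical mixture, even though every individual run was a pure superposition.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-path superposition |psi> = (|upper> + e^{i phi} |lower>) / sqrt(2),
# with the relative phase phi scrambled from run to run ("decoherence").
n_runs = 100_000
rho_avg = np.zeros((2, 2), dtype=complex)
for _ in range(n_runs):
    phi = rng.uniform(0.0, 2.0 * np.pi)
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2.0)
    rho_avg += np.outer(psi, psi.conj())  # pure-state projector |psi><psi|
rho_avg /= n_runs

print(np.round(rho_avg, 3))
# ~[[0.5, 0.0], [0.0, 0.5]]: the off-diagonals average away, so the
# ensemble density matrix is indistinguishable from a classical mixture.
```

Whether that averaged matrix explains anything about a single run is precisely what the following exchange disputes.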
Most of my general complaints are the same as those made by Roger Penrose (as in Shadows of the Mind). He made, as I did, the point that the DI uses a circular argument: if you put the sort of statistics caused by collapse into the density matrix to begin with, then scrambling phases produces the same "statistics" as one would get for a classical mixture. Uh, yeah – but only because "statistics" are fed into the DM in the first place. Otherwise, the DM would just be a description of the spread of amplitudes per se. You have to imagine a collapse process to turn those amplitudes – certain or varied as the case may be – into statistical isolation. The DI is a circular argument, which is a logical fallacy not excused or validated by "knowing more physics." Would you be dismissive of Penrose? As for MWI: if possible measurements produce "splits" but there is nothing special about the process of measurement or measuring devices per se, then wouldn't the first beam splitter (BS1) in a Mach-Zehnder (MZ) interferometer instigate a "split" into two worlds? That is, one world in which the photon went the lower path, another world where it went the other path? But if that happened, then we wouldn't see the required interference pattern in any world (or as an ensemble), because future evolution would not recombine the separated paths at BS2. Reflect on that awhile, heh. 132. [part two of long comment] At Uncertain Principles I critiqued Orzel's specific example – his choice – which used "split photons" in an MZI subject to random environmental phase changes. He made the outrageous argument that, if the phase varies from instance to instance, the fact that the collective (ensemble) interference pattern is spoiled (as it would be *if* photons went out as particles, from one side of BS2 or the other) somehow explains why we don't continue to see them as superpositions. But that is absurd. If you believe in the model, the fact that the phase varies in subsequent or prior instances can't have any effect on what happens during a given run. (One critique of many – suppose the variation in phase gets worse over time; then there is no logical cut-off point for including a set of instances to construct a DM from an average spread, see?) At the end he said of the superposed states "they just don't interfere" – which is meaningless, since in a single instance the amplitudes should just add regardless of what the phase is. Sure, we can't "show" interference in a pretty, consistent way if the phase changes, but Orzel's argument that the two cases are literally (?) equivalent (despite the *model* being the problem anyway, not FAPP concerns) is a sort of post-modern philosophical mumbo jumbo. My reply was "philosophy" too, but at least it was valid philosophical reasoning, not circular, and not making sloppy, semantically cute use of the ensemble concept. (How can I appreciate his or similar arguments if it isn't even clear what is being stated or refuted?) Funny that you would complain about philosophy vs. experiment, when the DM is essentially an "interpretation" of QM, not a way to find different results. Saying decoherence happens in X tiny moment and look, no superposition! – doesn't prove that the interpretation is correct. We already knew the state is "collapsed" whenever we look. Finally, I did actually just propose a literal experiment to retrieve amplitude data that should be lost according to the common understanding of creating effective (only that!) mixtures due to phase changes.
It's the same sort of setup Orzel used, only with an unequal amplitude split at BS1. You should be interested in that (go look, or look again but carefully); it can actually be done. Its importance goes beyond the DI as interpretation, since such information is considered lost, period, in traditional theory, not even counting interpretative issues. I do like your final advice: TRY, brutha, to expand your horizons. Is all I'm saying. Yes, indeed! I do try – now, will you? BTW, Bee and I get along fine, despite tenaciously arguing over mere issues, and are good Facebook Friends. We send each other hearts and aquarium stuff etc. – I hope that's OK with Stefan! (Stefan, I will send a Friend request to you too, so you feel better about it.) I also have her back when she's picked on by LuMo or peppered by Zephir. 133. (Correction, and then I leave it alone for awhile) – I meant to say, in paragraph #1 of the second comment: [Not "during a given run." – heh, ironic, but I can see it's a mistake to conflate the two.] 134. Phil: I don’t know what to make of your last two posts. Surely you’re not unfamiliar with non-Euclidean geometry? In flat Euclidean space, the geodesics are straight lines and the sum of the internal angles of any triangle is equal to 180 degrees. In a negatively curved space, the geodesics are hyperbolae and the sum of the internal angles of any triangle is less than 180 degrees. In a positively curved space, the geodesics are ellipses and the sum of the internal angles of any triangle is greater than 180 degrees. These are facts. And in each case, they are also logical necessities that follow from the geometric structure of the space. (Note: I sloppily reversed greater and less in my last post.) We believe that space is negatively curved on a cosmological scale, and we know that we live on the surface of a spheroid, which is positively curved. So this parameter, which you claim I make up and set arbitrarily, is actually very real and determined by measurable properties of real things. Now, to return to my point: It would be foolish to study non-Euclidean geometry by abandoning our geometric insight, just because that insight was developed in a Euclidean context. Rather, we should use our geometric insight to see precisely what must be generalized in moving from Euclidean to non-Euclidean geometry, and to understand how and why the generalizations are possible and when they are necessary. The same remark applies to the study of special relativity, where the finite speed of light leads to an indefinite metric which induces a hyperbolic geometry, and Lorentz boosts are nothing other than 4-dimensional hyperbolic rotations. And the same remark applies to quantum mechanics, where the non-vanishing of Planck’s constant induces a hyperbolic projective mapping from the Bloch sphere to the complex Hilbert space and causes probabilities to appear noncommutative and complex. It is precisely by retaining our geometric insight that the apparent paradoxes of these subjects are most easily resolved and understood to be nothing more than logical necessities that follow from the underlying geometric structure. 135. Plato: Certainly, there are many things I could be wrong about. But, be that as it may... I see that the philosophical trends of the last century are an outright attack on rationality, replacing reason with rhetoric whose primary aim is to deconstruct Western culture.
I see that research is funded by agencies uninterested in the pursuit of knowledge except as a source of economic advantage and weapons production. I see that universities have been transformed from academies of learning into vocational schools and centers of indoctrination. These are simple observations, easily seen by anyone who looks with open eyes. Thus they cannot possibly be projections of anything taking place within my own mind. 136. ain soph, "I see" then, you have no biases and I am not blind. :) Good clarity on the subject of geometrical propensities. Good stuff. 137. Neil B: “Find someone credible who doesn't agree in the terms as framed, and I'll take it seriously.” “Would you be dismissive of Penrose?” For someone who likes to decry logical fallacies, you’re awfully fond of the argument from authority... By the way, quantum states don’t collapse. State functions collapse. Just as my observation of the outcome of a coin toss collapses my statistical description of the coin, but does not change the coin at all. Given sufficient sensitivity to initial conditions, arbitrarily small amounts of background noise are all it takes to make nominally identical experiments come out different every time, in ways that are completely unpredictable, yet conform to certain statistical regularities. Now, if you can clearly define the difference between “completely unpredictable” and “truly random” in any operationally meaningful, non-circular way, then you may have a point. Otherwise I have no more difficulty dismissing your argument than some of the ill-considered arguments made by Penrose. 138. Plato: Yup. That’s our story, and we’re stickin’ to it... 139. Ain Soph: No, I'm not all that fond of the argument from authority, if you mean that if so-and-so believes it, it must be true. Your understanding of that fallacy seems a little tinny and simpleminded, because the point is that such a person's belief doesn't mean the opinion has to be true. However, neither should a major figure's opinion be taken lightly, which is why I actually said to SC: "Would you be dismissive of Penrose?" instead of "Penrose said DI was crap, so it must be." But you do have a point, so remember: if the majority of physicists now like the DI and/or MWI, that isn't really evidence of it being valid. This statement by you is incredibly misinformed: By the way, quantum states don’t collapse. State functions collapse. Just as my observation of the outcome of a coin toss collapses my statistical description of the coin, but does not change the coin at all. Uh, you didn't realize that the wave function can't be just a description of classical-style ignorance, because parts can interfere with each other? That if I shot BBs at double slits, the pattern would be just two patches? Referring to abstractions like "statistical description" doesn't tell me what you think is "really there" in flight. Well, do you believe in pilot wave theory, or what? What is going from emitter, through both (?) slits and then a screen, etc.? Pardon my further indulgence in the widely misunderstood "fallacy of argument from authority," but all those many quantum physicists, great and common, were just wasting their wonder, worrying why we couldn't realistically model this behavior? That only a recent application of tricky doubletalk and unverifiable, bong-style notions like "splitting into infinite other worlds" somehow makes it all OK? 140.
[two of three, and then I rest awhile] Before I go into this, please don't confuse the discussion about whether math can model true randomness (it can't, whether Bee gets it or not) with the specific discussion of decoherence and randomness there. They are related but not exactly the same. Now: you are right that background noise can make certain experiments turn out differently each time (roughly, since they might be the same result!) but with a certain statistics. But what does that show? Does it show that there weren't really e.g. two different wave states involved, or that we don't have to worry what happened to the one we don't find in a given instance? No. First, the question it begs is: why are the results statistical in the first place instead of a continued superposition of amplitudes, and why are they those statistics and not some other. If you apply a collapse mechanism R to a well-ordered ensemble of cases of a WF, then you can get the nice statistics that show it must involve interference. If R acts on a disordered WF ensemble, then statistics can be generated that are like mixtures. Does that prove jack squat about how we can avoid introducing R to get those results? No. If something hadn't applied R to the WFs, there wouldn't be *a statistics* of any kind, orderly or disorderly. (On paper, that something could be a clueless decoherence advocate who applies the squared-amplitude rule to get the statistics, and who doesn't even realize he has just circularly and fallaciously introduced through the back door the very process he thinks he will "explain.") There would be just shifting amplitudes. It is the process R that produces mixture-like statistics from disordered sets of WFs, not the mixture-like statistics that explain/produce/whatever "the appearance" of R through a cutesy, backwards, semantic sleight of hand. Your point about whether such kinds of sequences could be distinguished (as if two processes that were different in principle could not produce identical results anyway, which is a FAPP conceit that does not treat the model problems) is moot; it isn't even the key issue anyway. The key issue is: why any statistics or sequences at all, from superpositions of deterministically evolving wave functions? So we don't know what R is or how it can whisk away the unobserved part of a superposition, etc. This is what a great mind like Roger Penrose "gets," and a philosophically careless, working-physics Villager like Orzel does not. I'm not sure what you get about this line of reasoning, since you didn't actually deal with my specific complaints or examples. Note my critique of MWI, per which the first BS in an MZ setup should split the worlds before the wave trains can even be brought back together again. 141. Here's another point: the logical and physical status of the density matrix, creating mixtures, and the effects of decoherence shouldn't depend on whether someone knows the secret of how it is composed. But if I produce a "mixture" of |x> and |y> sequential photons by switching a polarizer around, I know what the sequence is. Whether someone else can later confidently find that particular polarization sequence depends on whether I tell them – it isn't a consistent physical trait.
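[Editorial aside, hypothetical toy code: the "who knows the secret" point can be made with any deterministic rule whose output passes for random exactly when you don't know the rule. Whoever holds the secret predicts every symbol; whoever doesn't sees only 50/50 statistics. The function name and secret string here are illustrative inventions.]

```python
import hashlib

def polarization(n: int, secret: str = "sqrt23") -> str:
    """Deterministically pick |x> or |y> for photon n from a hidden rule."""
    digest = hashlib.sha256(f"{secret}:{n}".encode()).digest()
    return "x" if digest[0] % 2 == 0 else "y"

sequence = [polarization(n) for n in range(10_000)]
print("".join(sequence[:12]))               # the insider predicts this exactly
print(sequence.count("x") / len(sequence))  # the outsider just sees ~0.5
```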
Someone not in on the plan would have to consider the same sequence to be "random," just as if it were a sequence of diagonal polarization, CP, etc., as shown by a density matrix. But it *can't be the same*, since the informed confederate can retrieve the information that the rubes can't. So the DM can't really describe nature; it isn't a trait, as though e.g. a given photon might really "be" a DM or mixture instead of a pure state or superposition. Hence, in the MZ with decoherence that supposedly shows how the state approaches a true mixture, everything changes if someone knows what the phase changes are. That person can correct for the known phase changes, and recover perfect interference. How can the shifting patterns be real mixtures if you can do that? Oh, BTW – a "random" pattern that is known in advance (like I tell you, it's sqrt 23) "looks just like" a really random pattern that you don't or can't know, but it can make all the difference in the world, see? Finally, I said I worked up a proposal to experimentally recover some information that we'd expect to be lost by decoherence, and it seems you or the other deconauts never checked it out. It may be a rough draft, but it's there. 142. Arun: That’s an interesting paper, although it suffers greatly under the influence of “critical theory” and goes out of its way to rewrite history in terms of economic class struggle. After reading its unflattering description of German universities of the nineteenth century, one can only wonder how such reprehensible places could have given us Planck, Heisenberg, Schrödinger, Minkowski, Stückelberg, Graßmann, Helmholtz, Kirchhoff, Boltzmann, Riemann, Gauss, Einstein... Clearly, those places were doing something right. Something we’re not doing, otherwise we would be getting comparable results. But the paper refuses to acknowledge that, and studiously avoids giving the reader any reason to search for what that something might be. Some of the paper’s criticisms are not without substance, but that’s all the paper does: it criticises. And thus it makes an excellent example of the corrosive influence of historical revisionism, and of how critical theory is used to undermine Western culture. 143. Neil: You continue to make the same mistake: you still start by postulating something that is "actual," that we "know," that is "given" – when I'm telling you we actually don't know it without already making further assumptions. I really don't know what else to say. Look, take a random variable X. It exists qua definition somewhere in the MUH. It has a space of values, call them {x_i}. It doesn't matter if they're discrete or continuous. If you want a sequence, each value corresponds to a path; call it a history. All of these paths exist qua definition somewhere in the MUH because they belong to the "mathematical structure." Your "existence" is one of the x_i(t), and has a particular history. But you don't need to "generate" one particular sequence; you just have to face that what you think is "real" is but a tiny part of what the MUH assumes is "real." Besides, this is very off-topic; could we please come back to the topic of this post? Best, 144. Hi Ain Soph, Of course I’m familiar with non-Euclidean geometry, as it simply refers to geometries that exclude the fifth postulate. I’m also quite aware that GR is totally dependent upon it. The point I was attempting to make is what the difference is between the axioms of a theory and any free parameters it contains.
It could be said, for instance, that what forces non-Euclidean geometry upon GR is its postulate of covariance, which has the architecture of spacetime mandated by the matter/energy contained. However, what that (non-Euclidean) geometry is in terms of the whole universe is not determined by this postulate, but rather by the free parameter known as the cosmological constant; that is, whether it be closed, flat, or open. So my contention is that to fix this variable one needs to replace the parameter with an axiom that will, as a consequence, mandate what this should be, whether that be within the confines of GR or a theory which is to supersede it. Anyway, somehow or other I don’t believe either you or Plato understands the point I’ve attempted to make, and thus rather than just repeat what I said I’ll just leave it there. 145. Bee: could we please come back to the topic of this post? I second the motion. I'm sure we're jumping on wrong clues on all sorts of things*, and thanks for the Brugger psychology-testing stuff. Awesome. I do know something about dopamine, since a close family member was turned into a paranoid schizophrenic thanks to a single dose of LSD her so-called "friend" put in her mashed potatoes at lunchtime one day. The results were horrific; the girl went "nuts," to use the vernacular. Too much dopamine = very bad. Well, I've long felt everyone suffers to some degree some amount of mental illness. The Brugger test confirms that in my mind. *So our brains aren't perfect, yet I believe the ideal is community; in physics that means peer review, to sort out the weaknesses of one individual by contrasting their ideas with multiples of those better informed, not all of whom will agree of course, and not all of whom will be right. So consensus is important, before testing proves or disproves, or is even devised. Regarding assumptions (whether true or false), I think that is the job of a (real, not pop) philosopher, going all the way back to good ol' Aristotle and his "Logic" stuff. George Musser sums it up better than I, as so: Historically, the greatest difficulty in scientific revolutions is usually not the missing piece but the extraneous one - the assumption that we've all taken for granted but is actually unnecessary. Philosophers are trained to smoke out these mental interlopers. Many of the problems that scientists now face are simply the latest guise of deep questions that have troubled thinkers for thousands of years. Philosophers bring this depth of experience with them. Many have backgrounds in physics as well. I leave you with pure cheek: Andrew Thomas: Ooh, isn't that weird?! Your initials are carved into the CMB! I didn't see that in the oval Bee featured. I DID see "S'H", which I interpret as God confirming my SHT, or SH Theory, aka "Shit Happens" Theory. What a merry prankster that God dude is, what a joker man, putting it out there right on the CMB for all to see! Well, he DID invent the Platypus, so that's your first clue. ;-) 146. Clearly, those places were doing something right.... Well, experiments did not cost an arm and a leg and did not ever require hundreds of scientists or a satellite launch in those days. As one biographer pointed out, even up to Einstein's middle age it was possible for a person to read all the relevant literature; the exponential growth since has made it impossible. Lastly, in the areas where the constraints mentioned above don't hold we're doing fine – e.g., genetics and molecular biology, computing, etc.
It is just that you – we – do not recognize the pioneers in those fields to have such genius; that is a definite cognitive bias on our part. 147. From Columbus to Shackleton, the West had a great tradition of explorers, but now nobody is discovering new places on the Earth – must be an attack by the forces of unreason on the foundations of Western civilization. I mean, what else could it be? 148. Hi Phil, You mustn't become discouraged as to whether your point is understood or not; all the better taken in stride. However, by throwing out Euclid's fifth postulate we get theories that have meaning in wider contexts, hyperbolic geometry for example. We must simply be prepared to use labels like “line” and “parallel” with greater flexibility. The development of hyperbolic geometry taught mathematicians that postulates should be regarded as purely formal statements, and not as facts based on experience. See: Axiom. There is to me a succession (who was Unruh's teacher?) and advancement of thought about the subjects according to the environment one is predisposed to. Your angle, your bias, is the time you spent with it, greatly, before appearing on the scene here. Your comments are then taken within "this context" as I see it. Part of our communication problem has been what Ain Sof is showing. This has been my bias. Ain Sof doesn't have any. :) Why I hold to the understanding that what Howard Burton was looking for in the development of the PI institution was a "personal preference of his own" in relation to the entrance, too, is what constitutes all the science there plus this quest of his. So, as best I can understand "axiom," I wanted to move geometrical propensity toward what is "self-evident." Feynman's path integral models. Feynman based his ideas on Dirac's axiom "as matrices." I am definitely open to corrections by our better-educated peers. Here was born the idea of time in relation to the (i) when it was inserted in the matrices? How was anti-matter ascertained? Feynman's toy models then serve to illustrate? Let the wrath be sent down here to the layman's understandings. 149. Arun: You’re not suggesting we’ve mapped out physics with anywhere near the completeness with which we’ve mapped out the Earth, are you? 150. Plato: “This has been my bias. Ain Sof doesn't have any.” Now, now... What I said was that certain trends are so obvious that I’m certain I’m seeing something that is really there, and not projecting my own stuff onto the world. Of course, being certain of it is no guarantee that it's true... 151. Phil: Once again, I agree wholeheartedly with what Bell is trying to accomplish by adopting the word “beable,” but I lose more by abandoning the correct parts of my understanding of words like “observation” and “property” than I gain from the tabula rasa that comes with the word “beable.” Bell himself recognised that the introduction of the word was a two-edged sword. In his 1975 paper, “The Theory of Local Beables,” he writes: The name is deliberately modeled on “the algebra of local observables.” The terminology, be-able as against observ-able, is not designed to frighten with metaphysic those dedicated to realphysic. It is chosen rather to help in making explicit some notions already implicit in, and basic to, ordinary quantum theory.
For, in the words of Bohr, “it is decisive to recognize that, however far the phenomena transcend the scope of classical physical explanation, the account of all evidence must be expressed in classical terms.” It is the ambition of the theory of local beables to bring these “classical terms” into the mathematics, and not relegate them entirely to the surrounding talk. [emphasis in the original] Two or three paragraphs later, he adds: One of the apparent non-localities of quantum mechanics is the instantaneous, over all space, “collapse of the wave function” on “measurement.” But this does not bother us if we do not grant beable status to the wave function. We can regard it simply as a convenient but inessential mathematical device for formulating correlations between experimental procedures and experimental results, i.e., between one set of beables and another. Now, for someone who is thoroughly steeped in the orthodox view that probabilities are objectively real properties of physical systems, I suppose it can be useful to adopt the word “beable” to remind themselves that a probability isn’t one. But the real danger of introducing this term is that it tempts one to treat the concept as being relevant only in the quantum context. Thus it opens the door to a new misconception while throwing the old one out the window. So I think the preferable way to combat this kind of cognitive bias is to realize that its root lies in the widespread misapprehension of the concept of probability. And this is where I draw a parallel to my remarks about geometric insight, because we must use our insight to see precisely what must be generalized in moving from classical to quantum mechanics, and to understand how and why the generalizations are possible and when they are necessary. The key realisation is that probable inference is the generalization of deductive inference from the two-element field {0,1} to the real interval (0,1). That alone should be enough to counteract any tendency to ascribe objective reality to probabilities (i.e., treat them as beables), even in a classical context. Then we must generalize again to vector probabilities in statistical mechanics, and finally to spinor probabilities in quantum mechanics. As I remarked above: my observation of the outcome of a coin toss collapses my statistical description of the coin, but does not change the coin at all. Once we realize that this same idea still applies when probabilities are generalized from scalars to spinors, and manifests in the latter case as the collapse of the wavefunction, the “weirdness” of quantum mechanics evaporates, and along with it, the need for terms like “beable.” 152. Ain Sof, Ah!... there is hope for you then. :) Alma mater: University of Cambridge. Doctoral advisor: Paul Dirac. Doctoral students: John D. Barrow, George Ellis, Gary Gibbons, Stephen Hawking, Martin Rees, David Deutsch, Brandon Carter. 153. Hi Ain Soph, So you are basically saying that as long as the statistical description is relegated to being an instrument for calculating outcomes, rather than what embodies the mechanics (machinery) of outcome, then we don’t need anything else to keep us from making false assumptions. This to me sounds like what someone like Feynman would say, who contended that the path integral completely explained least action as what mandates what we find as outcome. I’m sorry, yet for me that just doesn’t cut it, as it assigns the probabilities as being the machinery itself.
The whole point of Bell’s ‘beable’ concept is to force us to look at exactly what hasn’t been explained physically, rather than leaving us able to ignore its existence. That’s to say that, yes, the reality of the coin is not affected by its being flipped, yet one still has to ask what constitutes the flipper, even before it is considered, as landed by observation, to be an outcome. What you are asking by analogy is to accept that a lottery drum’s outcomes are explained without describing what forces the drum to spin. The fact is, probability is reliant on action, and all action requires an actuator. So if your model has particles as being the coins, you still have to give a physical reality not just to the drum, but to what has it spun. If your model has the physicality of reality as strictly being waves, then you are in a worse situation, for although you have accounted for what represents the actuator of the spin, you are left with nothing to be spun that could be observed as an outcome. This is exactly the kind of thing that Bell was attempting to have laid bare with his inequality, as it indicated that the formalism (math) of QM mandated outcomes that required a correlated action, demonstrated in outcomes separated by space and time exceeding that of ‘c’, and yet had no mechanism within its physical description that would account for such outcomes. So yes, I would agree that the mathematics allows us to calculate outcomes, yet it isn’t then by itself able to embody the elements that have them be, and that’s why ‘beables’ should be trusted when evaluating the logical reality of models. Further, one could say that’s why we have explanations like many worlds, as attempting to give probability a physical space for Feynman’s outcomes, or Cramer’s model to find the time instead as the solution. Then again we have ‘all as being math’ contending that there is no physical embodiment of anything, only the math, which as far as I can see is what your contention would force us to consider as true. I don’t know how you see all this, yet for me there seems to be room for other, more directly physically attached explanations for what we call reality, which as you say would not force us to throw the baby – which in this case is reality – out with the bathwater, with it only representing our false assumptions and prejudices. So yes, I agree that one and one must lead to there being two, yet they both must still be allowed to exist to have such a result found as being significant to begin with. 154. While the reality of the coin is seemingly not affected by the collapse of the statistical distribution describing its position, the same cannot be said about the electron. There are no hidden variables maintaining the reality of the electron while its wave function evolves or collapses. 155. It's a shame that Bee and I continue to disagree about the issue of determinism in math vs. the apparent or actual "true randomness" of the universe. I don't even agree that it's off-topic, Bee, since it is very relevant to the core issue of whether we do and/or should project our own cognitive biases onto the universe. One of those modern biases apparently is the idea of mechanism, that outcomes should be determined by initial conditions. Well, that is how "math" works, but maybe not the universe. Perhaps Bee is thinking I'm trying to find purely internal contradictions in the MUH, but I'm not.
It can be made OK by itself, in that every possibility does "exist" Platonically, and there is no other distinction to make (like, some are "real stuff" and others aren't). That's the argument the modal realists make. In such a superspace, it is indeed true that the entire space of values of a random variable exists. It's like "all possible chess games" as an ideal. But it isn't like a device that can produce one sequence one time it is "run," another sequence in another instance, etc. It is a "field." And no, it doesn't matter what is continuous or discrete; that's beside the point of deterministically having to produce consistent outputs when actually *used*. But my point is, we don't know that the MUH is rightly framed. What we have is one world we actually see, and unless I have "some of what MWI enthusiasts are smoking" I do not "see" or know of any other worlds. In our known world, measurably "identical particles" do not have identical behavior. That is absurd in logical, deterministic terms. We do have specific and varying outcomes of experiments; that is something we know, and it does not come from assumptions. I am sure you misunderstood, since you are aware of the implications of our being able to prepare a bunch of "identical" neutrons. One might decay after 5 minutes, another after 23 minutes. If there were an identical clockwork equivalent – the same "equation" or whatever inside each neutron – then each neutron would last the same duration. I think almost everyone agrees on that much; they just can't agree on "why" they have different lifetimes. In an MUH, we'd still have to account for different histories of different particles, given the deterministic nature of math. There are ways to do that. The world lines could be like sticks cut to different lengths. In such a case there is no real causality, just various 4-D structures in a block universe "with no real flowing time." Or, each particle could have its own separate equation or math process (like sqrt 3 for one neutron, the cube of 1776 for another). But the particles could not all be identical mathematical entities, and glibly saying "random variable" *inside each one* would not work. If each neutron started with the same inner algorithm or any actual math structure or process, it would last as long as any other. That is accepted in foundations of math; prove me wrong if you dare. 156. It is possible for different math-only "worlds" to have different algorithms. But if the same one applied to all particles in that world, then every particle would have to act the same, since math is deterministic. Hence, we'd have the 5-minute-neutron world, the 23-minute-neutron world, etc. Our universe is clearly not like that, as empirically given. Hence, each of those apparently identical particles must have some peculiar nature in it that is not describable by mathematical differences. And Ain Soph is IMHO wrong to say we can't ascribe probability to a single particle. Would you deny that, if I throw down one die, it has a "1/6 chance of showing a three"? Would you, even if the landing destroyed the die after that? What other choice do we have? Yes, we find out via an ensemble. But then why does the ensemble produce those statistics unless each element has some "property" of maybe doing or not, per some concept of chance? I think we have no choice. And as for thinking the wave function is just a way of talking about chances, isn't a beable, whatever: then what do you think is the character of particles and photons in flight?
If you want to deny realness and say our phenomenal world is like The Matrix, fine, but at least you can't have it both ways. And however overlong or crabby some of the comments might be, it would be instructive to consider some of my critique of DI/DM/MWI. Sorry there is much confusion over determinism and causality here, but there just is, period! Again, this is relevant. But no more about it per se unless someone prompts with yet another critique per se! (;-) Yet note, the general point is part of the continuing sub-thread here over quantum reality, since it has to be. 157. Last note for now: Thanks, Phil, for some very cogent comments, supporting my outlook in general but in your more dignified style ;-) As for the tossed coin: REM that in a classical world, for one coin to land on its head and the other tails was a pre-determined outcome of the prior state actually being a little different in each case! The coin destined to come up heads was already tipping that way, and at an earlier time its tipping that way was from a slightly different flip of my wrist, etc. It's not about whether the observation changes the coin; it's about the whole process being rigged in advance. One could think of the whole process as like a structure in space-time, with one outcome being one entire world-bundle, and the other outcome being another world-bundle. They are genuinely different (however slightly) all the way through! But in QM, we imagine two "identical states" from which outcomes are, incredibly, different. (It is easy to forget that really is logically incredible, as Feynman noted, since we'd gotten used to it being the apparent case.) As I painstakingly explained, that is not derivable from coin-toss style reasoning. If you believe that the other outcomes really exist somewhere, it's your job to bring photos, samples, whatever – or else just be a mystic. 158. Jules Henri Poincaré (1854-1912), Mathematics and Science: Last Essays: Let rolling pebbles be left subject to chance on the side of a mountain, and they will all end by falling into the valley. If we find one of them at the foot, it will be a commonplace effect which will teach us nothing about the previous history of the pebble; A Short History of Probability: The Pascalian triangle (marble drop experiment perhaps) presented the opportunity for number systems to materialize out of such probabilities? If one assumes "all outcomes," one then believes that for every invention to exist, it only has to be discovered. These were Coxeter's thoughts as well. Yet now we move beyond Boltzmann, to entropic valuations. The Topography of Energy Resting in the Valleys then becomes a move beyond the notions of true and false and becomes a culmination of all the geometrical moves ever considered? Sorry, just had to get it out there for consideration. 159. Just consider the "gravity of the situation" :) and deterministic valuation of the photon in flight has distinctive meanings in that context? :) 160. But they have identical wave functions. 161. Phil: You conclude from my argument exactly the opposite of what I intended to show. Nothing could more clearly demonstrate the confounding effects of cognitive bias. I’m NOT saying that your cognition is biased and mine isn’t. I’m saying that a mismatch between our preconceptions leads us to ascribe opposite meaning to the same sentences – with or without beables! This results in a paradoxical state of affairs: it seems we agree, even though our attempts to express that agreement make it seem like we disagree.
Okay, so let me try again... In my view, the probabilities are anything but the machinery! They are nothing more than a succinct way of encoding my knowledge of the state and structure of the machinery. Neither my view nor Feynman’s nor Bell’s treats probabilities as beables. The wave fronts of the functions which satisfy the Schrödinger equation are nothing other than the iso-surfaces of the classical action, which satisfies the Hamilton-Jacobi equation. The apparently non-local stationary action principle is enforced by the completely local Euler-Lagrange equation. This is no more or less mysterious than the apparently non-local interference of wave functions. In the last analysis, they stem from the same root. Thus amplitudes are not real things. They are merely bookkeeping devices that record our knowledge about the space-time structure of the problem, while abstracting away much of the detail by representing its net effect as quantum phase. This is what Schrödinger was trying to tell us with his thought experiment about the cat. By the same token, probabilities are not real things. They, too, are only bookkeeping devices, which quantify our ignorance of details. A correctly assigned probability distribution is as wide as possible, given everything we know. It is therefore not surprising that our estimated probability distribution becomes suddenly much sharper when we update it with the results of a measurement. It was 1935 when Hermann showed that von Neumann’s no-hidden-variables argument was circular. It was 1946 when Cox showed that the calculus of Kolmogorovian probabilities is the only consistent way to generalize Boole’s calculus of deductive reasoning to deal with uncertainty. It was 1952 when Bohm published a completely deterministic, statistical, hidden-variables theory of quantum phenomena. It was 1957 when Jaynes showed that probabilities in statistical mechanics have no objective existence outside the mind of the observer. From 1964 to the end of his life, Bell could not disabuse people of the false notion that his theorem proved spooky action at a distance. And now, in 2010, cognitive bias still prevents the majority of physicists from connecting the dots. 162. Neil B: “prove me wrong if you dare” Ha! This is trivially easy. Each neutron in your example exists in its own unique milieu of external influences. Thus they are identical machines operating on different inputs, which therefore give different outputs. Their dependence on initial conditions is very sensitive, so there is no correlation between the moments at which different particles decay, even if they are very close together. Only the half-life survives as a statistical regularity. QED. “Would you deny, that if I throw down one die it has 1/6 chance of showing a three? ... What other choice do we have?” I claim that the state of the die will evolve as determined by its initial conditions and various influences that affect it in transit and modify its trajectory. Since I have imperfect knowledge of the initial conditions, and cannot predict the transient influences, and since I know that the final state depends very sensitively on these things, I have no rational choice but to treat the problem statistically. I will assign equal probabilities to the six faces only if I believe that the apparent symmetries of the die are real, and I will believe that only if I lack evidence to the contrary.
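[Editorial aside, hypothetical toy code: the updating described in the next sentences, in minimal form. Observations sharpen my probability assignment over the die's apparent symmetries; nothing about the die itself changes. The pseudo-count prior is an illustrative choice.]

```python
import numpy as np

# Prior: one pseudo-count per face, encoding belief in the die's symmetry.
counts = np.ones(6)

# Suppose the die keeps coming up three (face index 2):
for _ in range(20):
    counts[2] += 1

print(np.round(counts / counts.sum(), 3))
# P(three) has climbed from 1/6 to ~0.81: the *description* re-sharpened
# with each observation, while the die itself never changed.
```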
However, if I see the die come up three, over and over again, I will have no rational choice but to adjust my assignment of probabilities, which amounts to revising my estimation of the symmetries. So you see, these statements of “having no rational choice but to assign certain probabilities” are statements about me, and about the evolution of my knowledge about the die. They are not statements about the die. With each observation I make, my estimate of the probabilities changes, but the die remains the same. And nowhere in any of that did I say anything about an ensemble. No ensemble is required. If you think you need an ensemble, then you have already accepted many-worlds, whether you think you have or not. “It’s not about whether the observation changes the coin, it’s about the whole process being rigged in advance.” The belief that this is not true of quantum phenomena is one of the cognitive biases that result from the incorrect understanding of the nature of probability. See also my remarks in reply to Phil. 163. Ain Soph, I appreciate finally getting a considered response. However, your position about environmental influences on something so fundamental as particle life-expectancy is very unorthodox and, AFAIK, very unsupported by any experiments. So you are a determinist, who thinks there is some particular reason for one neutron to decay after one span, and for another to last a different span? Then we should be all the more able to do two things: 1. Make batches from different sources and environments that have varying tendencies to decay, even if we can control the environment. If there's a clockwork inside each neutron, we should be able to create batches with at least some varying time spectra, such as lumping towards a particular span, etc. But no such batches can be made, can they? Nor can we (2.) do things to the particles to stress them into later being short-lived, or long-lived, etc. That is unheard of. Most telling is that if we let a bunch of particles decay for awhile and take, say, the remaining 1%, that 1% decays from then on in the same probabilistic manner as the batch did as a whole up to that point. It is incredible for a bunch of somethings with deterministic structure to have a subset which lasts longer, but then has no further distinction after that time is up. The remaining older and older neutrons can keep being separated out, and no residual signal of a deterministic structure can be found after they've "held off" for all that time. They'd have to be like the silly old homunculus theory of human sperm, like endless Russian dolls waiting for any future contingency (look it up). It is absurd, sorry. It's looking at actual nuts and bolts, and not semantics or understanding about "probability," that best shows the point. You're right about the probability just being bookkeeping or a coding of ignorance in a classical world, but our world is probably (!) not like that. A fresh neutron should be like a die with the same face up each time, just falling straight down. (BTW, an ensemble is the set of trials or particles in one world; it does not have to mean MWI. The other copy of a particle in our world is just as good a repetition as having the ostensibly same thing happen elsewhere too.) The actual evidence supports the logically absurd idea that genuinely identical particles and states (empirically and theoretically, up to the moment the similarity is shattered by a measurement or decay event, etc.)
sometimes do one thing, sometimes another, for no imaginable reason as we understand and can model causality. Why? Because the universe is just weird. And it isn't about understanding "probability" per se, which of course does not really exist in math anyway – all the outcomes are precoded into earlier conditions etc., which means it's a matter of whether pseudo-random patterns that would seem to pass the smell test had been "put in by hand," in the Laplacian sense, by God or whatever started up the universe's clockwork. It is about understanding what our universe is like when it is involved in what we loosely call "probability," without truly understanding what that means in the real world. It is wrong to project and impose our supposed philosophical needs or prejudices upon it. BTW, I was hoping you'd look at my experiment about recovering data after decoherence. 164. Neil B: Of the two issues you raise, you are wrong about the first one and right about the second one. In both cases, the correct understanding of the issue supports my argument. Firstly, if all neutrons are identical, then we definitely should not be able to prepare batches of neutrons with differing parameters. Further, particle decay obeys Poisson statistics, which are shift invariant. Hence knowing how long a given particle has lived tells you nothing about how much longer you can expect it to survive. Secondly, there is indeed something you can do to “stress” a neutron to systematically affect its half-life: you can put it into different nuclei, or leave it free. In that way, you can vary the half-life of a neutron from about 886 seconds to infinity. A “fresh” neutron will be in some unpredictable state determined by the unknown details of the process that created it. An ensemble is not an actual set of particles or trials. People use ensemble arguments when they want to define probabilities as the limiting frequencies of occurrence in an infinite number of trials. But determining what would happen if we could perform an infinite number of trials is based on symmetry arguments of the kind I’ve already outlined. If you can do that correctly, you don’t need an ensemble. If you can’t, all the ensembles in the multiverse won’t help you. Many-worlds is an attempt to rescue limiting frequencies in cases where postulating more than one trial makes no sense. For example, what is the probability that the sun will go nova in the next five minutes? Many-worlders claim to find that probability by counting the fraction of parallel universes in which the sun actually does go nova in the next five minutes. Yeah. Right. Many-worlds is the last resort of frustrated frequentists, desperately searching for ensembles in all the wrong places. Oh, and... what experiment about recovering data after decoherence? 165. Neil: What I was thinking you were saying is that the MUH is in conflict with observation: "If actual outcomes, sample sequences which are the true 'data' from experiments, are genuinely 'random'... then... MUH is invalid." And I've tried to explain to you several times why the MUH is not in conflict with observation. Of course we don't know that the MUH is correct. It's an assumption, as I've told you several times already. All I've been saying is that it is tautologically (by assumption) not in conflict with reality. About the neutrons: Their behavior is described by a random process. All values this process can take "exist" mathematically in the same sense. You just see only one particular value.
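[Editorial aside on the shift invariance invoked in comment 164: for an exponentially distributed lifetime T with decay rate \lambda, survival is memoryless, which is why the long-surviving 1% of a batch decays exactly like a fresh batch.]

```latex
P(T > t) = e^{-\lambda t}, \qquad
P(T > s + t \mid T > s) = \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}}
                        = e^{-\lambda t} = P(T > t)
```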
This is the same thing I've been telling you several times now already. Nobody ever said you must be able to "see" all of the mathematical reality. This is one of the assumptions you have been implicitly making that I tried to point out. Incidentally, you just called my replies to you inconsiderate. Which, given the time that I have spent, I don't find very considerate myself. Best, 166. This comment has been removed by the author. 167. This comment has been removed by the author. 168. Hi Ain Soph, Perhaps, as you say, each of our biases has had us seeing disagreement in places where we don't actually disagree. There's not much more that could be said about this discussion of beables, since, as you admit, whether the concept is useful for avoiding biases really depends on your own biases. :-) Just a couple of comments as to what you said; then I think we should put this to rest, at least in terms of this blog. The first would be to say I disagree that Feynman didn't consider probabilities a beable, for he certainly did. I won't defend this other than to say that you would have to point me to something more specific that would convince me otherwise. Lastly, what Hermann demonstrated to be wrong with von Neumann's proof was not that it was a circular argument, but rather that it applied the logic of the averaged value of an ensemble to situations where it just couldn't logically be demanded to hold. As to all the back and forth comments in respect to probability, what these in the end represent comes down to whether one believes, rather than knows, that there is such an entity as the random set, beyond its being something that can only be defined mathematically by what it isn't, rather than by what it is. This reminds me of a time some years back when I was playing craps late into the evening in Atlantic City, and noticed one fellow off to the side scribbling each roll of the dice on a note pad. When it came time for me to leave the table, I asked this fellow if he believed what he was keeping track of would help him to win, with his reply being: of course, because it was all a matter of the probabilities. Then to continue, I asked had he never heard that Einstein said that God doesn't play dice, and he replied, yes I have, so what does that have to do with it. I then said that what he meant, which is of importance here, is that even God could not make randomness work to have something known to be real, so then what chance do you think you have of being able to succeed? :-) 169. Hi Ain Soph, Just one thing I forgot to add is that from what I'm able to gather you are one of those who consider the workings of reality to be those of a computer. Actually I have no difficulty with this, as long as a computer is not limited to being digital. The way I look at it, with respect to having both waves and particles as beables, this would have this computer be analogue while restricted to digital output. :-) 170. Bee, I don't know why you think I implied or said your replies were inconsiderate. When I said it's a shame we continue to disagree, I meant in the usual sense of "it's unfortunate it's that way" rather than "shame" over something bad. Or you might be confusing my use of "considered" in a reply to Ain Soph, not you, in which I said I appreciated finally getting such a reply? The word "considered" means that effective thought was put into the comment instead of just tossing off IMHO assumptions etc. It does not mean the same as "considerate", meaning caring about someone else, being polite, etc.
REM that I am cross-talking to you and A.S. about nearly the same point, since you both seem to accept determinism (or its viability) and don't seem to appreciate my point about neutrons and math structures, etc. Perhaps also you have some lingering soft spots in practical English, although your writing is in general excellent and shows correct parsing of our terms and grammar at a high level. Note that English is full of pitfalls of words and phrases that mean very different things depending on context. Note also that when two people keep debating and neither yields, then both are "stubborn" in principle. I suggest seeking third-party insight, which I predict will be a consensus in the field of foundations (not applied math) that identical math structures must produce identical results (as A.S. now seems to admit - saying it's a matter of environmental influence, about which more in due course), and that a field of possibilities is just an abstraction. Hence it cannot possibly be a way to get one identical particle to last one duration, and another one a different duration. It is not a "machinery" for producing differing results in application. That is so regardless of what kind of universe we are in or how many others there are, etc. Either we pre-ordain the behavior in the Laplacian sense, or it is inexplicably random and varying despite the identical beginning states. This is not my own idiosyncratic notion, but supported by extensive reading of historical documents in science and phil-sci, including works of the founders of QM, etc. Sure, we can't figure out "how can this be?" - it's just the breaks. In any case I'm sorry you felt put down, but you can be relieved that isn't what I meant. 171. Further possible confusion: in practical (English?) discourse, if a comment is addressed to so-and-so, then the statement "So-and-so, I appreciate finally getting a considered response..." is supposed to mean "I appreciate finally getting _____ [from you]" rather than "I appreciate getting _____ from at least someone, at all, period." I'm not being a nit-picker about trivia, I just don't want anyone to feel slighted. Ain Soph: I mean the proposed experiment I describe at my name-linked blog, in the latest post, "Decoherence interpretation falsified?" (It's a draft.) Please look it over, comment, etc. 172. Neil: Thank you for the English lesson, and I apologize for any confusion in case "inconsiderate" is not the opposite of "considered," which is what I meant. Yes, I was referring to your earlier comment addressed at Ain Soph. Your statement, using your description, implies that you think I have not "put effective thought" into my comments, which I find inappropriately dismissive. In fact, if you read through our exchange, I have repeatedly given you arguments for why your claim is faulty, which you never addressed. I am not "tossing off" assumptions, I am telling you that your logical chain is broken, and why so. It is not that I do not "appreciate" your point; I am telling you why you cannot use it to argue that MUH is in disagreement with observation. This is not a "debate," Neil, it is you attempting an argumentum ad nauseam. Finally, to put things into the right perspective, nowhere have I stated whether I "accept" determinism or not, and for the argument this is irrelevant anyway. Nevertheless, when it comes to matters of opinion, I have told you several times already that I believe in neither MUH nor MWI. I am just simply telling you that your argumentation is not waterproof. Best, 173.
Bee, I think you missed my followup to the explanation about "considered" - as I said there, I meant that Ain Soph had finally given me a "considered" [IMHO] reply, not that finally "someone" had - which would mean no one else had either! So can we finally be straight about that, since you were not meant to be included? As for argumentum ad nauseam, I note that you keep mostly repeating yourself as well, so wouldn't that apply to both of us if so? Also, I have provided some new ideas, such as the example of neutrons, moving beyond more abstract complaints. So let's forget about MUH for a while (since it involves accepting "all possible math structures", which goes beyond merely saying that this world is fully describable by math). Note also that even if a person's argument is not airtight, it can still be the most plausible one. Also, AFAIK I do have majority support (or used to?) in the sci-phil community. 174. BTW, I just got a FBF (Facebook friend) acceptance from Stefan! Thanks. The blog is good, "you guys" (another colloquialism that in English can now include any gender), overall. To other readers: Bee's FB page is cute and interesting, much more than the typical scientist's. 175. Neil: Okay, let's forget about the considerable considerations, this is silly anyway. Of course I am repeating myself, because you are not addressing my arguments. Look, I am afraid that I read your "new ideas" simply as attempts to evade a reply to my arguments. But besides this, I addressed the neutrons already above. Best, 176. Hi Bee, "I am just simply telling you that your argumentation is not waterproof." Interesting that much of this conversation ends up focused on semantics; looking at what you said to Neil reminded me that it can at times be non-trivial. That is, particularly in today's scientific climate, I'd rather have my theory be bullet proof, while being less concerned if it be water proof, as there is a significant difference between being all wet and being dead. :-) c.c. Neil Bates 177. I think we've come a long way from Spooky. :) In practice, entanglement is an extremely delicate condition. Background disturbances readily destroy the state - a bane for quantum computing in particular, because calculations are done only as long as the entanglement lasts. But now, for the first time, quantum physicist Seth Lloyd of the Massachusetts Institute of Technology suggests that memories of entanglement can survive its destruction. He compares the effect to Emily Brontë’s novel Wuthering Heights: “the spectral Catherine communicates with her quantum Heathcliff as a flash of light from beyond the grave.” The insight came when Lloyd investigated what happened if entangled photons were used for illumination. One might suppose they could help take better pictures. For instance, flash photography shines light out and creates images from photons that are reflected back from the object to be imaged, but stray photons from other objects could get mistaken for the returning signals, fuzzing up snapshots. If the flash emitted entangled photons instead, it would presumably be easier to filter out noise signals by matching up returning photons to linked counterparts kept as references. Still, given how fragile entanglement is, Lloyd did not expect quantum illumination to ever work. But “I was desperate,” he recalls, keen on winning funding from a Defense Advanced Research Projects Agency sensor program for imaging in noisy environments.
Surprisingly, when Lloyd calculated how well quantum illumination might perform, it apparently not only worked, but “to gain the full enhancement of quantum illumination, all entanglement must be destroyed,” he explains. Lloyd and his colleagues detailed a proposal for the practical implementation of quantum illumination in a paper submitted in 2008 to Physical Review Letters, building off theoretical work presented in the September 12 issue of Science. An Introduction to String Theory, a talk by Steuard Jensen, 11 Feb 2004. 178. Plato: I wonder if your piece on imaging with entangled photons is the same idea as this stunning report: Wired Magazine, Danger Room: "Air Force Demonstrates 'Ghost Imaging'", by Sharon Weinberger, June 3, 2008. Air Force-funded researchers say they’ve made a breakthrough in a process called "ghost imaging" that could someday enable satellites to take pictures through clouds. 179. Phil: You have a point about Feynman. Although, on page 37 of his 1985 book, QED, we find “... the price of this great advancement of science is a retreat by physics to the position of being able to calculate only the probability that a photon will hit a detector, without offering a good model of how it actually happens,” which draws a clear distinction between what we can calculate (probabilities) and what actually happens (beables), yet on page 82 he says “... the more you see how strangely Nature behaves, the harder it is to make a model that explains how even the simplest phenomena actually work. So theoretical physics has given up on that,” by which I think he really means that he, himself, has given up on it -- which is sad, because his path integrals build such a clear bridge between quantum phase and classical action; they are bound to play a central role in the defeat of quantum mysticism. Also, I think your analogue computer, restricted to digital output, is an excellent metaphor! At least to first order. It reminds me of Anton Zeilinger’s remark that “a photon is just a click in a photon detector.” 180. Ain Soph, I think what you call "quantum mysticism" is just what nature is like. Why must She make sense? She is not like the Queen of England, she is like Lady Gaga: "I'm a freak bitch, baby!" About neutrons: yes, in an extreme case, inside a nucleus, neutrons are stable. But in the bound state they are exchanging with other nucleons, so that is not a proper dodge regarding in-flight differences. You seem to admit that a real mechanism would mean we could make a batch of "five minute neutrons", but almost no one thinks we could. We can't even make a batch that has a bias, etc. That is absurd. The consistent Poisson distribution is "mystical"; it is absurd. The alternative would be a ridiculous Rube Goldberg world where intricate arrangements were made to program each little apparently identical particle with a mechanism that could never be exposed, never tricked into revealing the contrivance by how we grouped the particles, how we made them, by waiting them out - nothing. The universe can't do that. It's something to accept. Again, re the proposed information recovery experiment: I describe it in my blog. 181. Neil B: Tyrranogenius??!!?! Ree-hee-hee-ly... Ahem. Anyway... I took a look at your post about recovering information after decoherence, and I pretty much agree with most of what you wrote. But let’s be clear about what this really implies about the nature of probability.
This little thought experiment of yours clearly demonstrates my point: that there is nothing special or mysterious about quantum probabilities; they are nothing other than classical probabilities applied to things that have a phase. There is a somewhat analogous experiment in statistical mechanics. One puts a drop of black ink in a viscous white fluid contained in the thin annular space between two transparent, rigid cylinders. Then one turns the outer cylinder relative to the inner one, and watches as the ink dot is smeared out around the circumference, becoming an increasingly diffuse grey region until it finally disappears completely. If the rotation is continued long enough, the distribution of ink can be made arbitrarily close to uniform, both circumferentially and longitudinally. Eventually, one concludes that entropy has increased to a maximum and the information about the original location of the ink drop has been irreversibly lost. However, if one then reverses the relative rotation of the cylinders, one can watch as the ink drop is reconstituted, returning to its original state exactly when the net relative rotation of the cylinders returns to zero. This works better with more viscous fluids, but only because that makes it easier to reverse the process. The ease of demonstrating the principle depends on the viscosity, but the principle itself does not. And the principle is this: information is never lost in a real process, but it can be transformed in ways that make it prohibitively difficult to recover. Of course, “prohibitively difficult” is in the eye of the beholder. It is not a statement about the system; it is a statement about the observer. If you come along after I’ve turned the cylinders, not knowing that they were turned, and I challenge you to find the ink droplet, you will measure the distribution of ink, find it so close to uniform that you declare the difference to be statistically insignificant, and conclude that my challenge is impossible, saying the information is irretrievably lost. That is, you will say that the mixture is thermalized. But then I say, no, this is not a mixed state at all; it is an entangled state. And to prove it, I turn the cylinders backwards until the ink drop reappears. Voilà! So you see, the question is not when the information is lost, but rather at what point recovering it becomes more trouble than it's worth. And the answer depends on what you know about the history of the situation. The moral of the story is: one man’s information is another man’s noise. There is no such thing as “true randomness.” And this is the real lesson to be learned from the whole messy subject of decoherence. 182. Neil B: I say, “if all neutrons are identical, then we definitely should not be able to prepare batches of neutrons with differing parameters.” And you reply, “you seem to admit that a real mechanism would mean we could make a batch of five minute neutrons.” No wonder you think other people’s posts are not carefully considered. You don’t pay attention to what they write. 183. Ain Soph, thanks for looking at my blog and getting the point about recovering information, even if we don't agree about the significance (REM, I say we can recover the original bias of the amplitude differences, not specific events). As for the name, well, it's supposed to be cute and creative.
Neutrons: but the statements are flip sides of the same point. Right, so if they weren't identical, and were deterministic (as you seem to think they must be, and "God only knows" what Bee really thinks IMHO, but I'll leave her alone from now on), then we would be able to prepare a batch of "five minute neutrons." They would of course be a whole bunch that were the same as that portion of a normal motley crew of varying lifetimes which lasts five minutes. Of course almost no one thinks we can do that; hence neutrons are likely identical, and hence looking for a mechanism to break the mystical potential of events is likely hopeless. You need to think less one-dimensionally? 184. Hi Ain Soph, So I guess on the question of probabilities we find Feynman to have given them a physicality that just can't be justified. He also held similar notions as to the meaning of information and what that implied in terms of physical reality, as to what is to be considered physically real and what isn't. I think what separates the ways we each look at all of this is rooted in our most basic biases, which form our ontological centres. So when I say analogue rendering only digital results, I mean just that, having to attach a separate and distinct entity to each, while you seem able to have only one thing stand as being both. To me this is reminiscent of when, as a child, I would get these puzzles where one connects the dots, where after tracing between the dots something would appear as a figure, such as a boy's or girl's face, or some inanimate object. I find your way of looking at the world is just to see the dots, while the lines between are spaces having no meaning or consequence. However, for me it is to have the figure as the thing that, no matter where the dots are looked for and even when they are not found, still exists, as do the dots. Now as much as I hate to admit it, this is one bias that I fear each of us will never be able to discard, and as such, for both of us the how and the why of the world will be looked for from two distinctly different perspectives. That said, I have no complaint about you being a Feynman fan, as you have come by it honestly, for this information (digital) perspective of reality can be attributed largely to him. To quote Mehra's biography of Feynman, 'The Beat of a Different Drum', under the heading '24.2 Information as a Physical Reality' (page 530), Feynman's thoughts on this in summation read: “This example has demonstrated that the information of the system contributes to its entropy and that information is a well-defined physical quantity, which enters into conservation laws” The thing is, I have no problem with this statement, other than to echo Bell's complaint, when this all-is-information view was proposed, in asking "information about what?" With the Feynman perspective, as with his diagrams, this information represented only what the correlated assembly (the group of dots) yields, without regard for what formed the cause of the correlations - as in his diagrams, where those wavy lines in between are assigned no physicality yet are required all the same. So once again, for me it's not how it happens that physics can demonstrate so well why there can be no hidden variables, but rather how it can even consider it a good beginning to deny what is made so evident as to be deduced by reason of experiment.
This is where I find the quantum mysticism to begin, for the same reason given by Einstein to Heisenberg when he explained: "...every theory in fact contains unobservable quantities. The principle of employing only observable quantities simply cannot be consistently carried out." Anyway, despite our biases, I have to respect that you take your position seriously, as do I; yet I am convinced that no matter what the outcome, each of us would be more than grateful if an experiment could be devised which could make clear which is simply wishful thinking and which is nature's way of being. 185. Phil, REM that experimental proposal of mine that you've read (and correctly understood, as did Ain Soph). If we can recover such information about input amplitudes after the phases are scrambled - and the direct optical calculation, which has never been wrong, says we can - that is a game changer. The output from BS2 in my setup "should" be a "mixture" in QM, i.e., equivalent to whole photons randomly going out one face or the other. But if not, then the fundamentals have to be reworked and we can't use traditional density-matrix or mixture language. I'm serious as a heart attack; it's not braggadocio but a clear logical consequence. (BTW anyone, that blog name is supposed to be cute and camp, not to worry.) 186. Remark to All: Many valid arguments have been presented over the years that should be, to use Neil’s phrase, “game changers.” But they’re not. Einstein, Schrödinger, Bohm and Bell, put together, were not able to counter the irrationality that was originated by the “Copenhagen Mafia” and continues to be aggressively promoted today. As we have strikingly demonstrated in this very thread, contemporary physics is hobbled by an inability to agree on the meaning of such basic terms as “reality,” “probability,” “random” and “quantum” -- just to name a few. Thus, endless semantic quibbling has been imported into physics and drowns out any vestige of substantive debate that could lead to real progress. Willis E. Lamb, awarded the 1955 Nobel Prize in Physics for discovering the Lamb shift, states categorically in a 1995 article [Appl. Phys. B, 60(2-3):77] that “there is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists.” But there is quite some evidence to indicate that these errors are anything but accidental. In Disturbing the Memory, an unpublished manuscript written by Edwin T. Jaynes in 1984, he describes why he had to switch from doing a Ph.D. in quantum electrodynamics under J. R. Oppenheimer to doing one on group theoretic foundations of statistical mechanics under Eugene Wigner: “Mathematically, the Feynman electromagnetic propagator made no use of [QED’s] superfluous degrees of freedom; it was equally well a Green’s function for an unquantized EM field. So I wanted to reformulate electrodynamics from the ground up without using field quantization. ... If this meant standing in contradiction with the Copenhagen interpretation, so be it. ... But I sensed that Oppenheimer would never tolerate a grain of this; he would crush me like an eggshell if I dared to express a word of such subversive ideas. “Oppenheimer would never countenance any retreat from the Copenhagen position, of the kind advocated by Schrödinger and Einstein. He derived some great emotional satisfaction from just those elements of mysticism that Schrödinger and Einstein had deplored, and always wanted to make the world still more mystical, and less rational. ...
Some have seen this as a fine humanist trait. I saw it increasingly as an anomaly -- a basically anti-scientific attitude in a person posing as a scientist.” Whether or not it started out that way, in the end the truth was of no importance in all of this, as exemplified by Oppenheimer’s remark (quoted by F. David Peat, on page 133 of Infinite Potential, his 1997 biography of David Bohm). There are other stories like this. Is this evidence of an innocent cognitive bias, or something more dangerous? 187. Ain Soph, I must admit to still being confused by your position, for on the one hand you seem to agree with Lamb that there is no such thing as a photon, and on the other you have sympathy for Bohm. These turn out to be completely opposite views ontologically, which I would have thought you would have to settle first in order to move forward, if only for yourself. So I would ask: which is it to be, Lamb or Bohm? 188. Phil: Actually, I think it would be a mistake to follow either too dogmatically. I’m not sure that the two of them are as incompatible as they seem at first sight, unless one insists on treating fields and particles identically. I can see the appeal in that, but it does cause a lot of problems. To be sure, the renormalization procedures of quantum electrodynamics can be made to yield impressive numerical accuracy, but this in itself does not validate the underlying physics: Ptolemaic epicycles can be made to reproduce planetary motions with arbitrary accuracy, even though the underlying model is essentially theological. For radiation, Jaynes notes that only emission and absorption can be shown unequivocally to be quantized, and that only two coefficients are required to completely specify each field mode. Lamb gave completely classical treatments of the laser and the Mössbauer effect, showing that neither photons nor phonons are strictly required. Jaynes also showed that the arguments claiming that the Lamb shift, stimulated emission and the Casimir effect prove the physical reality of the zero-point energy are circular; they assume that these things are quantum effects at the outset. For every effect that is commonly held to prove the physical reality of the zero-point energy, one can find an alternative classical derivation from electromagnetic back-reaction. So I have yet to see a valid argument that compels me to quantize the field. In regard to Bohm, I must say that I prefer the crisp clarity of his earlier work to his later dalliance with the mysticism of the implicate order. His demonstration, together with Aharonov, that the vector potential is a real physical entity, was masterful. And his pilot wave theory proves unequivocally that a hidden-variables theory is possible. But I am not ready to commit to any detailed interpretation of the pilot wave, primarily because of the treatment of the Dirac equation given by Hestenes, who takes the zitterbewegung to be physically real. From that starting point, one can not only construct models of the electron reminiscent of the non-radiating current distributions of Barut and Zanghi, but one can also recover the full U(1) x SU(2) x SU(3) symmetry of the standard model. In short, unless your interest in physics is motivated only by the desire to build gadgets, it would be a grave error to follow David Mermin’s curt injunction to “shut up and calculate.” 189. Ain Soph, I think you are blaming the wrong agents here! It's not the fault of scientists and philosophers trying to get a handle on the odd situation of quantum mechanics.
Sure, many of them aren't doing the best they can - I rap in particular the wretched circular argument and semantic sleight of hand of advocates for decoherence as an excuse for collapse or for not seeing macro superpositions. No, the "fault" is not in (most of ;-) us but is in the stars: it's the universe just, really, being weird. It really doesn't make sense. Why should it? But yeah, maybe pilot waves can do something, but I consider it a cheesy kludge. And even if it handles "particle" trajectories (uh, I'm still trying to imagine what funny kind of nugget a "photon" would be ... polarized light in 2 DOFs? Differing coherence lengths based on formation time? Hard for even the real Ain Soph to straighten out), what about neutron and muon decay and all that? As for my proposed experiment: as I said, its significance transcends interpretation squabbles. Nor is it in the vein of previous paradoxes. It means getting more information out than was thought possible before. I say that is indeed a game changer. 190. Neil: “... the universe just, really, being weird. It really doesn’t make sense.” That’s exactly what THEY want you to think! Seriously: that has got to be the most self-defeating bullsh!t I’ve ever heard. Reality cannot be inconsistent with itself. A is A. Of course it makes sense. We just haven’t figured it out yet. Quantum mechanics is no weirder than classical mechanics. The universe may very well be non-local, but it simply cannot be acausal. If you learn nothing else from John Bell and David Bohm, learn that. 191. "Reality cannot be inconsistent with itself." It isn't, I suppose, in the circularly necessary and thus worthless sense - it's just inconsistent with what is conceptually convenient or sensible to us (or with being put into MUH). "A is A. Of course it makes sense." Randian QM? Just as unrealistic as for the human world. "Quantum mechanics is no weirder than classical mechanics." Yes, it is. Even with concepts like pilot waves, what are we going to do about electron orbitals and shells, their not radiating unless perturbed - and then the process of radiation itself, tunneling and all that (and still, neutrons and muons, etc., which almost no one accepts as being a matter of outside diddling. What about the particles that don't last long enough to be diddled?) So I guess you think everything is determined, so we have to worry about why each muon decayed at all those specific times, etc. What a clunky mess; why not let it go? My reply is: It can be whatever it wants to be. I think, horribly to the usual suspects around here, that it's here first for a "big" reason like our existence, and only second to be logically nice. That may be mysticism, but so is the idea that the second purpose is uppermost. They were bright folks but ideological imposers. I want to say Bell should know better because of entangled properties (which are not supposed to be like "Bertlmann's socks", which are preselected ordinary properties), but maybe he thought there was a clever way to set it all up. But even if you imagine a pre-related pair of photons, the experimenter has to be dragged into the conspiracy too. Bob has to be forced to set a polarizer at some convenient angle, so he and Alice can get the same result. It's not enough for the photons to be, e.g., "really at 20 degrees linear polarized", because if A & B use 35 degrees, they still get the same result as each other. Yet it can't be an inevitable result of the 15 degree difference either, since there is a pattern of yes and no - the correlation is what matters.
If pilot waves can arrange all that, they might as well just be the real entities anyway. BTW, your anonymity is your business, but if you drop a blog name etc. it might be worthwhile. PS: I've had a heck of a lot of trouble with Google tonight; are the Chinese really messing with them that much? 192. Causality and determinism are two different ideas. The world can be causal and non-deterministic. 193. This comment has been removed by the author. 194. Hi Ain Soph, Well, that was certainly a nice way of dancing around the question, and perhaps as such you feel you suffer less bias, and maybe rightfully so. In this respect I guess I'm not as fortunate as you, for I see the world as something that's always moving from becoming to being, driven there by potential. So no matter which way you care to express it, for me there must be something that stands for the source of potential and another that stands for its observed result, and both must be physical in nature for them to be considered real. The fact is nature has demonstrated itself to be biased, through things like symmetry, conservation and probability, and these biases then manifest themselves consequentially as invariance, covariance, the action principle and so on. The job of science is then, by way of observation (experiment), to discover how nature is biased and then, through the use of reason, to consider how such biases must be necessary to find things the way that they are; or in other words, why. However, if all that a scientist feels their job to be is to figure out the recipe for having things be real, without seeing it required to ask why, that is their failing and not a bias mandated by science itself. This is the bias expressed first by Newton himself, which Bohr later merely served to echo, and with which those like Descartes, Einstein and Bohm never did agree. So I find, in relation to science, this to be the only bias that holds any significance in terms of its ultimate success. - Albert Einstein, September 1944 [Born-Einstein Letters] 195. Arun said: "Causality and determinism are two different ideas. The world can be causal and non-deterministic." Very true. If event A happens, then either event B or C might happen. In which case event B or C would be caused by event A, but it would still be non-deterministic. Ain Soph: "Quantum mechanics is no weirder than classical mechanics." Well, it seems pretty weird to me! Do you have access to some information the rest of us don't have? 196. Andrew - good distinction about causality v. determinism. That's basically what I meant when disagreeing with Ain Soph, forgetting the difference. Hence, we can't IMHO explain the specifics of the outcomes. But in common use, "causality" is made to be about the timing itself, so people say "the decay was not caused to be at that specific time by some pre-existing process, or law" (the "law", such as it is, applies only to the probability being X). You would likely have an interest in my proposal to recover apparently lost amplitude information. It's couched in terms of disproving that decoherence solves macro collapse problems, but there is no need to agree with me about that particular angle. Getting the scrambled info back is significant in any case, and the expectation that it couldn't be is orthodox and not a school debate. I've gotten some interest from, e.g., blogger quantummoxie, but indeed I need a diagram! 197.
Neil: Reality may look strange when you can’t take the speed of light to be infinite, or neglect the quantum of action, or treat your densities as delta functions, and even stranger in the face of all three. But that’s not the same as not making sense. A is A... these days, you hear it in the form, “it is what it is.” To deny it is to deny reason. But you just blow it off with a non sequitur. I guess that’s what you have to do if you want to believe that reality makes no sense. In your reply to Andrew, you are back to pretending there is a difference between “completely unpredictable” and “truly random.” Again, I guess you have to, otherwise you can’t cling to the idea that reality makes no sense. By the way, thanks for the expression of interest, but I don’t have a blog.
How to think about Quantum Mechanics—Part 8: The quantum-classical limit as music [Other parts in this series: 1,2,3,4,5,6,7,8.] On microscopic scales, sound is air pressure f(t) fluctuating in time t. Taking the Fourier transform of f(t) gives the frequency distribution \hat{f}(\omega), but in an eternal way, applying to the entire time interval t\in [-\infty,\infty]. Yet on macroscopic scales, sound is described as having a frequency distribution as a function of time, i.e., a note has both a pitch and a duration. There are many formalisms for describing this (e.g., wavelets), but a well-known limitation is that the frequency \omega of a note is only well-defined up to an uncertainty that is inversely proportional to its duration \Delta t. At the mathematical level, a given wavefunction \psi(x) is almost exactly analogous: macroscopically a particle seems to have a well-defined position and momentum, but microscopically there is only the wavefunction \psi. The mapping of the analogy is \{t,\omega,f\} \to \{x,p,\psi\}. (I am of course not the first to emphasize this analogy. For instance, while writing this post I found “Uncertainty principles in Fourier analysis” by de Bruijn, via Folland’s book, who calls the Wigner function of an audio signal f(t) the “musical score” of f.) Wavefunctions can of course be complex, but we can restrict ourselves to a real-valued wavefunction without any trouble; we are not worrying about the dynamics of wavefunctions, so you can pretend the Hamiltonian vanishes if you like. In order to get the acoustic analog of Planck’s constant \hbar, it helps to imagine going back to a time when the pitch of a note was measured with a unit that did not have a known connection to absolute frequency, i.e.,… [continue reading] How to think about Quantum Mechanics—Part 7: Quantum chaos and linear evolution [Other parts in this series: 1,2,3,4,5,6,7,8.] You’re taking a vacation to Granada to enjoy a Spanish ski resort in the Sierra Nevada mountains. But as your plane is coming in for a landing, you look out the window and realize the airport is on a small tropical island. Confused, you ask the flight attendant what’s wrong. “Oh”, she says, looking at your ticket, “you’re trying to get to Granada, but you’re on the plane to Grenada in the Caribbean Sea.” A wave of distress comes over your face, but she reassures you: “Don’t worry, Granada isn’t that far from here. The Hamming distance is only 1!”. After you’ve recovered from that side-splitting humor, let’s dissect the frog. What’s the basis of the joke? The flight attendant is conflating two different metrics: the geographic distance and the Hamming distance. The distances are completely distinct, as two named locations can be very nearby in one and very far apart in the other. Now let’s hear another joke from renowned physicist Chris Jarzynski: The linear Schrödinger equation, however, does not give rise to the sort of nonlinear, chaotic dynamics responsible for ergodicity and mixing in classical many-body systems. This suggests that new concepts are needed to understand thermalization in isolated quantum systems. – C. Jarzynski, “Diverse phenomena, common themes” [PDF] Ha! Get it? This joke is so good it’s been told by S. Wimberger: “Since quantum mechanics is the more fundamental theory we can ask ourselves if there is chaotic motion in quantum systems as well.”… [continue reading] How to think about Quantum Mechanics—Part 1: Measurements are about bases [This post was originally “Part 0”, but it’s been moved.
Other parts in this series: 1,2,3,4,5,6,7,8.] In an ideal world, the formalism that you use to describe a physical system is in a one-to-one correspondence with the physically distinct configurations of the system. But sometimes it can be useful to introduce additional descriptions, in which case it is very important to understand the unphysical over-counting (e.g., gauge freedom). A scalar potential V(x) is a very convenient way of representing the vector force field, F(x) = -\partial V(x), but any constant shift in the potential, V(x) \to V(x) + V_0, yields forces and dynamics that are indistinguishable, and hence the value of the potential on an absolute scale is unphysical. One often hears that a quantum experiment measures an observable, but this is wrong, or very misleading, because it vastly over-counts the physically distinct sorts of measurements that are possible. It is much more precise to say that a given apparatus, with a given setting, simultaneously measures all observables with the same eigenvectors. More compactly, an apparatus measures an orthogonal basis – not an observable. (We can also allow for the measured observable to be degenerate, in which case the apparatus simultaneously measures all observables with the same degenerate eigenspaces. To be abstract, you could say it measures a commuting subalgebra, with the nondegenerate case corresponding to the subalgebra having maximum dimensionality, i.e., the same number of dimensions as the Hilbert space. Commuting subalgebras with maximum dimension are in one-to-one correspondence with orthonormal bases, modulo multiplying the vectors by pure phases.) You can probably start to see this by just noting that there’s no actual, physical difference between measuring X and X^3; the apparatuses that would perform the two measurements are identical.… [continue reading] How to think about Quantum Mechanics—Part 6: Energy conservation and wavefunction branches [Other parts in this series: 1,2,3,4,5,6,7,8.] In discussions of the many-worlds interpretation (MWI) and the process of wavefunction branching, folks sometimes ask whether the branching process conflicts with conservation laws like the conservation of energy. (Here are some related questions from around the web, not addressing branching or MWI; none of them get answered particularly well.) There are actually two completely different objections that people sometimes make, which have to be addressed separately. First possible objection: “If the universe splits into two branches, doesn’t the total amount of energy have to double?” This is the question Frank Wilczek appears to be addressing at the end of these notes. I think this question can only be asked by someone who believes that many worlds is an interpretation that is just like Copenhagen (including, in particular, the idea that measurement events are different from normal unitary evolution) except that it simply declares that new worlds are created following measurements. But this is a misunderstanding of many worlds. MWI dispenses with collapse or any sort of departure from unitary evolution. The wavefunction just evolves along, maintaining its energy distributions, and energy doesn’t double when you mathematically identify a decomposition of the wavefunction into two orthogonal components.
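To make that last point concrete, here is a minimal numerical sketch (a toy two-level example added for illustration; it is not from the post itself): the energy expectation of a state is unchanged when the state is merely rewritten as a sum of orthogonal branches.

```python
import numpy as np

# Toy two-level system (all numbers invented for illustration).
H = np.diag([0.0, 1.0])          # energies of the two eigenstates

psi = np.array([0.6, 0.8])       # normalized superposition, |psi|^2 = 1

# Identify two orthogonal "branches" such that psi = branch1 + branch2.
# Here the branches are the (unnormalized) energy-eigenstate components,
# so the cross terms <branch1|H|branch2> vanish.
branch1 = np.array([0.6, 0.0])
branch2 = np.array([0.0, 0.8])

E_total = psi @ H @ psi                                # <psi|H|psi> = 0.64
E_sum = branch1 @ H @ branch1 + branch2 @ H @ branch2

# Decomposing the wavefunction does not change the energy expectation:
assert np.isclose(E_total, E_sum)
```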
Second possible objection: “If the universe starts out with some finite spread in energy, what happens if it then ‘branches’ into multiple worlds, some of which overlap with energy eigenstates outside that energy spread?” Or, another phrasing: “What happens if the basis in which the universe decoheres doesn’t commute with the energy basis? Is it then possible to create energy, at least in some branches?”… [continue reading] How to think about Quantum Mechanics—Part 5: Superpositions and entanglement are relative concepts [Other parts in this series: 1,2,3,4,5,6,7,8.] People often talk about “creating entanglement” or “creating a superposition” in the laboratory, and quite rightly think about superpositions and entanglement as resources for things like quantum-enhanced measurements and quantum computing. However, it’s often not made explicit that a superposition is only defined relative to a particular preferred basis for a Hilbert space. A superposition \vert \psi \rangle = \vert 1 \rangle + \vert 2 \rangle is implicitly a superposition relative to the preferred basis \{\vert 1 \rangle, \vert 2 \rangle\}. Schrödinger’s cat is a superposition relative to the preferred basis \{\vert \mathrm{Alive} \rangle, \vert \mathrm{Dead} \rangle\}. Without there being something special about these bases, the state \vert \psi \rangle is no more or less a superposition than \vert 1 \rangle and \vert 2 \rangle individually. Indeed, for a spin-1/2 system there is a mapping between bases for the Hilbert space and vector directions in real space (as well illustrated by the Bloch sphere); unless one specifies a preferred direction in real space to break rotational symmetry, there is no useful sense of putting that spin in a superposition. Likewise, entanglement is only defined relative to a particular tensor decomposition of the Hilbert space into subsystems, \mathcal{H} = \mathcal{A} \otimes \mathcal{B}. For any given (possibly mixed) state of \mathcal{H}, it’s always possible to write down an alternate decomposition \mathcal{H} = \mathcal{X} \otimes \mathcal{Y} relative to which the state has no entanglement. So where do these preferred bases and subsystem structure come from? Why is it so useful to talk about these things as resources when their very existence seems to be dependent on our mathematical formalism? Generally it is because these preferred structures are determined by certain aspects of the dynamics out in the real world (as encoded in the Hamiltonian) that make certain physical operations possible and others completely infeasible.… [continue reading] How to think about Quantum Mechanics—Part 4: Quantum indeterminism as an anomaly [Other parts in this series: 1,2,3,4,5,6,7,8.] I am firmly of the view…that all the sciences are compatible and that detailed links can be, and are being, forged between them. But of course the links are subtle… a mathematical aspect of theory reduction that I regard as central, but which cannot be captured by the purely verbal arguments commonly employed in philosophical discussions of reduction. My contention here will be that many difficulties associated with reduction arise because they involve singular limits…. What nonclassical phenomena emerge as \hbar \to 0?
This sounds like nonsense, and indeed if the limit were not singular the answer would be: no such phenomena. – Michael Berry One of the great crimes against humanity occurs each year in introductory quantum mechanics courses when students are introduced to an \hbar \to 0 limit, sometimes decorated with words involving “the correspondence principle”. The problem isn’t with the content per se, but with the suggestion that this somehow gives a satisfying answer to why quantum mechanics looks like classical mechanics on large scales. Sometimes this limit takes the form of a path integral, where the transition probability for a particle to move from position x_1 to x_2 in a time T is (1) \begin{align*} P_{x_1 \to x_2} &= \langle x_1 \vert e^{-i H T} \vert x_2 \rangle \\ &\propto \int_{x_1,x_2} \mathcal{D}[x(t)] e^{-i S[x(t),x'(t)]/\hbar} = \int_{x_1,x_2} \mathcal{D}[x(t)] e^{-i \int_0^T \mathrm{d}t L(x(t),x'(t))/\hbar} \end{align*} where \int_{x_1,x_2} \mathcal{D}[x(t)] is the integral over all paths from x_1 to x_2, and S[x(t),x'(t)]= \int_0^T \mathrm{d}t L(x(t),x'(t)) is the action for that path (L being the Lagrangian corresponding to the Hamiltonian H). As \hbar \to 0, the exponent containing the action spins wildly and averages to zero for all paths not in the immediate vicinity of the classical path that makes the action stationary. Other times this takes the form of Ehrenfest’s theorem, which shows that the expectation values of functions of position and momentum follow the classical equations of motion.… [continue reading] How to think about Quantum Mechanics—Part 2: Vacuum fluctuations [Other parts in this series: 1,2,3,4,5,6,7,8.] Although it is possible to use the term “vacuum fluctuations” in a consistent manner, referring to well-defined phenomena, people are usually way too sloppy. Most physicists never think clearly about quantum measurements, so the term is widely misunderstood and should be avoided if possible. Maybe the most dangerous result of this is the confident, unexplained use of this term by experienced physicists talking to students; it has the awful effect of giving these students the impression that their inevitable confusion is normal and not indicative of deep misunderstanding. (“Professor, where do the wiggles in the cosmic microwave background come from?” “Quantum fluctuations.” “Oh, um… OK.” Yudkowsky has usefully called this a “curiosity-stopper”, although I’m sure there’s another term for this used by philosophers of science.) Here is everything you need to know: 1. A measurement is specified by a basis, not by an observable. (If you demand to think in terms of observables, just replace “measurement basis” with “eigenbasis of the measured observable” in everything that follows.) 2. Real-life processes amplify microscopic phenomena to macroscopic scales all the time, thereby effectively performing a quantum measurement. (This includes inducing the implied wave-function collapse.) These do not need to involve a physicist in a lab, but the basis being measured must be an orthogonal one. (W. H. Zurek, Phys. Rev. A 76, 052110 (2007). [arXiv:quant-ph/0703160]) 3. “Quantum fluctuations” are when any measurement (whether involving a human or not) is made in a basis which doesn’t commute with the initial state of the system. 4. A “vacuum fluctuation” is when the ground state of a system is measured in a basis that does not include the ground state; it’s merely a special case of a quantum fluctuation. [continue reading]
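The fourth point above can be illustrated with a deliberately tiny toy model (an added sketch, not from the post, in which a qubit with H = σ_z stands in for the system): the ground state is stationary, yet measuring it in the x basis, which does not include the ground state, yields irreducibly random outcomes.

```python
import numpy as np

# Toy model (illustration only): a qubit with H = sigma_z.
H = np.array([[1.0, 0.0],
              [0.0, -1.0]])
evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]                 # eigenvector of the lowest eigenvalue

# The x basis {|+>, |->} does not contain the ground state.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# Born-rule probabilities for an x-basis measurement of the ground state:
p_plus = abs(plus @ ground) ** 2     # 0.5
p_minus = abs(minus @ ground) ** 2   # 0.5
print(p_plus, p_minus)               # random +/- outcomes from a stationary state
```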
Dr. Ahmed G. Abo-Khalil, Electrical Engineering Department. Magnetic vector potential. The magnetic vector potential A is a vector field defined together with the electric potential (a scalar field) ϕ by the equations: \mathbf{B} = \nabla \times \mathbf{A}, \quad \mathbf{E} = -\nabla \phi - \dfrac{\partial \mathbf{A}}{\partial t}, where B is the magnetic field and E is the electric field. In magnetostatics, where there is no time-varying charge distribution, only the first equation is needed. (In the context of electrodynamics, the terms "vector potential" and "scalar potential" are used for "magnetic vector potential" and "electric potential", respectively. In mathematics, vector potential and scalar potential have more general meanings.) Defining the electric and magnetic fields from potentials automatically satisfies two of Maxwell's equations: Gauss's law for magnetism and Faraday's law. For example, if A is continuous and well-defined everywhere, then it is guaranteed not to result in magnetic monopoles. (In the mathematical theory of magnetic monopoles, A is allowed to be either undefined or multiple-valued in some places; see magnetic monopole for details.) Starting with the above definitions: \nabla \cdot \mathbf{B} = \nabla \cdot (\nabla \times \mathbf{A}) = 0, \qquad \nabla \times \mathbf{E} = \nabla \times \left( -\nabla \phi - \dfrac{\partial \mathbf{A}}{\partial t} \right) = -\dfrac{\partial}{\partial t}(\nabla \times \mathbf{A}) = -\dfrac{\partial \mathbf{B}}{\partial t}. Alternatively, the existence of A and φ is guaranteed from these two laws by Helmholtz's theorem. For example, since the magnetic field is divergence-free (Gauss's law for magnetism), i.e. ∇ • B = 0, an A satisfying the above definition always exists. The vector potential A is used when studying the Lagrangian in classical mechanics and in quantum mechanics (see the Schrödinger equation for charged particles, the Dirac equation, and the Aharonov-Bohm effect). In the SI system, the units of A are V·s·m−1 and are the same as those of momentum per unit charge. Although the magnetic field B is a pseudovector (also called an axial vector), the vector potential A is a polar vector. This means that if the right-hand rule for cross products were replaced with a left-hand rule, but without changing any other equations or definitions, then B would switch signs, but A would not change. This is an example of a general theorem: the curl of a polar vector is a pseudovector, and vice versa.
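The two identities derived above can also be checked symbolically. The following sketch (an added illustration assuming SymPy's vector module; it is not part of the original page) verifies that the potential definitions automatically satisfy Gauss's law for magnetism and Faraday's law for arbitrary smooth A and φ:

```python
from sympy import symbols, Function
from sympy.vector import CoordSys3D, curl, divergence, gradient

R = CoordSys3D('R')
t = symbols('t')

# Arbitrary smooth potentials, as functions of position and time.
Ax = Function('A_x')(R.x, R.y, R.z, t)
Ay = Function('A_y')(R.x, R.y, R.z, t)
Az = Function('A_z')(R.x, R.y, R.z, t)
phi = Function('phi')(R.x, R.y, R.z, t)

A = Ax*R.i + Ay*R.j + Az*R.k
B = curl(A)                      # B = curl A
E = -gradient(phi) - A.diff(t)   # E = -grad(phi) - dA/dt

print(divergence(B).simplify())          # -> 0 (no magnetic monopoles)
print((curl(E) + B.diff(t)).simplify())  # -> zero vector (Faraday's law)
```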
LOG#070. Natural Units. Happy New Year 2013 to everyone and everywhere! Let me apologize, first of all, for my absence… I have been busy, trying to find my path and way in my field, and I am still busy, but finally I could not resist a new blog boost… After all, you should know that I have enough material to write many new things. So, what’s next? I will dedicate some blog posts to discussing a nice topic I began before, talking about a classic paper on the subject here: The topic is going to be pretty simple: natural units in Physics. First of all, let me point out that the choice of any system of units is, a priori, totally conventional. You are free to choose any kind of units for physical magnitudes. Of course, that is not very clever if you have to report data so that everyone can realize what you did and reported. Scientists have some definitions and popular systems of units that make the process much simpler than in daily life. Then, we need some general conventions about “units”. Indeed, the traditional wisdom is to use the international system of units, or SI (abbreviated from the French: Le Système international d’unités). There, you can find seven fundamental magnitudes and seven fundamental (or “natural”) units: 1) Space: \left[ L\right]=\mbox{meter}=m 2) Time: \left[ T\right]=\mbox{second}=s 3) Mass: \left[ M\right]=\mbox{kilogram}=kg 4) Temperature: \left[ t\right]=\mbox{Kelvin degree}= K 5) Electric intensity: \left[ I\right]=\mbox{ampere}=A 6) Luminous intensity: \left[ I_L\right]=\mbox{candela}=cd 7) Amount of substance: \left[ n\right]=\mbox{mole}=mol(e) The dependence between these 7 great units, and even their definitions, can be found at http://en.wikipedia.org/wiki/International_System_of_Units and references therein. That wikipedia article also includes a beautiful graph showing the “interdependence” of the 7 wonderful units. In Physics, when you build a radically new theory, it generally has the power to introduce a relevant scale or system of units. In particular, the Special Theory of Relativity and Quantum Mechanics are such theories. General Relativity and Statistical Physics (Statistical Mechanics) also have intrinsic “universal constants”; or, to be more precise, they allow the introduction of “more convenient” systems of units than those you have ever heard of (metric system, SI, MKS, cgs, …). When I spoke about Barrow units (see previous comment above) in this blog, we realized that dimensionality (both mathematical and “physical”) and fundamental theories are bound to the choice of some “simpler” units. Those “simpler” units are what we usually call “natural units”. I am not a big fan of such terminology. It is a little bit confusing. Maybe it would be more interesting and appropriate to call them “adapted X units” or “scaled X units”, where X denotes “relativistic, quantum, …”. Anyway, the name “natural” is popular and it is likely impossible to change the habit. In fact, we have to distinguish several “kinds” of natural units. First of all, let me list the “fundamental and universal” constants of the different theories accepted at the current time: 1. Boltzmann constant: k_B. Essential in Statistical Mechanics, both classical and quantum. It measures “entropy”/“information”. The fundamental equation is:     \[ \boxed{S=k_B\ln \Omega}\] It provides a link between the microphysics and the macrophysics (it is the code behind the equation above).
It can be understood somehow as a measure of the “energetic content” of an individual particle or state at a given temperature. Common values for this constant are:     \[ k_B=1.3806488(13)\times 10^{-23}J/K = 8.6173324(78)\times 10^{-5}eV/K\]     \[ k_B=1.3806488(13)\times 10^{-16}erg/K\] Statistical Physics states that there is a minimum unit of entropy or a minimal value of energy at any given temperature. Physical dimensions of this constant are thus entropy, or, since E=TS, \left[ k_B\right] =E/t=J/K, where t here denotes the dimension of temperature. 2. Speed of light.  c. From classical electromagnetism:     \[ \boxed{c=\dfrac{1}{\sqrt{\varepsilon_0\mu_0}}}\] The speed of light, according to the postulates of special relativity, is a universal constant. It is frame INDEPENDENT. This fact is at the root of many of the surprising results of special relativity, and it took time to be understood. Moreover, it also connects space and time in a powerful unified formalism, so space and time merge into spacetime, as we do know and have studied long ago in this blog. The spacetime interval in a D=3+1 dimensional space, for two arbitrary events, reads:     \[ \Delta s^2=c^2\Delta t^2-\Delta x^2-\Delta y^2-\Delta z^2\] In fact, you can observe that “c” is the conversion factor between time-like and space-like coordinates.  How big is the speed of light? Well, it is a relatively large number from our common and ordinary perception. It is exactly:     \[ \boxed{c=299,792,458m/s}\] although you often take it as c\approx 3\cdot 10^{8}m/s=3\cdot 10^{10}cm/s.  However, it is the speed of electromagnetic waves in vacuum, no matter where you are in this Universe/Polyverse. At least, experiments are consistent with such a statement. Moreover, it shows that c is also the conversion factor between energy and momentum, since     \[ \mathbf{P}^2c^2-E^2=-m^2c^4\] and c^2 is the conversion factor between rest mass and pure energy, because, as everybody knows,  E=mc^2! According to the special theory of relativity, normal matter can never exceed the speed of light. Therefore, the speed of light is the maximum velocity in Nature, at least if special relativity holds. Physical dimensions of c are \left[c\right]=LT^{-1}, where L denotes length dimension and T denotes time dimension (please don’t confuse it with temperature, despite the same capital letter for both symbols). 3. Planck’s constant. h, or generally its rationalized version \hbar=h/2\pi. Planck’s constant (or its rationalized version) is the fundamental universal constant in Quantum Physics (Quantum Mechanics, Quantum Field Theory). It gives     \[ \boxed{E=h\nu=\hbar \omega}\] Indeed, quanta are the minimal units of energy. That is, you cannot further divide a quantum of light, since it is indivisible by definition!
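As a quick numerical illustration of E = hν (an added example; the wavelength is hypothetical, while the constants are those quoted in this post), a 532 nm quantum of green light carries about 2.3 eV:

```python
# Values as quoted in this post (SI units).
h = 6.62606957e-34    # J*s
c = 299792458.0       # m/s
eV = 1.602176565e-19  # J

lam = 532e-9                    # a (hypothetical) green laser line, 532 nm
E = h * c / lam                 # E = h*nu = h*c/lambda
print(E, "J =", E / eV, "eV")   # ~3.73e-19 J, i.e. ~2.33 eV per quantum
```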
3. Planck's constant, h, or its rationalized version \hbar=h/2\pi. Planck's constant is the fundamental universal constant of Quantum Physics (Quantum Mechanics, Quantum Field Theory). It gives

    \[ \boxed{E=h\nu=\hbar \omega}\]

Indeed, quanta are the minimal units of energy. That is, you cannot divide a quantum of light any further, since it is indivisible by definition! Furthermore, the de Broglie relationship relates momentum and wavelength for any particle, and it emerges from the combination of special relativity and the quantum hypothesis:

    \[ \lambda=\dfrac{h}{p}\leftrightarrow \bar{\lambda}=\dfrac{\hbar}{p}\]

In the case of massive particles, it yields

    \[ \lambda=\dfrac{h}{Mv}\leftrightarrow \bar{\lambda}=\dfrac{\hbar}{Mv}\]

In the case of massless particles (photons, gluons, gravitons, …)

    \[ \lambda=\dfrac{hc}{E}\]
    \[ \bar{\lambda}=\dfrac{\hbar c}{E}\]

Planck's constant is also essential to the Heisenberg uncertainty principle:

    \[ \boxed{\Delta x \Delta p\geq \hbar/2}\]
    \[\boxed{\Delta E \Delta t\geq \hbar/2}\]

and, more generally, for any two observables A and B,

    \[ \boxed{\Delta A\Delta B\geq \dfrac{1}{2}\vert\langle\left[ A,B\right]\rangle\vert}\]

Some particularly important values of this constant are:

    \[ h=6.62606957(29)\times 10^{-34} J\cdot s\]
    \[ h=4.135667516(91)\times 10^{-15}eV\cdot s\]
    \[ h=6.62606957(29)\times 10^{-27} erg\cdot s\]
    \[ \hbar =1.054571726(47)\times 10^{-34} J\cdot s\]
    \[ \hbar =6.58211928(15)\times 10^{-16} eV\cdot s\]
    \[ \hbar= 1.054571726(47)\times 10^{-27}erg\cdot s\]

It is also useful to know that

    \[ hc=1.98644568\times 10^{-25}J\cdot m\]
    \[ hc=1.23984193 eV\cdot \mu m\]
    \[ \hbar c=0.1591549hc\]
    \[ \hbar c=197.327 eV\cdot nm\]

The Planck constant has dimensions of \mbox{Energy}\times \mbox{Time}=\mbox{position}\times \mbox{momentum}=ML^2T^{-1}. The physical dimensions of this constant also coincide with those of angular momentum (spin), i.e., with L=mvr.

4. Gravitational constant, G_N. Apparently, it is not like the others, but it can also define a particular scale when combined with Special Relativity. Without entering into further details (since I have not discussed General Relativity yet in this blog), we can compute the radius at which the escape velocity of a body equals the speed of light: \dfrac{1}{2}mv^2-G_N\dfrac{Mm}{R}=0 with v=c implies a new length scale at which relativistic gravitational effects appear, the so-called Schwarzschild radius R_S:

    \[ \boxed{R_S=\dfrac{2G_NM}{c^2}=\dfrac{2G_NM_{\odot}}{c^2}\left(\dfrac{M}{M_{\odot}}\right)\approx 2.95\left(\dfrac{M}{M_{\odot}}\right)km}\]

5. Electric fundamental charge, e. The electric charge of the positron (the positively charged "electron") is generally chosen as the fundamental charge. Its value is:

    \[ e=1.602176565(35)\times 10^{-19}C\]

where C denotes Coulomb. Of course, if you know about quarks, with a fraction of this charge, you could ask why we prefer this one. Really, it is only a question of the history of Science, since electrons (and positrons) were discovered first. Quarks, with one third or two thirds of this amount of elementary charge, were discovered later, but you could define the fundamental unit of charge as a multiple or an integer fraction of this charge. Moreover, as far as we know, electrons are "elementary"/"fundamental" entities, so we can use this charge as the unit and define quark charges in terms of it too. Electric charge is not a fundamental unit in the SI system of units; charge flow, or electric current, is.

An amazing property of the above 5 constants is that they are "universal". And, for instance, energy is related to other magnitudes in the theories where the above constants appear in a really wonderful and unified manner:

    \[ \boxed{E=N\dfrac{k_BT}{2}=Mc^2=TS=Pc=N\dfrac{h\nu}{2}=N\dfrac{\hbar \omega}{2}=\dfrac{R_Sc^4}{2G_N}=\hbar c k=\dfrac{hc}{\lambda}}\]

Caution: k here is not the Boltzmann constant but the wave number.
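To see the boxed relations at work, here is a minimal Python sketch (mine, not part of any standard package) that converts a photon wavelength into an energy using E=hc/\lambda and the handy value hc=1239.84\; eV\cdot nm quoted above:

```python
# Photon energy from wavelength, via E = hc/lambda (a toy illustration).
HC_EV_NM = 1239.84193    # h*c in eV*nm
EV_IN_J = 1.602176565e-19  # 1 eV in joules

def photon_energy_eV(wavelength_nm):
    """E = hc / lambda, returned in eV."""
    return HC_EV_NM / wavelength_nm

E = photon_energy_eV(500.0)  # green light, 500 nm
print(f"{E:.3f} eV = {E * EV_IN_J:.3e} J")  # ~2.480 eV ~ 3.97e-19 J
```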
There is a sixth "fundamental" constant related to electromagnetism, but it is related to the speed of light, the electric charge and the Planck constant in a very subtle way. Let me introduce it too…

6. Coulomb constant, k_C. This is a second constant related to classical electromagnetism, like the speed of light in vacuum. Coulomb's constant, the electric force constant or the electrostatic constant (denoted k_C), is a proportionality factor appearing in the equation for the electric force between point charges, and indirectly it also appears (depending on your system of units) in expressions for the electric fields of charge distributions. Coulomb's law reads

    \[ F_C=k_C\dfrac{Qq}{r^2}\]

Its experimental value is

    \[ k_C=\dfrac{1}{4\pi \varepsilon_0}=\dfrac{c^2\mu_0}{4\pi}=c^2\cdot 10^{-7}H\cdot m^{-1}= 8.9875517873681764\cdot 10^9 Nm^2/C^2\]

Generally, the Coulomb constant is dropped, and it is usually preferred to express everything using the electric permittivity of vacuum \varepsilon_0 and/or numerical factors of \pi, if you choose the Gaussian system of units (read this wikipedia article http://en.wikipedia.org/wiki/Gaussian_system_of_units ), the CGS system, or some hybrid units based on them.

H.E.P. units

High Energy Physicists usually employ units in which velocities are measured in fractions of the speed of light in vacuum, and actions/angular momenta in multiples of the Planck constant. These conditions are equivalent to setting

    \[ \boxed{c=1_c=1}\]
    \[ \boxed{\hbar=1_\hbar=1}\]

Complementarily, or not, depending on your tastes and preferences, you can also set the Boltzmann constant to unity,

    \[ k_B=1_{k_B}=1\]

and thus the complete HEP system is defined by setting

    \[ \boxed{c=\hbar=k_B=1}\]

This "natural" system of units still lacks an energy scale. The electron-volt (eV) is then generally added as an auxiliary quantity defining the reference energy scale, despite the fact that it is not a "natural unit" in the proper sense, because it is defined by a natural property (the electric charge) together with an anthropogenic unit of electric potential (the volt). SI-prefixed multiples of the eV are used as well: keV, MeV, GeV, etc. Here, the eV is used as the reference energy quantity, and with the above choice of "elementary/natural units" (or any other auxiliary unit of energy), any quantity can be expressed. For example, a distance of 1 m can be expressed in terms of eV, in natural units, as

    \[ 1m=\dfrac{1m}{\hbar c}\approx 5.07\cdot 10^{6}\; eV^{-1}\]

This system of units has remarkable conversion factors:

A) 1 eV^{-1} of length is equal to 1.97\cdot 10^{-7}m =(1\text{eV}^{-1})\hbar c
B) 1 eV of mass is equal to 1.78\cdot 10^{-36}kg=1\times \dfrac{eV}{c^2}
C) 1 eV^{-1} of time is equal to 6.58\cdot 10^{-16}s=(1\text{eV}^{-1})\hbar
D) 1 eV of temperature is equal to 1.16\cdot 10^4K=1eV/k_B
E) 1 unit of electric charge in the Lorentz-Heaviside system of units is equal to 5.29\cdot 10^{-19}C=e/\sqrt{4\pi\alpha}
F) 1 unit of electric charge in the Gaussian system of units is equal to 1.88\cdot 10^{-18}C=e/\sqrt{\alpha}

This system of units, therefore, leaves free only the energy scale (generally chosen to be the electron-volt) and the measure of the fundamental electric charge. Every other unit can be related to energy/charge. It is truly remarkable that by doing this (making the above three constants invisible) you can "unify" different magnitudes, since these conventions make them equivalent.
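If you want to play with these conversion factors yourself, a tiny Python helper like the following does the job (a sketch of my own; the constant names are mine, the values are CODATA-era):

```python
# A small natural-units converter for the HEP system (c = hbar = k_B = 1),
# mirroring the conversion factors A)-D) above.
HBAR_C_EV_M = 197.3269718e-9   # hbar*c in eV*m
HBAR_EV_S   = 6.58211928e-16   # hbar in eV*s
EV_KG       = 1.782661845e-36  # 1 eV/c^2 in kg
EV_K        = 1.1604519e4      # 1 eV/k_B in kelvin

def length_to_inverse_eV(meters):
    """x [m] -> x/(hbar*c) [eV^-1]."""
    return meters / HBAR_C_EV_M

def time_to_inverse_eV(seconds):
    """t [s] -> t/hbar [eV^-1]."""
    return seconds / HBAR_EV_S

print(length_to_inverse_eV(1.0))  # 1 m ~ 5.07e6 eV^-1
print(time_to_inverse_eV(1.0))    # 1 s ~ 1.52e15 eV^-1
print(1.0 * EV_KG, 1.0 * EV_K)    # 1 eV as a mass (kg) and a temperature (K)
```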
For instance, with natural units:

1) Length=Time=1/Energy=1/Mass. This is due to the equations x=ct, E=Mc^2 and E=hc/\lambda. Setting c and h (or \hbar) to unity provides x=t, E=M and E=1/\lambda. Note that natural units make invisible the constants we set to unity! That is the key of the procedure. It simplifies equations and expressions. Of course, you must be careful when you reintroduce the constants!

2) Energy=Mass=Momentum=Temperature. This is due to E=k_BT, E=Pc and E=Mc^2 again.

One extra bonus for theoretical physicists is that natural units allow one to build and write proper lagrangians and hamiltonians (certain mathematical operators containing the dynamics of the system encoded in them), or equivalently the action functional, with only the energy or "mass" dimension as a "free parameter". Let me show how it works. Natural units in HEP identify the length and time dimensions. Thus \left[L\right]=\left[T\right]. Planck's constant allows us to identify those 2 dimensions with reciprocal-energy physical dimensions. Therefore, in HEP units, we have

    \[ \left[L\right]=\left[T\right]=\left[E\right]^{-1}\]

The speed of light identifies energy and mass, and thus we often hear about the "mass dimension" of a lagrangian in the following sense. HEP units can be thought of as defining "everything" in terms of energy, on purely dimensional grounds. That is, every physical dimension is (in HEP units) defined by a power of energy:

    \[ \boxed{\left[E\right]^n}\]

Thus, we can refer to any magnitude simply by quoting the power n of such a physical dimension (you can think logarithmically if that makes it easier). With this convention, and recalling that the energy dimension is the mass dimension, we have that

    \[ \left[L\right]=\left[T\right]=-1\]
    \[ \left[E\right]=\left[M\right]=1\]

Using these arguments, the action functional is a pure dimensionless quantity, and thus, in D=4 spacetime dimensions, lagrangian densities must have dimension 4 (or dimension D in a general D-dimensional spacetime).

    \[ \displaystyle{S=\int d^4x \mathcal{L}\rightarrow \left[\mathcal{L}\right]=4}\]
    \[ \displaystyle{S=\int d^Dx \mathcal{L}\rightarrow \left[\mathcal{L}\right]=D}\]

In D=4 spacetime dimensions, it can easily be shown that

    \[ \left[\partial_\mu\right]=\left[\Phi\right]=\left[A^\mu\right]=1\]
    \[ \left[\Psi_D\right]=\left[\Psi_M\right]=\left[\chi\right]=\left[\eta\right]=\dfrac{3}{2}\]

where \Phi is a scalar field, A^\mu is a vector field (like the electromagnetic or non-abelian vector gauge fields), \Psi_D is a Dirac spinor, \Psi_M is a Majorana spinor, and \chi, \eta are Weyl spinors (of different chiralities). Supersymmetry (or SUSY) allows for anticommuting c-numbers (or Grassmann numbers) and forces us to introduce auxiliary parameters with mass dimension -1/2. They are the so-called SUSY transformation parameters \zeta_{SUSY}=\epsilon. There are some speculative spinors called ELKO fields that could be non-standard spinor fields with mass dimension one! But that is an advanced topic I am not going to discuss here today. In general D spacetime dimensions, a scalar (or vector) field has mass dimension (D-2)/2, and a spinor/fermionic field in D dimensions generally has mass dimension (D-1)/2 (excepting the auxiliary SUSY Grassmannian parameters and the exotic idea of ELKO fields). This dimensional analysis is very useful when theoretical physicists build up interacting lagrangians, since we can guess the structure of the interactions by applying purely dimensional arguments to every possible operator entering the action/lagrangian density!
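The counting rules above are mechanical enough to be coded. Here is a toy Python function (my own illustration; the field labels are mine) that returns the canonical mass dimension of a free field in D spacetime dimensions:

```python
# Canonical mass-dimension counting in HEP units (a toy sketch).
from fractions import Fraction

def mass_dimension(field: str, D: int = 4) -> Fraction:
    """Canonical mass dimension of a free field in D spacetime dimensions."""
    if field in ("scalar", "vector"):   # [Phi] = [A_mu] = (D-2)/2
        return Fraction(D - 2, 2)
    if field == "spinor":               # [Psi] = (D-1)/2
        return Fraction(D - 1, 2)
    if field == "derivative":           # [d_mu] = 1 in any D
        return Fraction(1)
    raise ValueError(f"unknown field type: {field}")

# Example: the quartic term Phi^4 in D=4 has dimension 4*[Phi] = 4 = D,
# so its coupling is dimensionless.
print(mass_dimension("scalar", 4))  # 1
print(mass_dimension("spinor", 4))  # 3/2
print(mass_dimension("scalar", 6))  # 2 -> Phi^3 theory is renormalizable in D=6
```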
In summary, therefore, for any D:

    \[ \boxed{\left[\Phi\right]=\left[A_\mu\right]=\dfrac{D-2}{2}\equiv E^{\frac{D-2}{2}}=M^{\frac{D-2}{2}}}\]
    \[ \boxed{\left[\Psi\right]=\dfrac{D-1}{2}\equiv E^{\frac{D-1}{2}}=M^{\frac{D-1}{2}}}\]

Remark (for QFT experts only): Don't confuse the mass dimension with the number of transverse polarizations or "degrees of freedom" of a particular field, i.e., "components" minus "gauge constraints". E.g.: a gauge vector field has D-2 degrees of freedom in D dimensions. They are different concepts (although both are closely related to the spacetime dimension where the field "lives").

In summary:

i) HEP units are based on QM (Quantum Mechanics), SR (Special Relativity) and Statistical Mechanics (Entropy and Thermodynamics).
ii) HEP units need the introduction of a free energy scale, which generally drives us to use the eV or electron-volt as the auxiliary energy scale.
iii) HEP units are useful for the dimensional analysis of lagrangians (and hamiltonians) in terms of the "mass dimension".

Stoney Units

In Physics, the Stoney units form an alternative set of natural units named after the Irish physicist George Johnstone Stoney, who first introduced them as we know them today in 1881. However, he had presented the idea before that date, in 1874, in a lecture entitled "On the Physical Units of Nature" delivered to the British Association. They are, in some sense, the first historical example of natural units and of a "unification scale". Stoney units are rarely used for calculations in modern physics, but they are of historical interest, and some people, like Wilczek, have written about them (see, e.g., http://arxiv.org/abs/0708.4361). These units of measurement were designed so that certain fundamental physical constants are taken as the reference basis without the Planck scale being explicit, quite a remarkable fact! The set of constants that Stoney used as base units is the following:

A) Electric charge, e=1_e.
B) Speed of light in vacuum, c=1_c.
C) Gravitational constant, G_N=1_{G_N}.
D) The reciprocal of the Coulomb constant, 1/k_C=4\pi \varepsilon_0=1_{k_C^{-1}}=1_{4\pi \varepsilon_0}.

Stoney units are built when you set these four constants to unity, i.e., equivalently, the Stoney System of Units (S) is determined by the assignments:

    \[ \boxed{e=c=G_N=4\pi\varepsilon_0=1}\]

Interestingly, in this system of units the Planck constant is not equal to unity and it is not "fundamental" (Wilczek remarked on this fact here), but:

    \[ \hbar=\dfrac{1}{\alpha}\approx 137.035999679\]

Today, Planck units are more popular than Stoney units in modern physics, and there are even many physicists who don't know about the Stoney units! In fact, Stoney was one of the first scientists to understand that electric charge was quantized! From this quantization he deduced the units that are now named after him. The Stoney length and the Stoney energy are collectively called the Stoney scale, and they are not far from the Planck length and the Planck energy, the Planck scale. The Stoney scale and the Planck scale are the length and energy scales at which quantum processes and gravity occur together. At these scales, a unified theory of physics is thus likely required. The only notable attempt to construct such a theory from the Stoney scale was that of H. Weyl, who associated a gravitational unit of charge with the Stoney length and who appears to have inspired Dirac's fascination with the large number hypothesis.
Since then, the Stoney scale has been largely neglected in the development of modern physics, although it is occasionally discussed to this day. Wilczek likes to point out that, in Stoney units, QM would be an emergent phenomenon/theory, since the Planck constant would not be present directly but only as a combination of different constants. On the other hand, the Planck scale is valid for all known interactions and does not give prominence to the electromagnetic interaction, as the Stoney scale does. That is, in Stoney units both gravitation and electromagnetism are on an equal footing, unlike in Planck units, where only the speed of light is used and there is no further connection to electromagnetism, at least not in the clean way of the Stoney units. Be aware: sometimes, though rarely, Planck units are referred to as Planck-Stoney units.

What are the most interesting Stoney system values? Here are the most remarkable results:

1) Stoney Length, L_S.

    \[ \boxed{L_S=\sqrt{\dfrac{G_Ne^2}{(4\pi\varepsilon_0)c^4}}\approx 1.38\cdot 10^{-36}m}\]

2) Stoney Mass, M_S.

    \[ \boxed{M_S=\sqrt{\dfrac{e^2}{G_N(4\pi\varepsilon_0)}}\approx 1.86\cdot 10^{-9}kg}\]

3) Stoney Energy, E_S.

    \[ \boxed{E_S=M_Sc^2=\sqrt{\dfrac{e^2c^4}{G_N(4\pi\varepsilon_0)}}\approx 1.67\cdot 10^8 J=1.04\cdot 10^{18}GeV}\]

4) Stoney Time, t_S.

    \[ \boxed{t_S=\sqrt{\dfrac{G_Ne^2}{c^6(4\pi\varepsilon_0)}}\approx 4.61\cdot 10^{-45}s}\]

5) Stoney Charge, Q_S.

    \[ \boxed{Q_S=e\approx 1.60\cdot 10^{-19}C}\]

6) Stoney Temperature, T_S.

    \[ \boxed{T_S=E_S/k_B=\sqrt{\dfrac{e^2c^4}{G_Nk_B^2(4\pi\varepsilon_0)}}\approx 1.21\cdot 10^{31}K}\]
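As a numerical cross-check of the Stoney scale, here is a small Python sketch of my own (constant values are CODATA-era approximations):

```python
# Computing the Stoney scale from SI constants, plus the amusing fact
# that hbar = 1/alpha in Stoney units.
import math

G    = 6.674e-11              # m^3 kg^-1 s^-2
c    = 299_792_458.0          # m/s
e    = 1.602176565e-19        # C
k_C  = 8.9875517873681764e9   # 1/(4 pi eps0), N m^2 C^-2
hbar = 1.054571726e-34        # J s

L_S = math.sqrt(G * k_C * e**2 / c**4)  # Stoney length
M_S = math.sqrt(k_C * e**2 / G)         # Stoney mass
t_S = L_S / c                           # Stoney time

alpha = k_C * e**2 / (hbar * c)         # fine-structure constant
print(f"L_S = {L_S:.3e} m")             # ~1.38e-36 m
print(f"M_S = {M_S:.3e} kg")            # ~1.86e-9 kg
print(f"t_S = {t_S:.3e} s")             # ~4.61e-45 s
print(f"hbar in Stoney units = 1/alpha = {1/alpha:.3f}")  # ~137.04
```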
Planck Units

The reference constants of this natural system of units (generally denoted by P) are the following 4 constants:

1) Gravitational constant, G_N.
2) Speed of light, c.
3) Planck constant, or the rationalized Planck constant, \hbar.
4) Boltzmann constant, k_B.

The Planck units are obtained when you set these 4 constants to unity, i.e.,

    \[ \boxed{G_N=c=\hbar=k_B=1}\]

It is often said that Planck units are a system of natural units that is not defined in terms of properties of any prototype, physical object, or even features of any fundamental particle. They refer only to the basic structure of the laws of physics: c and G are part of the structure of classical spacetime in the relativistic theory of gravitation, also known as general relativity, and ℏ captures the relationship between energy and frequency which is at the foundation of elementary quantum mechanics. This is the reason why Planck units are particularly useful and common in theories of quantum gravity, including string theory or loop quantum gravity. This system defines some limit magnitudes, as follows:

1) Planck Length, L_P.

    \[ \boxed{L_P=\sqrt{\dfrac{G_N\hbar}{c^3}}\approx 1.616\cdot 10^{-35}m}\]

2) Planck Time, t_P.

    \[ \boxed{t_P=L_P/c=\sqrt{\dfrac{G_N\hbar}{c^5}}\approx 5.391\cdot 10^{-44}s}\]

3) Planck Mass, M_P.

    \[ \boxed{M_P=\sqrt{\dfrac{\hbar c}{G_N}}\approx 2.176\cdot 10^{-8}kg}\]

4) Planck Energy, E_P.

    \[ \boxed{E_P=M_Pc^2=\sqrt{\dfrac{\hbar c^5}{G_N}}\approx 1.96\cdot 10^9J=1.22\cdot 10^{19}GeV}\]

5) Planck charge, Q_P. In Lorentz-Heaviside electromagnetic units

    \[ \boxed{Q_P=\sqrt{\hbar c \varepsilon_0}=\dfrac{e}{\sqrt{4\pi\alpha}}\approx 5.291\cdot 10^{-19}C}\]

In Gaussian electromagnetic units

    \[ \boxed{Q_P=\sqrt{\hbar c (4\pi\varepsilon_0)}=\dfrac{e}{\sqrt{\alpha}}\approx 1.876\cdot 10^{-18}C}\]

6) Planck temperature, T_P.

    \[ \boxed{T_P=E_P/k_B=\sqrt{\dfrac{\hbar c^5}{G_Nk_B^2}}\approx 1.417\cdot 10^{32}K}\]

From these "fundamental" magnitudes we can build many derived quantities in the Planck system:

1) Planck area.
    \[ A_P=L_P^2=\dfrac{\hbar G_N}{c^3}\approx 2.612\cdot 10^{-70}m^2\]

2) Planck volume.
    \[ V_P=L_P^3=\left(\dfrac{\hbar G_N}{c^3}\right)^{3/2}\approx 4.22\cdot 10^{-105}m^3\]

3) Planck momentum.
    \[ P_P=M_Pc=\sqrt{\dfrac{\hbar c^3}{G_N}}\approx 6.52485\; kg\, m/s\]
A relatively "small" momentum!

4) Planck force.
    \[ F_P=E_P/L_P=\dfrac{c^4}{G_N }\approx 1.21\cdot 10^{44}N\]
It is independent of the Planck constant! Moreover, the Planck acceleration is
    \[ a_P=F_P/M_P=\sqrt{\dfrac{c^7}{G_N\hbar}}\approx 5.561\cdot 10^{51}m/s^2\]

5) Planck power.
    \[ \mathcal{P}_P=\dfrac{c^5}{G_N}\approx 3.628\cdot 10^{52}W\]

6) Planck density.
    \[ \rho_P=\dfrac{c^5}{\hbar G_N^2}\approx 5.155\cdot 10^{96}kg/m^3\]
The Planck energy density would be equal to
    \[ \rho_P c^2=\dfrac{c^7}{\hbar G_N^2}\approx 4.6331\cdot 10^{113}J/m^3\]

7) Planck angular frequency.
    \[ \omega_P=\sqrt{\dfrac{c^5}{\hbar G_N}}\approx 1.85487\cdot 10^{43}Hz\]

8) Planck pressure.
    \[ p_P=\dfrac{F_P}{A_P}=\dfrac{c^7}{G_N^2\hbar}=\rho_P c^2\approx 4.6331\cdot 10^{113}Pa\]
Note that the Planck pressure IS the Planck energy density!

9) Planck current.
    \[ I_P=Q_P/t_P=\sqrt{\dfrac{4\pi\varepsilon_0 c^6}{G_N}}\approx 3.4789\cdot 10^{25}A\]

10) Planck voltage.
    \[ v_P=E_P/Q_P=\sqrt{\dfrac{c^4}{4\pi\varepsilon_0 G_N}}\approx 1.04295\cdot 10^{27}V\]

11) Planck impedance.
    \[ Z_P=v_P/I_P=\dfrac{\hbar}{Q_P^2}=\dfrac{1}{4\pi \varepsilon_0 c}\approx 29.979\Omega\]
A relatively small impedance!

12) Planck capacitance.
    \[ C_P=Q_P/v_P=4\pi\varepsilon_0\sqrt{\dfrac{\hbar G_N}{ c^3}} \approx 1.798\cdot 10^{-45}F\]
Interestingly, it depends on the gravitational constant!

Some Planck units are suitable for measuring quantities that are familiar from daily experience. In particular:

1 Planck mass is about 22 micrograms.
1 Planck momentum is about 6.5 kg m/s.
1 Planck energy is about 500 kWh.
1 Planck charge is about 11 elementary (electronic) charges.
1 Planck impedance is almost 30 ohms.

Moreover:

i) A speed of 1 Planck length per Planck time is the speed of light, the maximum possible speed in special relativity.
ii) To understand the Planck era and "before" (if that makes sense), supposing QM still holds there, we need a quantum theory of gravity. There is no such theory right now. Therefore, we have to wait to see whether these ideas are right or not.
iii) It is believed that at the Planck temperature the symmetry of the Universe was "perfect", in the sense that the four fundamental forces were "unified" somehow. We have only some vague notions of how that theory of everything (TOE) would work.

The physical dimensions of the known Universe in terms of Planck units are "dramatic":

i) The age of the Universe is about t_U=8.0\cdot 10^{60} t_P.
ii) The diameter of the observable Universe is about d_U=5.4\cdot 10^{61}L_P.
iii) The current temperature of the Universe is about 1.9 \cdot 10^{-32}T_P.
iv) The observed cosmological constant is about 5.6\cdot 10^{-122}t_P^{-2}.
v) The mass of the Universe is about 10^{60}M_P.
vi) The Hubble constant is 71\, km/s/Mpc\approx 1.23\cdot 10^{-61}t_P^{-1}.
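And the analogous sketch for the Planck scale (again my own toy code, not a standard library):

```python
# Deriving a few Planck-scale quantities from (G, c, hbar, k_B).
import math

G, c = 6.674e-11, 299_792_458.0
hbar, k_B = 1.054571726e-34, 1.3806488e-23

L_P = math.sqrt(hbar * G / c**3)  # Planck length, m
t_P = L_P / c                     # Planck time, s
M_P = math.sqrt(hbar * c / G)     # Planck mass, kg
E_P = M_P * c**2                  # Planck energy, J
T_P = E_P / k_B                   # Planck temperature, K
F_P = c**4 / G                    # Planck force, N (no hbar in it!)

for name, val in [("L_P", L_P), ("t_P", t_P), ("M_P", M_P),
                  ("E_P", E_P), ("T_P", T_P), ("F_P", F_P)]:
    print(f"{name} = {val:.4e}")

# Sanity check: one Planck length per Planck time is the speed of light.
assert abs(L_P / t_P - c) < 1e-6 * c
```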
Schrödinger Units

The Schrödinger units do not obviously contain the constant c, the speed of light in vacuum. However, the speed of light is hidden inside the permittivity of free space [i.e., the electric constant or vacuum permittivity], since the vacuum permittivity is the reciprocal of the magnetic constant times the speed of light squared, \varepsilon_0=1/(\mu_0c^2). So even though the speed of light does not appear explicitly in the Schrödinger units, it is buried within their terms and therefore influences their numerical values.

The Schrödinger units are based on the following constants:

A) Gravitational constant, G_N.
B) Planck constant, \hbar.
C) Boltzmann constant, k_B.
D) Coulomb constant, or equivalently the electric permittivity of free space/vacuum, k_C=1/4\pi\varepsilon_0.
E) The electric charge of the positron, e.

In this system (denoted \psi) we have

    \[\boxed{G_N=\hbar =k_B =k_C =1}\]

1) Schrödinger length, L_\psi.
    \[ L_\psi=\sqrt{\dfrac{\hbar^4 G_N(4\pi\varepsilon_0)^3}{e^6}}\approx 2.593\cdot 10^{-32}m\]

2) Schrödinger time, t_\psi.
    \[ t_\psi=\sqrt{\dfrac{\hbar^6 G_N(4\pi\varepsilon_0)^5}{e^{10}}}\approx 1.185\cdot 10^{-38}s\]

3) Schrödinger mass, M_\psi.
    \[ M_\psi=\sqrt{\dfrac{e^2}{G_N(4\pi\varepsilon_0)}}\approx 1.859\cdot 10^{-9}kg\]

4) Schrödinger energy, E_\psi.
    \[ E_\psi=\sqrt{\dfrac{e^{10}}{\hbar^4(4\pi\varepsilon_0)^5G_N}}\approx 8890 J=5.55\cdot 10^{13}GeV\]

5) Schrödinger charge, Q_\psi.
    \[ Q_\psi =e=1.602\cdot 10^{-19}C\]

6) Schrödinger temperature, T_\psi.
    \[ T_\psi=E_\psi/k_B=\sqrt{\dfrac{e^{10}}{\hbar^4(4\pi\varepsilon_0)^5G_Nk_B^2}}\approx 6.445\cdot 10^{26}K\]

Atomic Units

There are two alternative, closely related systems of atomic units:

1) Hartree atomic units:
    \[ \boxed{e=m_e=\hbar=k_B=1}\]
    \[ \boxed{c=\alpha^{-1}}\]

2) Rydberg atomic units:
    \[ \boxed{\dfrac{e}{\sqrt{2}}=2m_e=\hbar=k_B=1}\]
    \[ \boxed{c=2\alpha^{-1}}\]

Here, m_e is the electron mass and \alpha is the electromagnetic fine structure constant. These units are designed to simplify atomic and molecular physics and chemistry, especially the quantities related to the hydrogen atom, and they are widely used in these fields. The Hartree units were first proposed by Douglas Hartree, and they are more common than the Rydberg units. The units are adapted to characterize the behavior of an electron in the ground state of a hydrogen atom. For example, using the Hartree convention, in the Bohr model of the hydrogen atom an electron in the ground state has orbital velocity = 1, orbital radius = 1, angular momentum = 1, ionization energy equal to 1/2, and so on.

Some quantities in the Hartree system of units are:

1) Atomic length (also called the Bohr radius):
    \[ L_A=a_0=\dfrac{\hbar^2 (4\pi\varepsilon_0)}{m_ee^2}\approx 5.292\cdot 10^{-11}m=0.5292\AA\]

2) Atomic time:
    \[ t_A=\dfrac{\hbar^3(4\pi\varepsilon_0)^2}{m_ee^4}\approx 2.419\cdot 10^{-17}s\]

3) Atomic mass:
    \[ M_A=m_e\approx 9.109\cdot 10^{-31}kg\]

4) Atomic energy (note that it is \alpha^2m_ec^2, not m_ec^2):
    \[ E_A=\alpha^2m_ec^2=\dfrac{m_ee^4}{\hbar^2(4\pi\varepsilon_0)^2} \approx 4.36\cdot 10^{ -18}J=27.2eV=2\times(13.6)eV=2Ry\]

5) Atomic electric charge:
    \[ Q_A=q_e=e\approx 1.602\cdot 10^{-19}C\]

6) Atomic temperature:
    \[ T_A=E_A/k_B=\dfrac{m_ee^4}{\hbar^2(4\pi\varepsilon_0)^2k_B}\approx 3.158\cdot 10^5K\]

The fundamental unit of energy is called the Hartree energy in the Hartree system and the Rydberg energy in the Rydberg system. They differ by a factor of 2.
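A quick numerical check of the Hartree values (my own sketch, using CODATA-era constants):

```python
# Bohr radius and Hartree energy from SI constants.
hbar = 1.054571726e-34       # J s
m_e  = 9.10938291e-31        # kg
e    = 1.602176565e-19       # C
k_C  = 8.9875517873681764e9  # 1/(4 pi eps0), N m^2 C^-2

a_0 = hbar**2 / (m_e * k_C * e**2)     # Bohr radius
E_h = m_e * (k_C * e**2)**2 / hbar**2  # Hartree energy = 2 Rydberg

print(f"a_0 = {a_0:.4e} m")                     # ~5.292e-11 m
print(f"E_h = {E_h:.4e} J = {E_h / e:.2f} eV")  # ~4.36e-18 J ~ 27.2 eV
```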
The speed of light is relatively large in atomic units (137 in Hartree or 274 in Rydberg), which comes from the fact that an electron in hydrogen tends to move much more slowly than the speed of light. The gravitational constant is extremely small in atomic units (about 10^{-45}), which comes from the fact that the gravitational force between two electrons is far weaker than the Coulomb force. The unit of length, L_A, is the well-known Bohr radius, a_0. The values of c and e shown above imply that e=\sqrt{\alpha \hbar c}, as in Gaussian units, not Lorentz-Heaviside units. However, hybrids of the Gaussian and Lorentz-Heaviside units are sometimes used, leading to inconsistent conventions for magnetism-related units. Be aware of these issues!

QCD Units

In the framework of Quantum Chromodynamics, the quantum field theory (QFT) we know as QCD, we can define the QCD system of units based on:

1) QCD length, L_{QCD}.
    \[ L_{QCD}=\dfrac{\hbar}{m_pc}\approx 2.103\cdot 10^{-16}m\]
where m_p is the proton mass (please, don't confuse it with the Planck mass M_P).

2) QCD time, t_{QCD}.
    \[ t_{QCD}=\dfrac{\hbar}{m_pc^2}\approx 7.015\cdot 10^{-25}s\]

3) QCD mass, M_{QCD}.
    \[ M_{QCD}=m_p\approx 1.673\cdot 10^{-27}kg\]

4) QCD energy, E_{QCD}.
    \[ E_{QCD}=M_{QCD}c^2=m_pc^2\approx 1.504\cdot 10^{-10}J=938.3MeV=0.9383GeV\]
Thus, the QCD energy unit is about 1 GeV!

5) QCD temperature, T_{QCD}.
    \[ T_{QCD}=E_{QCD}/k_B=\dfrac{m_pc^2}{k_B}\approx 1.089\cdot 10^{13}K\]

6) QCD charge, Q_{QCD}. In Lorentz-Heaviside units:
    \[ Q_{QCD}=\dfrac{1}{\sqrt{4\pi\alpha}}e\approx 5.292\cdot 10^{-19}C\]
In Gaussian units:
    \[ Q_{QCD}=\dfrac{1}{\sqrt{\alpha}}e\approx 1.876\cdot 10^{-18}C\]

Geometrized Units

The geometrized unit system, used in general relativity, is not a completely defined system. In this system, the base physical units are chosen so that the speed of light and the gravitational constant are set equal to unity:

    \[ \boxed{G_N=c=1}\]

Other units may be treated however desired; the remaining constants are set to unity according to your needs and tastes. By normalizing appropriate other units, geometrized units become identical to Planck units.

Conversion Factors

The following conversion factors (based on the corresponding Wikipedia table) are very useful. In them:

i) \alpha is the fine-structure constant, approximately 0.007297.
ii) \alpha_G=\dfrac{m_e^2}{M_P^2}\approx 1.752\cdot 10^{-45} is the gravitational fine-structure constant.

Some conversion factors for geometrized units are:

Conversion from kg, s, C, K into m: G_N/c^2 [m/kg], c [m/s], \sqrt{G_N/(4\pi\varepsilon_0)}/c^2 [m/C], G_Nk_B/c^4 [m/K].
Conversion from m, s, C, K into kg: c^2/G_N [kg/m], c^3/G_N [kg/s], 1/\sqrt{G_N4\pi\varepsilon_0} [kg/C].
Conversion from m, kg, C, K into s: 1/c [s/m], \sqrt{G_N/(4\pi\varepsilon_0)}/c^3 [s/C], G_Nk_B/c^5 [s/K].
Conversion from m, kg, s, K into C: (G_N4\pi\varepsilon_0)^{1/2} [C/kg], k_B\sqrt{G_N4\pi\varepsilon_0}/c^2 [C/K].
Conversion from m, kg, s, C into K: c^2/k_B [K/kg], c^5/(G_Nk_B) [K/s], c^2/(k_B\sqrt{G_N4\pi\varepsilon_0}) [K/C].
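As a worked example of these geometrized conversion factors, here is a short Python sketch (mine) converting a solar mass into meters and seconds, which neatly reproduces the Schwarzschild radius formula from earlier in this post:

```python
# Geometrized units (G = c = 1): a solar mass in meters and seconds.
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 299_792_458.0   # m/s
M_sun = 1.989e30    # kg

M_geom_m = G * M_sun / c**2  # mass in meters, factor G/c^2 [m/kg]
M_geom_s = G * M_sun / c**3  # mass in seconds, factor G/c^3 [s/kg]

print(f"M_sun = {M_geom_m:.1f} m")  # ~1477 m, i.e. ~1.48 km
print(f"M_sun = {M_geom_s:.2e} s")  # ~4.9e-6 s

# The Schwarzschild radius of the Sun is then simply 2*M in these units:
print(f"R_S(sun) = {2 * M_geom_m / 1000:.2f} km")  # ~2.95 km
```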
Advantages and Disadvantages of Natural Units

Natural units have some advantages ("Pros"):

1) Equations and mathematical expressions are simpler in natural units.
2) Natural units allow for the matching of apparently different physical magnitudes.
3) Some natural units are independent of "prototypes" or "external patterns", beyond some clever and trivial conventions.
4) They can help to unify different physical concepts.

However, natural units also have some disadvantages ("Cons"):

1) They generally provide less precise measurements or quantities.
2) They can be ill-defined/redundant and carry some ambiguity. This is partly caused by the fact that some natural units differ by numerical factors of \pi and/or pure numbers, so they cannot help us to understand the origin of some pure numbers (dimensionless prefactors) in general.

Moreover, you must not forget that natural units are "human" in the sense that you can adapt them to your own needs, and indeed, you can create your own particular system of natural units! That said, you can understand the key point: it is the fundamental theories that ultimately hint at which "numbers"/"magnitudes" determine a system of "natural units".

Remark: the smart designer of a system of natural units must choose a few of these constants to normalize (set equal to 1). It is not possible to normalize just any set of constants. For example, the mass of the proton and the mass of the electron cannot both be normalized: if the mass of the electron is defined to be 1, then the mass of the proton has to be \approx 6\pi^5\approx 1836. In a less trivial example, the fine-structure constant, \alpha\approx 1/137, cannot be set to 1, because it is a dimensionless number. The fine-structure constant is related to other fundamental constants through a well-known equation:

    \[\alpha=\dfrac{k_Ce^2}{\hbar c}\]

where k_C is the Coulomb constant, e is the positron electric charge (elementary charge), ℏ is the reduced Planck constant, and c is again the speed of light in vacuum. Thus, in a consistent theory it is not possible to simultaneously normalize all four of the constants c, ℏ, e, and k_C.
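You can verify both dimensionless numbers mentioned in this remark with a few lines of Python (my own sketch; values are CODATA-era):

```python
# The fine-structure constant, and the amusing 6*pi^5 coincidence.
import math

k_C  = 8.9875517873681764e9  # Coulomb constant, N m^2 C^-2
e    = 1.602176565e-19       # elementary charge, C
hbar = 1.054571726e-34       # J s
c    = 299_792_458.0         # m/s

alpha = k_C * e**2 / (hbar * c)
print(f"alpha = {alpha:.9f} (~1/{1/alpha:.3f})")  # ~1/137.036

# Proton-to-electron mass ratio vs 6*pi^5:
m_p, m_e = 1.672621777e-27, 9.10938291e-31
print(f"m_p/m_e = {m_p / m_e:.2f}")        # ~1836.15
print(f"6*pi^5  = {6 * math.pi**5:.2f}")   # ~1836.12
```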
Fritzsch-Xing plot

Fritzsch and Xing have developed a very beautiful plot of the fundamental constants in Nature (those coming from gravitation and the Standard Model). I cannot avoid including it here in the two versions I have seen. The first one is "serious", with 29 "fundamental constants":

However, I prefer the "fun version" of this plot. This second version is very cool and it includes 28 "fundamental constants":

The Okun Cube

Long ago, L.B. Okun provided a very interesting way to think about the Planck units and their meaning, at least from the current knowledge of physics! He imagined a cube in 3D with 3 different axes. Planck units are defined, as we have seen above, by the 3 constants c, \hbar, G_N plus the Boltzmann constant. Imagine we assign one axis to c-units, one axis to \hbar-units and one more to G_N-units. The result is a wonderful cube:

Or equivalently, it is sometimes drawn as an equivalent sketch (note that the Planck constant is NOT rationalized in the next cube, but it does not matter for this graphical representation):

Classical physics (CP) corresponds to the vanishing of the 3 constants, i.e., to the origin (0,0,0). Newtonian mechanics (NM), or more precisely newtonian gravity plus classical mechanics, corresponds to the "point" (0,0,G_N). Special relativity (SR) corresponds to the point (0,1/c,0), i.e., to "points" where relativistic effects are important due to velocities close to the speed of light. Quantum mechanics (QM) corresponds to the point (h,0,0), i.e., to "points" where the fundamental unit of action/angular momentum is important, as in the photoelectric effect or blackbody radiation. Quantum Field Theory (QFT) corresponds to the point (h,1/c,0), i.e., to "points" where both SR and QM are important, that is, to situations where you can create/annihilate pairs, the "particle" number is not conserved (but the particle minus antiparticle number IS), and subatomic particles manifest themselves simultaneously with quantum and relativistic features. Quantum Gravity (QG) would correspond to the point (h,0,G_N), where gravity itself is quantum. We have no theory of quantum gravity yet, but some speculative trials are effective versions of (super)string theory/M-theory, loop quantum gravity (LQG) and some others. Finally, the Theory Of Everything (TOE) would be the theory at the last free corner, arising at the vertex (h,1/c,G_N). Superstring theories/M-theory are the only serious candidates for a TOE so far. LQG does not generally introduce matter fields (though some recent trials are pushing in that direction), so it is not a TOE candidate right now.

Some final remarks and questions

1) Are fundamental "constants" really constant? Do they vary with energy or time?

2) How many fundamental constants are there? This question has generated lots of discussion. One of the most famous debates was this one: the trialogue (or dialogue, if you are precise with words) discussing the opinions of 3 eminent physicists about the number of fundamental constants. Michael Duff suggested zero, Gabriel Veneziano argued that there are only 2 fundamental constants, while L.B. Okun defended that there are 3 fundamental constants.

3) Should the cosmological constant be included as a new fundamental constant? The cosmological constant behaves as a constant according to current cosmological measurements and data fits, but is it truly constant? It seems to be… but we are not sure. Quintessence models (some of them related to inflationary Universes) suggest that it could vary very slowly on cosmological scales. However, the data strongly suggest that

    \[ P_\Lambda=-\rho_\Lambda c^2\]

It is simple, but the ultimate nature of such a "fluid" is not understood, because we don't know what kind of "stuff" (either particles or fields) can make the cosmological constant as tiny and as abundant (about 72% of the Universe is "dark energy"/cosmological constant) as it seems to be. We do know it cannot be made of "known particles". Dark energy behaves as a repulsive force, some kind of pressure/antigravitation on cosmological scales. We suspect it could be some kind of scalar field, but there are many other alternatives that "mimic" a cosmological constant. If we identify the cosmological constant with the vacuum energy, we obtain a mismatch of about 122 orders of magnitude between theory and observations. A really bad "prediction", one of the worst predictions in the history of physics!

Be natural and stay tuned!
Successful renormalization of a QCD-inspired Hamiltonian

Hans-Christian Pauli, Max-Planck-Institut für Kernphysik, D-69029 Heidelberg. 18 September 2003.

The long standing problem of non perturbative renormalization of a gauge field theoretical Hamiltonian is addressed and explicitly carried out within an (effective) light-cone Hamiltonian approach to QCD. The procedure is in line with the conventional ideas: the Hamiltonian is first regulated by suitable cut-off functions, and subsequently renormalized by suitable counter terms to make it cut-off independent. Emphasized is the considerable freedom in the cut-off function, which eventually can modify the Coulomb potential of two charges at sufficiently small distances. The approach provides new physical insight into the nature of gauge theory and the potential energy of QCD and QED near the origin. The formalism so obtained is applied to physical mesons with a different flavor of quark and anti-quark. The excitation spectrum of the -meson, with its excellent agreement between theory and experiment, is discussed as a pedagogical example.

PACS: 11.10.Ef, 12.38.Aw, 12.38.Lg, 12.39.-x

1 Introduction

When starting in 1984 with Discretized Light-Cone Quantization (DLCQ) PauBro85a and with a revival of Dirac's Hamiltonian front form dynamics dir49, all challenges of a gauge field Hamiltonian theory were essentially open questions, particularly the non perturbative bound state problem, the many-body aspects, regularization, renormalization, confinement, chirality, vacuum structure and condensates, just to name a few. The step from the gauge field QCD Lagrangian down to a non relativistic Schrödinger equation was completely mysterious. Now we know better BroPauPin98. We have learned how to partition the problem and how to shape our thinking in four major steps.

We have understood, for example, that the chiral phase transition, in which the quarks are supposed to get their mass, is not the major challenge. The challenge is to understand what happens after the phase transition, at zero temperature. The challenge is to understand the spectrum of physical hadrons and to get the corresponding eigenfunctions, the light-cone wave functions.

Figure 1: Regularization of the interaction by vertex regularization. In a matrix element, as illustrated on the left for a vertex, a quark changes its four-momentum from one value to another. The vertex interaction is regulated by multiplying with a form factor, as indicated by the circle. Instantaneous interactions are treated correspondingly, as illustrated on the right for a seagull.

The light-cone wave functions for a hadron with a given mass encode all possible quark and gluon momentum, helicity and flavor correlations and, in principle, are obtained by diagonalizing the QCD light-cone Hamiltonian in a complete basis of Fock states of increasing complexity. For example, the positive pion has a Fock expansion representing the expansion of the exact QCD eigenstate, at a given scale, in terms of non-interacting quarks and gluons.
The particles in a Fock state have longitudinal light-cone momentum fractions and relative transverse momenta. The form of the wave function is invariant under longitudinal and transverse boosts; i.e., the light-cone wave functions expressed in the relative coordinates are independent of the total momentum of the hadron. The first term in the expansion is referred to as the valence Fock state, as it relates to the hadronic description in the constituent quark model. The higher terms are related to the sea components of the hadronic structure. It has been shown that the rest of the light-cone wave function is determined once the valence Fock state is known Mue94; Pau99b, with explicit expressions given in Pau99b.

The key issue is to overcome a problem common to any gauge theory: the unregulated theory exhibits logarithmic singularities. The problem of regularization and renormalization has been solved in the perturbative context of scattering theory, but not in the non perturbative context of a Hamiltonian. It is addressed in the first two sections and applied in the remainder of this paper.

2 Regularization

Canonical field theory with the conventional QCD Lagrangian allows one to derive the components of the total canonical four-momentum. Its front form version BroPauPin98 rests on two assumptions, the light-cone gauge LepBro80 and the suppression of all zero modes BroPauPin98; Kal95. The front form vacuum is then trivial. I find it helpful to discuss the problem in terms of DLCQ PauBro85a; BroPauPin98. In the back of my mind I visualize an explicit finite-dimensional matrix representation of the light-cone Hamiltonian as it occurs for finite harmonic resolution. Such a matrix is schematically displayed in Fig. 2 of BroPauPin98. All of its matrix elements are finite for any finite resolution and cut-off. The problem arises for ever increasing harmonic resolution, on the way to the continuum limit: the numerical eigenvalues are unstable and diverge logarithmically KraPauWoe92; TriPau00, contrary to the calculations in 1+1 dimensions PauBro85a; see also actual DLCQ calculations in 3+1 dimensions by Hiller Hil00.

The reason is inherent to Dirac's relativistic vertex interaction, in which some particle '1' is scattered into two particles '2' and '3' with their respective four-momenta and helicities, see Fig. 1. The matrix element for bremsstrahlung, for example (see Table 9 in BroPauPin98), describes a quark that maintains its helicity while radiating a gluon. Singularities typically arise when squares of such matrix elements are integrated over all momenta, as in the integrations of perturbation theory. The singularities are avoided a priori by vertex regularization, i.e. by multiplying each (typically off-diagonal) matrix element with a regulating form factor.

It took several years to realize that it is the Feynman four-momentum transfer across a vertex which governs any effective interaction. The minimal requirement for such a form factor is stated in Eq.(4); the job would be done by a step function. The limit of vanishing cut-off suppresses the interaction altogether; the limit of infinite cut-off restores the interaction and its problems. Any finite value of the cut-off restricts the momentum transfer to be finite and eliminates the singularities. But the sharp cut-off generates problems in another corner of the theory, and the regulator must be an analytic function of the momentum transfer, as will be seen below. Vertex regularization thus takes care of the ultraviolet divergences. The (light-cone) infrared singularities are taken care of, as usual, by a kinematical gluon mass.
As usual, regularization is not unique and many procedures can do the job. Dimensional regularization, for example, is not applicable in a matrix approach, which is stuck with the precisely 3+1 dimensions of the physical world. Vertex regularization should be contrasted with the Fock space regularization of Lepage and Brodsky LepBro80, see also BroPauPin98, which has blocked the renormalization aspects for many years. It should also be contrasted with wil89 and WilWalHar94. After applauding the light-cone approach wil89, Wilson and collaborators WilWalHar94 attempted to base their considerations almost entirely on a renormalization group analysis, but no concrete technology has emerged thus far.

3 Renormalization

The non perturbative renormalization of the Hamiltonian was stuck for many years by the fact that the coupling constant and the regulator function multiply each other in Eq.(3). It was always clear that one may add non-local counter terms WilWalHar94, but it was not clear how they could be constructed. Progress has come from recent work on a particular model FrewerFrePau02, which allowed the formulation of a paradigmatic example in modern renormalization theory. Here is the general but abstract procedure.

Suppose we have solved Eq.(2) for fixed values of the 7 'bare' parameters in the Lagrangian, the coupling constant and the 6 flavor quark masses, and for a fixed value of the exterior cut-off scale. Suppose further that these 7+1 parameters are chosen such that the calculated eigenvalues agree with the corresponding experimental values. Next, suppose we change the cut-off by a small amount. Every calculated eigenvalue will then change accordingly. Renormalization theory is then the attempt to reformulate the Hamiltonian such that all these changes vanish identically, for all eigenstates. This is the fundamental renormalization group equation. Equivalently, one requires that the Hamiltonian be stationary with respect to small changes of the cut-off. Henceforth, reference to the 'renormalization point' will be suppressed.

The Hamiltonian can be made stationary by making the bare coupling and masses functions of the cut-off, i.e. by introducing physical coupling constants and masses, which themselves are functions of the bare quantities and which are functionals of the regulator. The variation of the Hamiltonian then reads as a sum of the familiar variational derivatives. However, since the physical coupling and masses are themselves functionals of the regulator, the fundamental equation of renormalization theory, Eq.(5), is then replaced by a condition on the counter terms, since the variational derivative of the Hamiltonian with respect to the regulator is unlikely to vanish.

It can be solved by counter term technology, as follows. A counter term is added to the Hamiltonian whose interaction has exactly the same structure, except that the regulator is replaced by a renormalized one, subject to the constraint that the counter term vanishes at the renormalization point. The fundamental equation (7) then defines a differential equation which, in its integral form, includes the initial condition. The renormalized regulator function is manifestly independent of the cut-off. By construction, its value is determined by experiment.

One should emphasize an important point: in deriving Eq.(12), use was made of the assumption that the regulator function has well defined derivatives with respect to the cut-off. The theta function of the sharp cut-off, however, is a distribution with only ill-defined derivatives.
This raises another important point: if the regulator is a function other than a theta function, one must specify how it approaches the limiting values of Eq.(4). The case of the 'soft' regulator is only a very special example. In a more general approach the soft regulator plays the role of a generating function. The partial derivatives are dimensionless and independent of a change in the cut-off. The arbitrarily many coefficients are renormalization group invariants and, as such, subject to determination by experiment.

4 The effective (light-cone) Hamiltonian

In a field theory, one is confronted with a many-body problem of the worst kind: not even the particle number is conserved. To formulate effective Hamiltonians more systematically, a novel many-body technique had to be developed, the method of iterated resolvents Pau99b; Pau98, whose details are not important here. Important is that the effective light-cone Hamiltonian has the same eigenvalue as the full light-cone Hamiltonian and that it generates the bound-state wave function of the valence quarks by a one-body integral equation. One has thus achieved step 2 of Eq.(1).

Here, the eigenvalue is the invariant-mass squared. The associated eigenfunction is the probability amplitude for finding the quark with a given momentum fraction, transversal momentum and helicity, and correspondingly the anti-quark. Expressions for the (effective) quark masses and the (effective) coupling function are given in Pau98. The kernel involves the Feynman momentum transfers of quark and anti-quark, respectively, and their Dirac spinors in the Lepage-Brodsky convention LepBro80, given explicitly in BroPauPin98. These arrange themselves into a Lorentz-scalar spinor matrix, which is a rather complicated (matrix) function of its six arguments, as tabulated in Pau00c. Finally, the form factors restrict the range of integration and regulate the interaction. Note that the equation is fully relativistic and covariant.

It should be emphasized that Eq.(15) is valid only for a quark and an anti-quark of different flavors Pau99b; Pau98. The additional annihilation term for identical flavors is omitted. At present, it is being investigated by Kra04. It should also be emphasized that the same structure was obtained with a completely different method, Wegner's Hamiltonian flow equations Wegner00. In Wegner00 it is also shown why the concept of a 'mean momentum transfer' is a meaningful simplification. It allows one to replace Eq.(15) by Eq.(16), in which the form factors have made their way into the regulator function. Krautgärtner et al. KraPauWoe92 and Trittmann et al. TriPau00 have shown how to solve such an equation numerically with high precision. But since the numerical effort is considerable, it is reasonable to work first with (over-)simplified models, as specified next.

The Singlet-Triplet model. Quarks are at relative rest at the equilibrium values of their momenta. An inspection of Eq.(33) in Pau00c reveals that for very small deviations from the equilibrium values the spinor matrix is proportional to the unit matrix, while for very large deviations a different limiting behavior holds. The Singlet-Triplet (ST) model combines these aspects. For anti-parallel helicities (singlets) the model interpolates between two extremes: for small momentum transfer, the '2' in Eq.(18) is unimportant and the Coulomb aspects of the first term prevail; for large momentum transfer, the Coulomb aspects are unimportant and the hyperfine interaction is dominant. The '2' carries the singlet-triplet mass difference; its value is understood in terms of the spin g-factor.
For parallel helicities (triplets) the model reduces to the Coulomb kernel. The model over-emphasizes many aspects, but its simplicity has proven useful for fast and analytical calculations. Most importantly, the model allows one to drop the helicity summations, which simplifies the problem enormously from the technical point of view. A more detailed investigation of the spinor matrix can be found in KrassPau02. The model cannot be justified in the sense of an approximation, but it emphasizes the point that the '2', or any other constant in the kernel of an integral equation, leads to numerically undefined equations and thus to singularities. Replacing the coupling function by the strong coupling constant completes the model assumptions. Henceforth, the overline bars for the effective quantities will be suppressed.

5 The potential energy

It is possible to subtract a c-number from the eigenvalue and to define an effective Hamiltonian implicitly, whose eigenvalues have the dimension of an energy. In this way one achieves step 3 of Eq.(1). Note that in the front form, on the light cone, mass and energy are related through the invariant-mass squared, not additively as usual. Only if the energy is negligible compared to the quark masses do the two relations coincide.

A rather drastic technical simplification is achieved by a transformation of the integration variable. One can substitute the longitudinal integration variable by a new one which, for all practical purposes, can be interpreted BroPauPin98 as the z-component of a 3-momentum vector. For equal masses, the transformation and its inverse take a simple form. Inserting these substitutions into Eq.(16) and defining a reduced wave function leads to an integral equation in the components of this 3-momentum, in which all reference to light-cone variables has disappeared. Using in addition the ST model of Eq.(19), Eq.(16) translates for singlets identically into Eq.(25). The equation for the triplets is obtained by dropping the '2'. In the ST model, the helicity arguments in the wave functions can be suppressed. Applying the relation between mass and energy, as given in Eq.(22), the equation is converted into one for the energy eigenvalue, with the appropriate reduced mass.

The first term in this equation coincides with the kinetic energy in a conventional non-relativistic Hamiltonian. This is remarkable in view of the fact that no approximation to this effect has been made. The fully relativistic and covariant light-cone approach has no relativistic corrections in the kinetic energy!

Since the first term in Eq.(5) is a kinetic energy, the second must be a potential energy, in momentum representation. In principle, it could be Fourier transformed to a configuration space. But due to a momentum-dependent factor in the kernel, the resulting potential energy would be non-local, see e.g. PauMer97. The non-locality of the potential is certainly mathematically exact. But I do not expect it to generate aspects of leading importance, and I avoid it by a simplification, both in Eqs.(25) and (5). With it, the mean four-momentum transfer reduces to the three-momentum transfer. In consequence, the kernel of Eq.(5) depends only on this momentum transfer. Its Fourier transform is a local function, which plays the role of a conventional potential energy in the Fourier transform of Eq.(5). Here is the Schrödinger equation from Eq.(1)! Despite its conventional structure it is a front form equation, designed to calculate the light-cone wave function. I conclude this section with a subtle point, which needs clarification in the future.
The simplification is different from a non-relativistic approximation. The approach certainly remains valid for relativistic momenta, particularly in Eqs.(5) and (30). The reason is that the simplified quantity occurs only under the integral. There, the large momenta are suppressed by the regulator anyway.

6 The renormalized Coulomb potential

Henceforth, I restrict consideration to the triplet case, i.e. to Coulomb kernels. The renormalized Coulomb potential is always finite at the origin, as opposed to the conventional 1/r singularity. It is instructive to verify this explicitly for two regulators, a soft and a sharp cut-off. The Fourier transform according to Eq.(29) gives, for the sharp cut-off, a result involving the Integral Sine Si. Asymptotically, both cut-offs produce the conventional Coulomb potential. Near the origin, however, the renormalized Coulomb potential is finite, but the constant is cut-off dependent. Even the r-dependence differs: the soft cut-off gives a linear and the sharp cut-off a quadratic dependence. The cut-off dependence near the origin is one of the most important aspects of the present work, and it has a deep physical reason, to be discussed below.
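A side remark for the reader, not contained in the original computation: the sharp cut-off case can be checked numerically. Assuming the triplet kernel -4\pi\alpha/q^2 regulated by \theta(\Lambda-|q|), its Fourier transform is V(r)=-(2\alpha/\pi)\,\mathrm{Si}(\Lambda r)/r, which reduces to the Coulomb potential -\alpha/r for \Lambda r\gg 1 and to the finite value -2\alpha\Lambda/\pi at the origin. The following Python sketch (with illustrative, not fitted, parameter values) verifies both limits:

```python
# Sharp cut-off Coulomb potential via the Integral Sine.
import numpy as np
from scipy.special import sici

ALPHA = 0.3   # illustrative coupling, not a fitted value
LAMBDA = 1.0  # cut-off scale in natural units (c = hbar = 1)

def V_sharp(r):
    """V(r) = -(2 alpha / pi) * Si(Lambda*r) / r ; finite at the origin."""
    si, _ = sici(LAMBDA * r)   # sici returns (Si(x), Ci(x))
    return -2.0 * ALPHA / np.pi * si / r

r = np.array([1e-3, 1.0, 50.0])
print(V_sharp(r))                    # near origin: ~ -2*alpha*Lambda/pi
print(-ALPHA / r)                    # far zone: plain Coulomb -alpha/r
print(-2 * ALPHA * LAMBDA / np.pi)   # the finite value V(0)
```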
Recalling the discussion in Sec. 3, and replacing the soft cut-off by a generating function in analogy with Eq.(14), straightforwardly gives the generalized Coulomb potential of Eq.(37). This result illustrates another important point: the Laguerre polynomials are a complete set of functions. The term added to the -1 in Eq.(37) is thus potentially able to reproduce an arbitrary function of r. The description in terms of a generating function, as in Eqs.(36) or (37), is therefore complete.

Figure 2: Schematic behavior of the renormalized Coulomb potential; see also the discussion in the text.

The physical picture which develops is illustrated in Fig. 2. In the far zone, for sufficiently large r, the potential energy coincides with the conventional Coulomb potential. Since the potential is attractive, it can host bound states, probably those realized in weak binding. In the near zone, for sufficiently small r, the potential behaves like a power series, which can potentially host the bound states of strong coupling, provided the actual parameter values allow for that. In the intermediate zone, the actual potential must interpolate between these two extremes, since Eq.(37) is an analytic function of r. Most likely this is done by developing a barrier of finite height, depending on the actual parameter values. The onset of the near and intermediate regimes must occur for relative distances of the quarks which are comparable to the Compton wavelength associated with their reduced mass. If the distance is smaller, one indeed expects deviations from the classical regime by elementary considerations of quantum mechanics.

Figure 3: The dimensionless Coulomb potential is plotted versus the radius parameter for different values of the coefficients, from bottom to top.

The large number of parameters in Eq.(37) can be controlled by the following construction: the coefficients in Eq.(37) are expressed in terms of only three parameters. As a consequence, the dimensionless Coulomb potential, which depends on r only through a dimensionless combination, is at most a quadratic function of r in the near zone. The remainder starts at most with a higher power. A suitable choice of parameters should therefore yield a linear set of functions in the near zone. As shown in Fig. 3, this happens to be true for surprisingly large values of r, i.e. not only in the immediate vicinity of the origin. One parameter essentially controls the height of the barrier. Similarly, another choice generates a set of functions which are strictly quadratic in the near zone. Again, the same parameter controls the height of the barrier, as seen below in Fig. 5.

7 Determining the parameters by experiment

The QCD-inspired model developed thus far has a considerable number of renormalization group invariant parameters, which must be determined once and for all by experiment. In doing this FrePauZho02b, we have been inspired by the work of Anisovich et al. AniAniSar02. Enumerating the excited states of a hadron by a counting index, these authors have found a linear relation between the invariant mass squared and the index for practically all hadrons. As an example, I present in Fig. 4 the spectrum of the - and the -meson.

Figure 4: The invariant mass-squares of all available states are plotted versus a counting index. The straight lines correspond to the linear relation, with the slope taken from Anisovich et al. AniAniSar02. The filled circles correspond to states which have been seen empirically PDG98; the empty ones correspond to the predictions of AniAniSar02. Plot courtesy of Shan-Gui Zhou.

The linear relation between mass squared and energy on the light cone, Eq.(22), then allows one to conclude that the potential energy in the near zone must be a pure oscillator, at least to first approximation. If one aims to reproduce the spectra of all flavor off-diagonal triplet mesons (pseudo-vector mesons), except the topped ones, one has to determine 6 parameters: the 2 constants of the oscillator model and the 4 effective flavor quark masses. To determine them, one needs 6 experimental numbers, which I take from PDG98, all in GeV. The notation should be self-explanatory; for example, one of them refers to the first excited state. The parameter values so obtained differ slightly from those in FrePauZho02b, due to a different choice of the empirical data set in Eq.(43), but yield about the same overall agreement with all available experimental states of pseudo-vector mesons. Reverting the argument, one concludes, as in FrePauZho02b, that the oscillator model in Eq.(42) quite naturally explains the systematics found by Anisovich et al. AniAniSar02. But one can do even better.

Figure 5: The continuous lines display the generalized Coulomb potential in physical units, for several parameter values from bottom to top. The circles indicate the experimental eigenvalues; they agree with the calculated eigenvalues shown by the horizontal lines. The dashed line displays the harmonic approximation; the horizontal lines on the left indicate the oscillator states. See the discussion in the text.

8 Relating the oscillator model to QCD

The oscillator model in Eq.(42) is only the harmonic approximation to the QCD-inspired, generalized Coulomb potential in Eq.(37). Their parameters are obviously related. One needs more experimental information to pin down the remaining values. Choosing the QCD scale as reference, one can use the expressions in Pau98 to calculate the coupling from its measured value at the Z-boson mass, as will be shown in greater detail in Pau03b. Having fixed these values allows one to calculate the remaining parameters. We are thus able to draw the generalized Coulomb potential for different parameter values, as done in Fig. 5.
The 'experimental' eigenvalues for the meson, obtained by means of Eq.(22), are also inserted in Fig. 5, including the empirical limits of error. The quoted experimental error is hypothetical, since the corresponding state is not confirmed. Taking it for granted, the lowest possible value of the parameter follows. This completes the determination of all parameters; they are universal within the model. I thank Harun Omer Ome04 for giving me the exact eigenvalues prior to publication.

9 Summary and Conclusions

This work is an important milestone on the long way from the canonical Lagrangian of quantum chromodynamics down to the composition of physical hadrons in terms of their constituent quarks and gluons, via the eigenfunctions of a Hamiltonian. As part of an ongoing effort, a denumerable number of simplifying assumptions had to be formulated to arrive at a manageable formalism Pau99b. Among them is the formulation of an effective interaction by the method of iterated resolvents Pau98, but the strongest assumption in the present work is probably the simplifying Singlet-Triplet model in Sec. 4. As long as the assumptions are not justified at least a posteriori, one must speak of an approach inspired by QCD. It is advantageous, however, to have a sufficiently simple formalism for penetrating the physical content of gauge theory by analytical relations.

The biggest progress of the present work can be found in Sects. 2 and 3: a consistent regularization and renormalization of a gauge theory. The ultraviolet divergences in gauge theory are caused less by the possibly large momenta of the constituent particles than by the large momentum transfers in the interaction. In a Hamiltonian approach such as the present one, there is little choice but to chop them off by a regulating form factor in the elementary vertex interaction. The form factor makes its way into a regulator function which suppresses the large momentum transfers in the Fourier transform of the Coulomb interaction, see Sec. 6. The arbitrariness in chopping off the large momentum transfers is reflected in the arbitrariness of the potential at small relative distances, and it is this arbitrariness which allows for a pocket in the potential that binds the quarks in a hadron. The problem is then how to fix this function, with its many parameters, by experiment. In practice this is less difficult than anticipated, see Sec. 7: it suffices to determine only three parameters, two continuous ones and one counting index.

The potential energy of the present work vanishes at infinite separation of the quarks. This seems to be in conflict with the potential energies of phenomenological models GodIsg85, which rise forever. It also seems to be in conflict with lattice gauge calculations Schilling2000; Schierholz00. Is a finite ionization limit also in conflict with 'confinement', i.e. with the empirical fact that free quarks have not been observed? The present model prohibits free quarks as a stable solution, since the sum of the constituent quark masses is always larger than the combined mass of the corresponding hadron and a pion: free constituent quarks would hadronize very quickly into bound states. This is different from atomic physics with its free constituents, where the binding energy is always much smaller than the mass of positronium proper. The most disturbing aspect of the present work is its obvious conflict with lattice gauge calculations Schilling2000; Schierholz00 and their successes.
Several points, however, should be made: I have not checked to what extent a linear term in the potential is consistent with the excellent agreement between theory and experiment presented in this work. – Even with present-day computers, lattice gauge calculations can be extrapolated down to the lightest systems only with a headache. – The calculation of the potential energy on the lattice rests on the assumption of static quarks, i.e. quarks with an infinitely large mass. Whether this object is the potential energy to be used in a non-relativistic Hamiltonian is an open question, as is whether its eigenvalue can simply be added to the constituent masses to get the invariant mass of physical hadrons. In principle, the relation is justified only for sufficiently small coupling constants. The present work opens a broad avenue of further applications, among them the baryons and physical nuclei. But much work must be done in the future before such a simple approach as the present one can be taken seriously. It is a first step only.
Quantum Mechanics 2

Admission requirements
Quantum Mechanics 1, Statistical Physics 1, Classical Mechanics B, AN3na, LA2na

The usefulness of quantum mechanics does not stop with the analytical solution of the Bohr model of the hydrogen atom. This course deepens the understanding of quantum mechanics by studying important quantum phenomena and applications of quantum mechanics in technologies like MRI and the laser. This comprises the study of indistinguishable quantum particles and their statistical distributions and the use of perturbation methods to understand the energy levels of atoms. The details of the observed emission spectrum of hydrogen are inconsistent with the Bohr model and simple analytical solutions of the Schrödinger equation. Most notably, the so-called spin-orbit coupling gives rise to small shifts and splittings of the Bohr levels. The coupling of an atom to an external oscillating field gives rise to stimulated emission and can be understood in the framework of time-dependent perturbation theory. The following topics are treated:

• Quantum statistical description of indistinguishable particles
• Fermi-Dirac, Bose-Einstein, and Planck distributions
• The free electron gas, Bose-Einstein condensation, and the law of Stefan and Boltzmann
• The structure of atoms and the Periodic Table
• Time-independent perturbation theory and its application to the fine structure and hyperfine structure of the hydrogen atom
• Influence of an external magnetic field (Zeeman effect) and electric field (Stark effect) on spectral lines
• Time-dependent perturbation theory and its application to two-level systems
• Nuclear magnetic resonance and its use in Magnetic Resonance Imaging (MRI)
• Einstein theory of radiation processes: absorption, stimulated and spontaneous emission, and its use in the laser
• Selection rules for radiative transitions

An introduction to more advanced and/or modern topics in quantum mechanics is given: entanglement, quantum information, the Dirac equation.

Course objectives
After the course you will be able to discuss and explain the following concepts and topics and to apply them in calculations:

• Quantum statistical description of indistinguishable particles
• Fermi-Dirac, Bose-Einstein, and Planck distributions
• Properties of the free electron gas, the free Bose gas, and the role of the density of states
• How quantum mechanics averts the ultraviolet catastrophe
• Apply time-independent perturbation theory to calculate the fine structure and hyperfine structure of the spectrum of hydrogen atoms
• How external magnetic (Zeeman effect) and electric fields (Stark effect) affect the spectra of atoms
• Apply time-dependent perturbation theory to two-level systems and explain the essence of magnetic resonance imaging
• Explain the radiative processes absorption, stimulated emission, and spontaneous emission (Einstein theory) and perform calculations of the corresponding transition rates
You will be able to explain or describe in your own words the following concepts or topics:

• How the laser (and maser) work
• Entanglement and quantum information
• The Dirac equation for relativistic electrons

General skills (soft skills)

• You are able to phrase your reasoning clearly
• You plan your time in such a way that your study load is well divided over the various study activities needed in this course: studying the book, preparing and solving exercises

Mode of instruction
Lectures, tutorials (exercise classes), and homework assignments

Course Load
Total course load is 5 EC = 140 hours, of which 42 hours are spent attending lectures and tutorials (11x2 hours of lectures + 10x2 hours of tutorials). Approximately 40 hours are needed to study the course material. The remaining 58 hours are spent on completing the assignments and preparing for the exam.

Assessment method
Written exam (closed book) with open questions. The final grade is calculated from the grade of the exam plus a bonus of at most 1 point, to be earned by handing in homework assignments. For the retake exam the bonus does not apply.

Course material is on Blackboard. To access Blackboard you need your ULCN account.

Reading list
David J. Griffiths, Introduction to Quantum Mechanics, 2nd edition, ISBN 0-13-191175-9. This is the same book as used in the Quantum Mechanics 1 course. Errata and a warning about incomplete international editions of the textbook can be found on the personal homepage of David Griffiths.

Contact details
Lecturer: Dr. Peter Denteneer
Physics - The Basic Tools of Quantum Mechanics in Chemistry

Words to the reader about how to use this textbook

I. What This Book Does and Does Not Contain

This text is intended for use by beginning graduate students and advanced upper-division undergraduate students in all areas of chemistry. It provides:
(i) an introduction to the fundamentals of quantum mechanics as they apply to chemistry,
(ii) material that provides brief introductions to the subjects of molecular spectroscopy and chemical dynamics,
(iii) an introduction to computational chemistry applied to the treatment of electronic structures of atoms, molecules, radicals, and ions,
(iv) a large number of exercises, problems, and detailed solutions.

It does not provide much historical perspective on the development of quantum mechanics. Subjects such as the photoelectric effect, black-body radiation, the dual nature of electrons and photons, and the Davisson and Germer experiments are not even discussed. To provide a text that students can use to gain introductory-level knowledge of quantum mechanics as applied to chemistry problems, such a non-historical approach had to be followed. This text immediately exposes the reader to the machinery of quantum mechanics.

Sections 1 and 2 (i.e., Chapters 1-7), together with Appendices A, B, C and E, could constitute a one-semester course for most first-year Ph.D. programs in the U.S.A. Section 3 (Chapters 8-12) and selected material from other appendices or selections from Section 6 would be appropriate for a second-quarter or second-semester course. Chapters 13-15 of Sections 4 and 5 would be of use for providing a link to a one-quarter or one-semester class covering molecular spectroscopy. Chapter 16 of Section 5 provides a brief introduction to chemical dynamics that could be used at the beginning of a class on this subject.

There are many quantum chemistry and quantum mechanics textbooks that cover material similar to that contained in Sections 1 and 2; in fact, our treatment of this material is generally briefer and less detailed than one finds in, for example, Quantum Chemistry, H. Eyring, J. Walter, and G. E. Kimball, J. Wiley and Sons, New York, N.Y. (1947); Quantum Chemistry, D. A. McQuarrie, University Science Books, Mill Valley, Ca. (1983); Molecular Quantum Mechanics, P. W. Atkins, Oxford Univ. Press, Oxford, England (1983); or Quantum Chemistry, I. N. Levine, Prentice Hall, Englewood Cliffs, N.J. (1991). Depending on the backgrounds of the students, our coverage may have to be supplemented in these first two Sections.

By covering this introductory material in less detail, we are able, within the confines of a text that can be used for a one-year or a two-quarter course, to introduce the student to the more modern subjects treated in Sections 3, 5, and 6. Our coverage of modern quantum chemistry methodology is not as detailed as that found in Modern Quantum Chemistry, A. Szabo and N. S. Ostlund, McGraw-Hill, New York (1989), which contains little or none of the introductory material of our Sections 1 and 2.
By combining both introductory and modern, up-to-date quantum chemistry material in a single book designed to serve as a text for one-quarter, one-semester, two-quarter, or one-year classes for first-year graduate students, we offer a unique product.

It is anticipated that a course dealing with atomic and molecular spectroscopy will follow the student's mastery of the material covered in Sections 1-4. For this reason, beyond these introductory sections, this text's emphasis is placed on electronic-structure applications rather than on vibrational and rotational energy levels, which are traditionally covered in considerable detail in spectroscopy courses.

In brief summary, this book includes the following material:

1. The Section entitled The Basic Tools of Quantum Mechanics treats the fundamental postulates of quantum mechanics and several applications to exactly soluble model problems. These problems include the conventional particle-in-a-box (in one and more dimensions), rigid rotor, harmonic oscillator, and one-electron hydrogenic atomic orbitals. The concept of the Born-Oppenheimer separation of electronic and vibration-rotation motions is introduced here. Moreover, the vibrational and rotational energies, states, and wavefunctions of diatomic, linear polyatomic and non-linear polyatomic molecules are discussed here at an introductory level. This section also introduces the variational method and perturbation theory as tools that are used to deal with problems that can not be solved exactly.

2. The Section Simple Molecular Orbital Theory deals with atomic and molecular orbitals in a qualitative manner, including their symmetries, shapes, sizes, and energies. It introduces bonding, non-bonding, and antibonding orbitals; delocalized, hybrid, and Rydberg orbitals; and Hückel-level models for the calculation of molecular orbitals as linear combinations of atomic orbitals (a more extensive treatment of several semi-empirical methods is provided in Appendix F). This section also develops the Orbital Correlation Diagram concept that plays a central role in using Woodward-Hoffmann rules to predict whether chemical reactions encounter symmetry-imposed barriers.

3. The Electronic Configurations, Term Symbols, and States Section treats the spatial, angular momentum, and spin symmetries of the many-electron wavefunctions that are formed as antisymmetrized products of atomic or molecular orbitals. Proper coupling of angular momenta (orbital and spin) is covered here, and atomic and molecular term symbols are treated. The need to include Configuration Interaction to achieve qualitatively correct descriptions of certain species' electronic structures is treated here. The role of the resultant Configuration Correlation Diagrams in the Woodward-Hoffmann theory of chemical reactivity is also developed.

4. The Section on Molecular Rotation and Vibration provides an introduction to how vibrational and rotational energy levels and wavefunctions are expressed for diatomic, linear polyatomic, and non-linear polyatomic molecules whose electronic energies are described by a single potential energy surface. Rotations of "rigid" molecules and harmonic vibrations of uncoupled normal modes constitute the starting point of such treatments.
5. The Time-Dependent Processes Section uses time-dependent perturbation theory, combined with the classical electric and magnetic fields that arise due to the interaction of photons with the nuclei and electrons of a molecule, to derive expressions for the rates of transitions among atomic or molecular electronic, vibrational, and rotational states induced by photon absorption or emission. Sources of line broadening and time-correlation-function treatments of absorption lineshapes are briefly introduced. Finally, transitions induced by collisions rather than by electromagnetic fields are briefly treated to provide an introduction to the subject of theoretical chemical dynamics.

6. The Section on More Quantitative Aspects of Electronic Structure Calculations introduces many of the computational chemistry methods that are used to quantitatively evaluate molecular orbital and configuration mixing amplitudes. The Hartree-Fock self-consistent field (SCF), configuration interaction (CI), multiconfigurational SCF (MCSCF), many-body and Møller-Plesset perturbation theories, coupled-cluster (CC), and density functional or Xα-like methods are included. The strengths and weaknesses of each of these techniques are discussed in some detail. Having mastered this section, the reader should be familiar with how potential energy hypersurfaces, molecular properties, forces on the individual atomic centers, and responses to externally applied fields or perturbations are evaluated on high-speed computers.

II. How to Use This Book: Other Sources of Information and Building Necessary Background

In most classroom settings, the group of students learning quantum mechanics as it applies to chemistry have quite diverse backgrounds. In particular, the level of preparation in mathematics is likely to vary considerably from student to student, as will the exposure to symmetry and group theory. This text is organized in a manner that allows students to skip material that is already familiar while providing access to most if not all necessary background material. This is accomplished by dividing the material into sections, chapters and Appendices which fill in the background, provide methodological tools, and provide additional details.

The Appendices covering Point Group Symmetry and Mathematics Review are especially important to master. Neither of these two Appendices provides a first-principles treatment of its subject matter. The students are assumed to have fulfilled normal American Chemical Society mathematics requirements for a degree in chemistry, so only a review of the material especially relevant to quantum chemistry is given in the Mathematics Review Appendix. Likewise, the student is assumed to have learned or to be simultaneously learning about symmetry and group theory as applied to chemistry, so this subject is treated in a review and practical-application manner here. If group theory is to be included as an integral part of the class, then this text should be supplemented (e.g., by using the text Chemical Applications of Group Theory, F. A. Cotton, Interscience, New York, N.Y. (1963)).

The progression of sections leads the reader from the principles of quantum mechanics and several model problems which illustrate these principles and relate to chemical phenomena, through atomic and molecular orbitals, N-electron configurations, states, and term symbols, vibrational and rotational energy levels, photon-induced transitions among various levels, and eventually to computational techniques for treating chemical bonding and reactivity.
At the end of each Section, a set of Review Exercises and fully worked-out answers are given. Attempting to work these exercises should allow the student to determine whether he or she needs to pursue additional background building via the Appendices.

In addition to the Review Exercises, sets of Exercises and Problems, and their solutions, are given at the end of each section. The exercises are brief and highly focused on learning a particular skill; they allow the student to practice the mathematical steps and other material introduced in the section. The problems are more extensive and require that numerous steps be executed. They illustrate application of the material contained in the chapter to chemical phenomena and they help teach the relevance of this material to experimental chemistry. In many cases, new material is introduced in the problems, so all readers are encouraged to become actively involved in solving all problems.

To further assist the learning process, readers may find it useful to consult other textbooks or literature references. Several particular texts are recommended for additional reading, further details, or simply an alternative point of view. They include the following (in each case, the abbreviated name used in this text is given following the proper reference):

1. Quantum Chemistry, H. Eyring, J. Walter, and G. E. Kimball, J. Wiley and Sons, New York, N.Y. (1947) - EWK.
2. Quantum Chemistry, D. A. McQuarrie, University Science Books, Mill Valley, Ca. (1983) - McQuarrie.
3. Molecular Quantum Mechanics, P. W. Atkins, Oxford Univ. Press, Oxford, England (1983) - Atkins.
4. The Fundamental Principles of Quantum Mechanics, E. C. Kemble, McGraw-Hill, New York, N.Y. (1937) - Kemble.
5. The Theory of Atomic Spectra, E. U. Condon and G. H. Shortley, Cambridge Univ. Press, Cambridge, England (1963) - Condon and Shortley.
6. The Principles of Quantum Mechanics, P. A. M. Dirac, Oxford Univ. Press, Oxford, England (1947) - Dirac.
7. Molecular Vibrations, E. B. Wilson, J. C. Decius, and P. C. Cross, Dover Pub., New York, N.Y. (1955) - WDC.
8. Chemical Applications of Group Theory, F. A. Cotton, Interscience, New York, N.Y. (1963) - Cotton.
9. Angular Momentum, R. N. Zare, John Wiley and Sons, New York, N.Y. (1988) - Zare.
10. Introduction to Quantum Mechanics, L. Pauling and E. B. Wilson, Dover Publications, Inc., New York, N.Y. (1963) - Pauling and Wilson.
11. Modern Quantum Chemistry, A. Szabo and N. S. Ostlund, McGraw-Hill, New York (1989) - Szabo and Ostlund.
12. Quantum Chemistry, I. N. Levine, Prentice Hall, Englewood Cliffs, N.J. (1991) - Levine.
13. Energetic Principles of Chemical Reactions, J. Simons, Jones and Bartlett, Portola Valley, Calif. (1983).

Section 1: The Basic Tools of Quantum Mechanics

Chapter 1
Quantum Mechanics Describes Matter in Terms of Wavefunctions and Energy Levels. Physical Measurements are Described in Terms of Operators Acting on Wavefunctions.

I. Operators, Wavefunctions, and the Schrödinger Equation

The trends in chemical and physical properties of the elements described beautifully in the periodic table, and the ability of early spectroscopists to fit atomic line spectra by simple mathematical formulas and to interpret atomic electronic states in terms of empirical quantum numbers, provide compelling evidence that some relatively simple framework must exist for understanding the electronic structures of all atoms.
The great predictive power of the concept of atomic valence further suggests that molecular electronic structure should be understandable in terms of those of the constituent atoms. Much of quantum chemistry attempts to make more quantitative these aspects of chemists' view of the periodic table and of atomic valence and structure. By starting from 'first principles' and treating atomic and molecular states as solutions of a so-called Schrödinger equation, quantum chemistry seeks to determine what underlies the empirical quantum numbers, orbitals, the aufbau principle, and the concept of valence used by spectroscopists and chemists, in some cases even prior to the advent of quantum mechanics.

Quantum mechanics is cast in a language that is not familiar to most students of chemistry who are examining the subject for the first time. Its mathematical content and how it relates to experimental measurements both require a great deal of effort to master. With these thoughts in mind, the authors have organized this introductory section in a manner that first provides the student with a brief introduction to the two primary constructs of quantum mechanics, operators and wavefunctions that obey a Schrödinger equation, then demonstrates the application of these constructs to several chemically relevant model problems, and finally returns to examine in more detail the conceptual structure of quantum mechanics. By learning the solutions of the Schrödinger equation for a few model systems, the student can better appreciate the treatment of the fundamental postulates of quantum mechanics as well as their relation to experimental measurement, because the wavefunctions of the known model problems can be used to illustrate them.

A. Operators

Each physically measurable quantity has a corresponding operator. The eigenvalues of the operator tell the values of the corresponding physical property that can be observed.

In quantum mechanics, any experimentally measurable physical quantity F (e.g., energy, dipole moment, orbital angular momentum, spin angular momentum, linear momentum, kinetic energy) whose classical mechanical expression can be written in terms of the cartesian positions $\{q_i\}$ and momenta $\{p_i\}$ of the particles that comprise the system of interest is assigned a corresponding quantum mechanical operator $\hat{F}$. Given $F$ in terms of the $\{q_i\}$ and $\{p_i\}$, $\hat{F}$ is formed by replacing $p_j$ by $-i\hbar\,\partial/\partial q_j$ and leaving $q_j$ untouched. For example, if

$F = \sum_{l=1}^{N}\left( \frac{p_l^2}{2m_l} + \frac{1}{2}k(q_l - q_l^0)^2 + L(q_l - q_l^0) \right),$

then

$\hat{F} = \sum_{l=1}^{N}\left( -\frac{\hbar^2}{2m_l}\frac{\partial^2}{\partial q_l^2} + \frac{1}{2}k(q_l - q_l^0)^2 + L(q_l - q_l^0) \right)$

is the corresponding quantum mechanical operator.
Such an operator would occur when, for example, one describes the sum of the kinetic energies of a collection of particles (the $\sum_{l=1}^{N} p_l^2/2m_l$ term), plus the sum of "Hooke's law" parabolic potentials (the $\frac{1}{2}\sum_{l=1}^{N} k(q_l - q_l^0)^2$ term), and (the last term in $F$) the interactions of the particles with an externally applied field whose potential energy varies linearly as the particles move away from their equilibrium positions $\{q_l^0\}$.

The sum of the z-components of angular momenta of a collection of N particles has

$F = \sum_{j=1}^{N} (x_j p_{yj} - y_j p_{xj}),$

and the corresponding operator is

$\hat{F} = -i\hbar \sum_{j=1}^{N} \left( x_j\,\frac{\partial}{\partial y_j} - y_j\,\frac{\partial}{\partial x_j} \right).$

The x-component of the dipole moment for a collection of N particles has

$F = \sum_{j=1}^{N} Z_j e\, x_j$ and $\hat{F} = \sum_{j=1}^{N} Z_j e\, x_j,$

where $Z_j e$ is the charge on the jth particle.

The mapping from $F$ to $\hat{F}$ is straightforward only in terms of cartesian coordinates. To map a classical function $F$, given in terms of curvilinear coordinates (even if they are orthogonal), into its quantum operator is not at all straightforward. Interested readers are referred to Kemble's text on quantum mechanics, which deals with this matter in detail. The mapping can always be done in terms of cartesian coordinates, after which a transformation of the resulting coordinates and differential operators to a curvilinear system can be performed. The corresponding transformation of the kinetic energy operator to spherical coordinates is treated in detail in Appendix A. The text by EWK also covers this topic in considerable detail.

The relationship of these quantum mechanical operators to experimental measurement will be made clear later in this chapter. For now, suffice it to say that these operators define equations whose solutions determine the values of the corresponding physical property that can be observed when a measurement is carried out; only the values so determined can be observed. This should suggest the origins of quantum mechanics' prediction that some measurements will produce discrete or quantized values of certain variables (e.g., energy, angular momentum, etc.).

B. Wavefunctions

The eigenfunctions of a quantum mechanical operator depend on the coordinates upon which the operator acts; these functions are called wavefunctions.

In addition to operators corresponding to each physically measurable quantity, quantum mechanics describes the state of the system in terms of a wavefunction $\Psi$ that is a function of the coordinates $\{q_j\}$ and of time $t$. The function $|\Psi(q_j,t)|^2 = \Psi^*\Psi$ gives the probability density for observing the coordinates at the values $q_j$ at time $t$. For a many-particle system such as the H2O molecule, the wavefunction depends on many coordinates. For the H2O example, it depends on the x, y, and z (or r, θ, and φ) coordinates of the ten electrons and the x, y, and z (or r, θ, and φ) coordinates of the oxygen nucleus and of the two protons; a total of thirty-nine coordinates appear in $\Psi$.

In classical mechanics, the coordinates $q_j$ and their corresponding momenta $p_j$ are functions of time. The state of the system is then described by specifying $q_j(t)$ and $p_j(t)$. In quantum mechanics, the concept that $q_j$ is known as a function of time is replaced by the concept of the probability density for finding $q_j$ at a particular value at a particular time $t$: $|\Psi(q_j,t)|^2$. Knowledge of the corresponding momenta as functions of time is also relinquished in quantum mechanics; again, only knowledge of the probability density for finding $p_j$ with any particular value at a particular time $t$ remains.
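Before moving on to the Schrödinger equation, the substitution rule of subsection A can be checked mechanically. A minimal sympy sketch (editor's illustration, not part of the text), verifying the canonical commutator $[q, \hat{p}] = i\hbar$ on a test function:

# Editor's sketch: the rule p -> -i*hbar*d/dq, checked via [q, p] psi = i*hbar*psi.
import sympy as sp

q = sp.symbols('q', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(q)

def p(f):
    # momentum operator acting on a function of q
    return -sp.I * hbar * sp.diff(f, q)

commutator = q * p(psi) - p(q * psi)   # [q, p] applied to psi
print(sp.simplify(commutator))          # prints I*hbar*psi(q)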
C. The Schrödinger Equation

This equation is an eigenvalue equation for the energy or Hamiltonian operator; its eigenvalues provide the energy levels of the system.

1. The Time-Dependent Equation

If the Hamiltonian operator contains the time variable explicitly, one must solve the time-dependent Schrödinger equation.

How to extract from $\Psi(q_j,t)$ knowledge about momenta is treated below in Sec. III.A, where the structure of quantum mechanics, the use of operators and wavefunctions to make predictions and interpretations about experimental measurements, and the origin of 'uncertainty relations' such as the well-known Heisenberg uncertainty condition dealing with measurements of coordinates and momenta are also treated.

Before moving deeper into understanding what quantum mechanics 'means', it is useful to learn how the wavefunctions $\Psi$ are found by applying the basic equation of quantum mechanics, the Schrödinger equation, to a few exactly soluble model problems. Knowing the solutions to these 'easy' yet chemically very relevant models will then facilitate learning more of the details about the structure of quantum mechanics, because these model cases can be used as 'concrete examples'.

The Schrödinger equation is a differential equation depending on time and on all of the spatial coordinates necessary to describe the system at hand (thirty-nine for the H2O example cited above). It is usually written

$\hat{H}\,\Psi = i\hbar\,\frac{\partial\Psi}{\partial t},$

where $\Psi(q_j,t)$ is the unknown wavefunction and $\hat{H}$ is the operator corresponding to the total-energy physical property of the system. This operator is called the Hamiltonian and is formed, as stated above, by first writing down the classical mechanical expression for the total energy (kinetic plus potential) in cartesian coordinates and momenta and then replacing all classical momenta $p_j$ by their quantum mechanical operators $\hat{p}_j = -i\hbar\,\partial/\partial q_j$.

For the H2O example used above, the classical mechanical energy of all thirteen particles is

$E = \sum_i \Big\{ \frac{p_i^2}{2m_e} + \frac{1}{2}\sum_j \frac{e^2}{r_{i,j}} - \sum_a \frac{Z_a e^2}{r_{i,a}} \Big\} + \sum_a \Big\{ \frac{p_a^2}{2m_a} + \frac{1}{2}\sum_b \frac{Z_a Z_b e^2}{r_{a,b}} \Big\},$

where the indices i and j are used to label the ten electrons, whose thirty cartesian coordinates are $\{q_i\}$, and a and b label the three nuclei, whose charges are denoted $\{Z_a\}$ and whose nine cartesian coordinates are $\{q_a\}$. The electron and nuclear masses are denoted $m_e$ and $\{m_a\}$, respectively.

The corresponding Hamiltonian operator is

$\hat{H} = \sum_i \Big\{ -\frac{\hbar^2}{2m_e}\frac{\partial^2}{\partial q_i^2} + \frac{1}{2}\sum_j \frac{e^2}{r_{i,j}} - \sum_a \frac{Z_a e^2}{r_{i,a}} \Big\} + \sum_a \Big\{ -\frac{\hbar^2}{2m_a}\frac{\partial^2}{\partial q_a^2} + \frac{1}{2}\sum_b \frac{Z_a Z_b e^2}{r_{a,b}} \Big\}.$

Notice that $\hat{H}$ is a second-order differential operator in the space of the thirty-nine cartesian coordinates that describe the positions of the ten electrons and three nuclei. It is a second-order operator because the momenta appear in the kinetic energy as $p_j^2$ and $p_a^2$, and the quantum mechanical operator for each momentum, $\hat{p} = -i\hbar\,\partial/\partial q$, is of first order.

The Schrödinger equation for the H2O example at hand then reads

$\sum_i \Big\{ -\frac{\hbar^2}{2m_e}\frac{\partial^2}{\partial q_i^2} + \frac{1}{2}\sum_j \frac{e^2}{r_{i,j}} - \sum_a \frac{Z_a e^2}{r_{i,a}} \Big\} \Psi + \sum_a \Big\{ -\frac{\hbar^2}{2m_a}\frac{\partial^2}{\partial q_a^2} + \frac{1}{2}\sum_b \frac{Z_a Z_b e^2}{r_{a,b}} \Big\} \Psi = i\hbar\,\frac{\partial\Psi}{\partial t}.$
2. The Time-Independent Equation

If the Hamiltonian operator does not contain the time variable explicitly, one can solve the time-independent Schrödinger equation.

In cases where the classical energy, and hence the quantum Hamiltonian, do not contain terms that are explicitly time dependent (e.g., interactions with time-varying external electric or magnetic fields would add time-dependent terms, discussed later in this text, to the above classical energy expression), the separation-of-variables technique can be used to reduce the Schrödinger equation to a time-independent equation.

In such cases, $\hat{H}$ is not explicitly time dependent, so one can assume that $\Psi(q_j,t)$ is of the form

$\Psi(q_j,t) = \Psi(q_j)\,F(t).$

Substituting this 'ansatz' into the time-dependent Schrödinger equation gives

$\Psi(q_j)\, i\hbar\,\frac{\partial F}{\partial t} = F(t)\,\hat{H}\,\Psi(q_j).$

Dividing by $\Psi(q_j)F(t)$ then gives

$F^{-1}\Big(i\hbar\,\frac{\partial F}{\partial t}\Big) = \Psi^{-1}\big(\hat{H}\,\Psi(q_j)\big).$

Since $F(t)$ is only a function of time $t$, and $\Psi(q_j)$ is only a function of the spatial coordinates $\{q_j\}$, and because the left-hand and right-hand sides must be equal for all values of $t$ and of $\{q_j\}$, both sides must equal a constant. If this constant is called $E$, the two equations that are embodied in this separated Schrödinger equation read as follows:

$\hat{H}\,\Psi(q_j) = E\,\Psi(q_j),$
$i\hbar\,\frac{\partial F(t)}{\partial t} = i\hbar\,\frac{dF(t)}{dt} = E\,F(t).$

The first of these equations is called the time-independent Schrödinger equation; it is a so-called eigenvalue equation in which one is asked to find functions that yield a constant multiple of themselves when acted on by the Hamiltonian operator. Such functions are called eigenfunctions of $\hat{H}$, and the corresponding constants are called eigenvalues of $\hat{H}$. For example, if $\hat{H}$ were of the form $\hat{H} = -\frac{\hbar^2}{2M}\frac{\partial^2}{\partial\varphi^2}$, then functions of the form $\exp(im\varphi)$ would be eigenfunctions because

$\Big\{-\frac{\hbar^2}{2M}\frac{\partial^2}{\partial\varphi^2}\Big\} \exp(im\varphi) = \Big\{\frac{m^2\hbar^2}{2M}\Big\} \exp(im\varphi).$

In this case, $m^2\hbar^2/2M$ is the eigenvalue.

When the Schrödinger equation can be separated to generate a time-independent equation describing the spatial coordinate dependence of the wavefunction, the eigenvalue $E$ must be returned to the equation determining $F(t)$ to find the time-dependent part of the wavefunction. By solving

$i\hbar\,\frac{dF(t)}{dt} = E\,F(t)$

once $E$ is known, one obtains

$F(t) = \exp(-iEt/\hbar),$

and the full wavefunction can be written as

$\Psi(q_j,t) = \Psi(q_j)\,\exp(-iEt/\hbar).$

For the above example, the time dependence is expressed by

$F(t) = \exp\!\big(-it\,\{m^2\hbar^2/2M\}/\hbar\big).$

Having been introduced to the concepts of operators, wavefunctions, the Hamiltonian and its Schrödinger equation, it is important to now consider several examples of the applications of these concepts. The examples treated below were chosen to provide the learner with valuable experience in solving the Schrödinger equation; they were also chosen because the models they embody form the most elementary chemical models of electronic motions in conjugated molecules and in atoms, rotations of linear molecules, and vibrations of chemical bonds.

II. Examples of Solving the Schrödinger Equation

A. Free-Particle Motion in Two Dimensions

The number of dimensions depends on the number of particles and the number of spatial (and other) dimensions needed to characterize the position and motion of each particle.
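As a quick sanity check of the $\exp(im\varphi)$ eigenfunction example above, a minimal sympy sketch (editor's illustration, not part of the text):

# Editor's sketch: exp(i*m*phi) is an eigenfunction of -(hbar^2/2M) d^2/dphi^2
# with eigenvalue m^2*hbar^2/(2M).
import sympy as sp

phi = sp.symbols('phi', real=True)
m = sp.symbols('m', integer=True)
hbar, M = sp.symbols('hbar M', positive=True)

psi = sp.exp(sp.I * m * phi)
H_psi = -(hbar**2 / (2 * M)) * sp.diff(psi, phi, 2)
print(sp.simplify(H_psi / psi))   # prints hbar**2*m**2/(2*M)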
1. The Schrödinger Equation

Consider an electron of mass m and charge e moving on a two-dimensional surface that defines the x,y plane (perhaps the electron is constrained to the surface of a solid by a potential that binds it tightly to a narrow region in the z-direction), and assume that the electron experiences a constant potential $V_0$ at all points in this plane (on any real atomic or molecular surface, the electron would experience a potential that varies with position in a manner that reflects the periodic structure of the surface). The pertinent time-independent Schrödinger equation is

$-\frac{\hbar^2}{2m}\Big(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\Big)\psi(x,y) + V_0\,\psi(x,y) = E\,\psi(x,y).$

Because there are no terms in this equation that couple motion in the x and y directions (e.g., no terms of the form $x^a y^b$ or $\partial/\partial x\,\partial/\partial y$ or $x\,\partial/\partial y$), separation of variables can be used to write $\psi$ as a product $\psi(x,y) = A(x)B(y)$. Substitution of this form into the Schrödinger equation, followed by collecting together all x-dependent and all y-dependent terms, gives

$-\frac{\hbar^2}{2m}A^{-1}\frac{\partial^2 A}{\partial x^2} - \frac{\hbar^2}{2m}B^{-1}\frac{\partial^2 B}{\partial y^2} = E - V_0.$

Since the first term contains no y-dependence and the second contains no x-dependence, both must actually be constant (these two constants are denoted $E_x$ and $E_y$, respectively), which allows two separate Schrödinger equations to be written:

$-\frac{\hbar^2}{2m}A^{-1}\frac{\partial^2 A}{\partial x^2} = E_x$ and $-\frac{\hbar^2}{2m}B^{-1}\frac{\partial^2 B}{\partial y^2} = E_y.$

The total energy $E$ can then be expressed in terms of these separate energies as $E_x + E_y = E - V_0$. Solutions to the x- and y-Schrödinger equations are easily seen to be:

$A(x) = \exp\!\big(ix(2mE_x/\hbar^2)^{1/2}\big)$ and $\exp\!\big(-ix(2mE_x/\hbar^2)^{1/2}\big),$
$B(y) = \exp\!\big(iy(2mE_y/\hbar^2)^{1/2}\big)$ and $\exp\!\big(-iy(2mE_y/\hbar^2)^{1/2}\big).$

Two independent solutions are obtained for each equation because the x- and y-space Schrödinger equations are both second-order differential equations.

2. Boundary Conditions

The boundary conditions, not the Schrödinger equation, determine whether the eigenvalues will be discrete or continuous.

If the electron is entirely unconstrained within the x,y plane, the energies $E_x$ and $E_y$ can assume any value; this means that the experimenter can 'inject' the electron onto the x,y plane with any total energy $E$ and any components $E_x$ and $E_y$ along the two axes, as long as $E_x + E_y = E$. In such a situation, one speaks of the energies along both coordinates as being 'in the continuum' or 'not quantized'.

In contrast, if the electron is constrained to remain within a fixed area in the x,y plane (e.g., a rectangular or circular region), then the situation is qualitatively different. Constraining the electron to any such specified area gives rise to so-called boundary conditions that impose additional requirements on the above A and B functions. These constraints can arise, for example, if the potential $V_0(x,y)$ becomes very large for x,y values outside the region, in which case the probability of finding the electron outside the region is very small. Such a case might represent, for example, a situation in which the molecular structure of the solid surface changes outside the enclosed region in a way that is highly repulsive to the electron.

For example, if motion is constrained to take place within a rectangular region defined by $0 \le x \le L_x$, $0 \le y \le L_y$, then the continuity property that all wavefunctions must obey (because of their interpretation as probability densities, which must be continuous) causes A(x) to vanish at 0 and at $L_x$. Likewise, B(y) must vanish at 0 and at $L_y$.
To implement these constraints for A(x), one must linearly combine the above two solutions to achieve a function that vanishes at x = 0:

$A(x) = \exp\!\big(ix(2mE_x/\hbar^2)^{1/2}\big) - \exp\!\big(-ix(2mE_x/\hbar^2)^{1/2}\big).$

One is allowed to linearly combine solutions of the Schrödinger equation that have the same energy (i.e., are degenerate) because Schrödinger equations are linear differential equations. An analogous process must be applied to B(y) to achieve a function that vanishes at y = 0:

$B(y) = \exp\!\big(iy(2mE_y/\hbar^2)^{1/2}\big) - \exp\!\big(-iy(2mE_y/\hbar^2)^{1/2}\big).$

Further requiring A(x) and B(y) to vanish, respectively, at $x = L_x$ and $y = L_y$ gives equations that can be obeyed only if $E_x$ and $E_y$ assume particular values:

$\exp\!\big(iL_x(2mE_x/\hbar^2)^{1/2}\big) - \exp\!\big(-iL_x(2mE_x/\hbar^2)^{1/2}\big) = 0,$ and
$\exp\!\big(iL_y(2mE_y/\hbar^2)^{1/2}\big) - \exp\!\big(-iL_y(2mE_y/\hbar^2)^{1/2}\big) = 0.$

These equations are equivalent to

$\sin\!\big(L_x(2mE_x/\hbar^2)^{1/2}\big) = \sin\!\big(L_y(2mE_y/\hbar^2)^{1/2}\big) = 0.$

Knowing that $\sin(\theta)$ vanishes at $\theta = n\pi$ for $n = 1, 2, 3, \ldots$ (the choice n = 0 also makes the sine vanish, but it then makes the function vanish for all x or y and is therefore unacceptable, because it represents zero probability density at all points in space), one concludes that the energies $E_x$ and $E_y$ can assume only values that obey

$L_x(2mE_x/\hbar^2)^{1/2} = n_x\pi,$
$L_y(2mE_y/\hbar^2)^{1/2} = n_y\pi,$ or
$E_x = \frac{n_x^2\pi^2\hbar^2}{2mL_x^2}$ and $E_y = \frac{n_y^2\pi^2\hbar^2}{2mL_y^2}$, with $n_x, n_y = 1, 2, 3, \ldots$

It is important to stress that it is the imposition of boundary conditions, expressing the fact that the electron is spatially constrained, that gives rise to quantized energies. In the absence of spatial confinement, or with confinement only at x = 0 or $L_x$, or only at y = 0 or $L_y$, quantized energies would not be realized.

In this example, confinement of the electron to a finite interval along both the x and y coordinates yields energies that are quantized along both axes. If the electron were confined along one coordinate (e.g., between $0 \le x \le L_x$) but not along the other (i.e., B(y) is either restricted to vanish at y = 0 or at $y = L_y$ or at neither point), then the total energy E lies in the continuum; its $E_x$ component is quantized but $E_y$ is not. Such cases arise, for example, when a linear triatomic molecule has more than enough energy in one of its bonds to rupture it but not much energy in the other bond; the first bond's energy lies in the continuum, but the second bond's energy is quantized.

Perhaps more interesting is the case in which the bond with the higher dissociation energy is excited to a level that is not enough to break it but that is in excess of the dissociation energy of the weaker bond. In this case, one has two degenerate states: (i) the strong bond having high internal energy and the weak bond having low energy ($\psi_1$), and (ii) the strong bond having little energy and the weak bond having more than enough energy to rupture it ($\psi_2$). Although an experiment may prepare the molecule in a state that contains only the former component (i.e., $\psi = C_1\psi_1 + C_2\psi_2$ with $C_1 \gg C_2$), coupling between the two degenerate functions (induced by terms in the Hamiltonian $\hat{H}$ that have been ignored in defining $\psi_1$ and $\psi_2$) usually causes the true wavefunction $\Psi = \exp(-it\hat{H}/\hbar)\,\psi$ to acquire a component of the second function as time evolves. In such a case, one speaks of internal vibrational energy flow giving rise to unimolecular decomposition of the molecule.
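For a feel for the magnitudes, the quantized energies just derived can be evaluated numerically. A minimal sketch (editor's illustration; the box dimensions are arbitrary choices, not from the text):

# Editor's sketch: E(nx,ny) = (pi^2*hbar^2/2m)*(nx^2/Lx^2 + ny^2/Ly^2)
# for an electron in a 1 nm x 2 nm rectangular region.
import numpy as np

hbar = 1.054571817e-34       # J*s
m_e = 9.1093837015e-31       # kg
eV = 1.602176634e-19         # J
Lx, Ly = 1e-9, 2e-9          # m

def E(nx, ny):
    return (np.pi**2 * hbar**2 / (2 * m_e)) * (nx**2 / Lx**2 + ny**2 / Ly**2)

for nx, ny in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    print(nx, ny, round(E(nx, ny) / eV, 3), "eV")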
3. Energies and Wavefunctions for Bound States

For discrete energy levels, the energies are specified functions that depend on quantum numbers, one for each degree of freedom that is quantized. Returning to the situation in which motion is constrained along both axes, the resultant total energies a...
I have been wondering about the axiom of choice and how it relates to physics. In particular, I was wondering how many (if any) experimentally verified physical theories require the axiom of choice (or well-ordering), and whether any theories actually require constructibility. As a math student, I have always been told the axiom of choice is invoked because of the beautiful results that transpire from its assumption. Do any mainstream physical theories require AC or constructibility, and if so, how do they require it?

I've never bothered tracing what depends on AC and what doesn't, but I suspect it runs deep enough to touch most of the math underlying physics. For instance, it's good to know that we're talking about something that exists when we use bases for infinite-dimensional vector spaces. – Chris White Nov 10 '12 at 4:41

I think that the Banach-Tarski theorem, which depends crucially upon the axiom of choice, may have some physical meaning – e.g. in terms of the creation of more than one particle out of one, when given enough energy. However, the question of whether this is so or not belongs more to the domain of philosophy than physics. – user10001 Nov 10 '12 at 12:20

@ChrisWhite: right, however physicists very often assume other things that actually don't exist for e.g. general infinite-dimensional vector spaces, neither with nor without the axiom of choice. – leftaroundabout Nov 10 '12 at 12:45

I suspect much of physics wouldn't need the full-strength AC. Much can be done with countable AC. But, as @ChrisWhite says, measure theory would founder without full AC, although I suspect someone will come up with a measure theory without full AC one day. For me, though, the classic example that catches my eye here is not measure theory but Tychonov's theorem - the product of compact sets is compact - which is equivalent to AC, yet is a very applied-maths-sounding theorem: it would be hard to say "throw that one out" - it's going to underpin many mathematical physics ideas. – WetSavannaAnimal aka Rod Vance Nov 3 '13 at 23:06

Accepted answer:

No, nothing in physics depends on the validity of the axiom of choice, because physics deals with the explanation of observable phenomena. Infinite collections of sets – and they're the issue of the axiom of choice – are obviously not observable (we only observe a finite number of objects), so experimental physics may say nothing about the validity of the axiom of choice. If it could say something, it would be very paradoxical, because the axiom of choice is about pure maths, and moreover, maths may prove that the systems with AC and with non-AC are equally consistent.

Theoretical physics is no different, because it deals with various well-defined, "constructible" objects such as spaces of real or complex functions or functionals. For a physicist, just like for an open-minded evidence-based mathematician, the axiom of choice is a matter of personal preferences and "beliefs". A physicist could say that any non-constructible object, like a particular selected "set of elements" postulated to exist by the axiom of choice, is "unphysical". In mathematics, the axiom of choice may simplify some proofs, but if I were deciding, I would choose a stronger framework in which the axiom of choice is invalid. A particular advantage of this choice is that one can't prove the existence of unmeasurable sets in the Lebesgue theory of measure.
Consequently, one may add a very convenient and elegant extra axiom that all subsets of real numbers are measurable – an advantage that physicists are more likely to appreciate, because they use measures often, even if they don't speak about them.

You're really hating on the axiom of choice, and it's not clear why. If you want a new measure theory, you're perfectly free to come up with a new definition of "measure." No need to throw out a huge chunk of math to do it. And all the "open-minded" mathematicians you speak of died a long time ago. – Chris White Nov 10 '12 at 20:17

I am not "hating it", I am mostly indifferent towards it and slightly prefer non-AC over AC. I hope it's not a heresy yet. ;-) No one needs to throw away any papers in maths – I just said that the detailed technical parts of those papers that depend on the axiom of choice are irrelevant for physics and irrelevant for any branch of maths that resembles the methods in physics. And that there's no scientific evidence - and can't be any scientific evidence - in favor of or against the axiom of choice. – Luboš Motl Nov 10 '12 at 21:12

Whether someone died isn't decisive about statements of validity and consistency of assumptions and theories in maths or science. And the independence of the axiom of choice from the other axioms - i.e. the consistency of the other axioms with AC as well as with non-AC - was proved by Paul Cohen in the 1960s. Whoever doesn't understand that this means that AC and non(AC) are equally consistent with maths shouldn't call himself or herself a mathematician. Maybe he or she is an activist, but not a rationally thinking person. – Luboš Motl Nov 10 '12 at 21:15

The following paper may be of interest: Norbert Brunner, Karl Svozil, Matthias Baaz, "The Axiom of Choice in Quantum Theory", Mathematical Logic Quarterly, vol. 42 (1), pp. 319-340 (1996). The abstract is as follows: We construct peculiar Hilbert spaces from counterexamples to the axiom of choice. We identify the intrinsically effective Hamiltonians with those observables of quantum theory which may coexist with such spaces. Here a self-adjoint operator is intrinsically effective if and only if the Schrödinger equation of its generated semigroup is soluble by means of eigenfunction series expansions.

Also relevant is the fact that classical analysis doesn't require much more than dependent choice, which is consistent with "all sets of reals are Lebesgue measurable". However, the consistency of the combination of the two statements requires a stronger assumption (an inaccessible cardinal).

What does baffle me, however, about physicists who have strong objections to the Banach-Tarski paradox is that it makes much less sense that a set can be partitioned into strictly more [non-empty] parts than it has elements. And that is a consequence of having all sets Lebesgue measurable. So while you may sleep quietly knowing that you cannot partition an orange into five parts and recombine the parts into two oranges (thus solving world hunger), you have an equally disturbing problem: you can cut a line [read: the real numbers] into more parts than it has points.

Rigorous arguments in functional analysis are made much simpler by employing the axiom of choice. As we are free to model our physics in any set theory we like, and any set theory containing ZF contains a model of ZFC, we are entitled to use this simplification without fear of inconsistency.
Discarding the axiom of choice would only make concepts and proofs more tedious, without giving any higher degree of assurance of the results. For example, the standard proof of the spectral theorem for self-adjoint operators depends on the axiom of choice, I believe, and much in mathematical physics depends on the spectral theorem. On the other hand, already on the level of theoretical physics, one often freely replaces integrals by finite sums, takes limits irrespective of their mathematical existence, and employs lots of other mathematically dubious trickery to get quickly at the results. So on this level of reasoning, nothing depends on subtleties that make a difference only when one begins to care about precise definitions and arguments in the presence of infinity.

I taught a graduate course in math physics a couple years ago where I proved the spectral theorem for unbounded operators in separable Hilbert spaces. I did not use the AC but just the (countable) axiom of dependent choice. The two places in the proof that I remember where the AC can show its ugly face are: 1) decomposing the space into cyclic subspaces, 2) things around the Riesz-Markoff theorem for constructing spectral measures of given vectors. 1) is fine in a separable space, but 2) gave me more trouble. There was the issue with mass escaping at infinity and all that... – Abdelmalek Abdesselam Aug 5 '15 at 13:58

...which in its simplest form relates to the dual of $l^{\infty}$. When I searched the literature, I was quite shocked to discover that $(l^{\infty})'\neq l^1$ can only be proven using AC. Because of that, I have now adopted the point of view that no mathematical physics should use the metaphysics of the AC, somewhat similar to Luboš' view. – Abdelmalek Abdesselam Aug 5 '15 at 14:01

@AbdelmalekAbdesselam: Why does AC have an ugly face? The problems inherited by and avoided by AC are well understood, and that there are problems only shows that once beyond the countable realm one has multiple notions of infinity, so that one can choose the more tractable one. Since the constructible model within any model of ZF satisfies ZFC, there cannot be anything wrong with the axiom. – Arnold Neumaier Aug 25 '15 at 16:08

As you know, de gustibus non est disputandum, so I find the AC ugly, you find it beautiful, but there is no problem with this disagreement. I am perfectly fine with pure mathematicians working on or using the AC. However, in mathematical physics I don't think it should be used, and so far I have not seen an instance where one needs to use it. – Abdelmalek Abdesselam Sep 4 '15 at 12:36

The textbook formulation of functional analysis depends on the axiom of choice, e.g. via Hahn-Banach. This means that discarding the axiom of choice will break the textbook formulation of quantum mechanics as well. However, as we're dealing with (separable) Hilbert spaces, there exist countable bases, and we should be able to replace the axiom of choice with a less 'paradoxical' alternative like the Solovay model and still get the right physics. The full Hahn-Banach theorem cannot be recovered, though, as it implies the existence of an unmeasurable set.
Your suggestion that one uses the AC with infinite bases in QM is wrong, too. All the structures that matter in QM, like the Hilbert space of L^2 integrable functions (well, some equivalence classes), are continuous and well-behaved, incompatible with the discrete AC-like selection. – Luboš Motl Nov 10 '12 at 7:21 @LubošMotl: please re-read my answer - I do not disagree – Christoph Nov 10 '12 at 7:29 @LubošMotl: clarified my answer a bit, but imo it was fine as it was... – Christoph Nov 10 '12 at 7:43 All the functional analysis that physicists use it is restricted to cases where the Hahn-Banach theorem is only used with at worst countable dependent choice, and you really don't need it for physics, as Lubos Motl explains clearly. This is pro-choice FUD, the "full Hahn-Banach theorem" is going on about vector spaces of basis size aleph_continuum, and nonsense like that. – Ron Maimon Nov 11 '12 at 4:54 Your Answer