Emotions, Diffusive Emotional Control and the Motivational Problem for Autonomous Cognitive Systems
C. Gros (J.W. Goethe University Frankfurt, Germany)
DOI: 10.4018/978-1-60566-354-8.ch007
All self-active living beings need to solve the motivational problem—the question of what to do at any moment of their life. For humans and non-human animals at least two distinct layers of motivational drives are known: the primary needs for survival and the emotional drives leading to a wide range of sophisticated strategies, such as explorative learning and socializing. Part of the emotional layer of drives has universal facets, being beneficial in an extended range of environmental settings. Emotions are triggered in the brain by the release of neuromodulators, which are, at the same time, the agents for meta-learning. This intrinsic relation between emotions, meta-learning and universal action strategies suggests a central role for emotional control in the design of artificial intelligences and synthetic cognitive systems. An implementation of this concept is proposed in terms of a dense and homogeneous associative network (dHan).
Chapter Preview
In order to elucidate the general functional purposes of emotions we start by considering the motivational problem of self-determined living creatures, whether biological or artificial. We use here and in the following the general term `cognitive system’ for such an autonomous and self-determined being. The question then regards the general motivational drives for cognitive systems.
Key Terms in this Chapter
Diffusive Control: In biological cognitive systems, diffusive control is intrinsically related to the release of neuromodulators. Neuromodulators are generally released into the inter-neural medium, from where they physically diffuse, affecting a large ensemble of surrounding neurons. The neuromodulators do not directly affect the cognitive information processing, viz the dynamical state of individual neurons. They act as the prime agents for transmitting extended signals for meta learning. Diffusive control signals come in two versions, neutral and emotional. (A) Neutral diffusive control is activated automatically when certain conditions are present in the cognitive system, irrespective of the frequency and level of past activations of the diffusive control. (B) Emotional diffusive control has a preset preferred level of activation frequency and strength. Deviation from the preset activity level results in negative reinforcement signals, viz the system feels `uneasy’ or `uncomfortable’.
Motivational Problem: Biological cognitive systems are `autonomous’, viz they decide by themselves what to do. Highly developed cognitive systems, like those of mammals, regularly respond to sensory stimuli and information but are generally not driven by the incoming sensory information, i.e. the sensory information does not force them into any specific action. The motivational problem then deals with the central issue of how a highly developed cognitive system selects its actions and targets. This is the domain of instincts and emotions, even for humans. Note that rational selection of a primary target is impossible, rational and logical reasoning being useful only for the pursuit of primary targets set by the underlying emotional network. Most traditional research in artificial intelligence disregards the motivational problem, assuming that internal primary goal selection is non-essential and that explicit primary target selection by supervising humans is both convenient and sufficient.
Autonomous Cognitive System: Cognitive systems are generally autonomous, i.e. self-determined, setting their own goals. This implies that they are not driven, under normal circumstances, by external sensory signals. That is, an autonomous cognitive system is not forced to perform a specific action by a given sensory stimulus. Autonomy does not exclude the possibility of acquiring information from external teachers, given that internal mechanisms allow an autonomous cognitive system to decide whether or not to focus attention on external teaching signals. In terms of a living dynamical system, an autonomous cognitive system possesses a non-trivial and self-sustained dynamics, viz an ongoing autonomous dynamical activity.
Living Dynamical System: A living dynamical system is a dynamical system containing a set of variables denoted `survival variables’. The system is defined to be living as long as the values of these variables remain inside a certain preset range, and defined to be dead otherwise. Cognitive systems are instances of living dynamical systems, and the survival variables correspond, in the case of a biological cognitive system, to heart rate, blood pressure, blood sugar level and so on.
Universal Cognitive System: Simple cognitive systems are mostly ruled by preset stimulus-reaction rules. E.g. an earthworm will automatically try to meander towards darkness. Universal principles, i.e. algorithms applicable to a wide range of different environmental settings, become however predominant in highly developed cognitive systems. We humans, to give an example, are constantly, and most of the time unconsciously, trying to predict the outcome of actions and movements taking place in the world around us, even if these outcomes are not directly relevant for our intentions at the given time, allowing us to extract regularities in the observed processes for possible later use. Technically this attitude corresponds to a time-series prediction task, which is quite universal in its applicability. We use it, e.g., to unconsciously acquire knowledge of the way a soccer ball rolls and flies, as well as to extract, from the sentences we listen to, the underlying grammatical rules of our mother tongue.
Complex System Theory: Complex system theory deals with `complex’ dynamical systems, viz with dynamical systems containing a very large number of interacting dynamical variables. Preeminent examples of complex systems are the gene-regulation network at the basis of all life, self-organizing phase transitions in physics like superconductivity and magnetism, and cognitive systems, the latter being the most sophisticated and probably also the least understood of all complex dynamical systems.
Dynamical System: A dynamical system is a set of variables together with a set of rules determining the time-development of these variables. The time might be either discrete, viz 1, 2, 3, ..., or continuous. In the latter case the dynamical system is governed by a set of differential equations. Dynamical system theory is at the heart of all natural laws, famous examples being Newton’s laws of classical mechanics, the Schrödinger equation of quantum mechanics and Einstein’s geometric theory of gravity, general relativity.
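A minimal sketch of a discrete-time dynamical system (the logistic map is used here purely as a familiar example of an update rule; the variable names are our own):

```python
# A discrete-time dynamical system: one variable x and one update rule
# mapping x(t) to x(t+1). The logistic map serves as a familiar example.

def logistic_step(x, r=3.7):
    """One application of the update rule x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

x = 0.2                      # initial condition
trajectory = [x]
for t in range(20):          # discrete time t = 1, 2, 3, ...
    x = logistic_step(x)
    trajectory.append(x)

print(trajectory[:5])
```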
Biologically Inspired Cognitive System: In principle one may attempt to develop artificial cognitive systems starting from an empty blueprint. Biological cognitive systems are, however, at present the only existing real-world autonomous cognitive systems we know of, and it makes sense to make good use of the general insights obtained by neurobiology when outlining cognitive system theory. An example of such a paradigmatic insight is the importance of competitive neural dynamics, viz of neural ensembles competing with each other, trying to form winning coalitions of brain regions while transiently suppressing the activity of other neural ensembles. Another example is the intrinsic connection between diffusive emotional control and learning mechanisms involving reinforcement signals.
Meta Learning: Meta learning and `homeostatic self-regulation’ are closely related. Both are needed for the long-term stability of the cognitive system, regulating internal thresholds, learning rates, attention fields and so on. They do not directly affect the primary cognitive information processing, e.g. they do not directly change the firing state of individual neurons, nor do they affect the primary learning, i.e. changes of synaptic strengths. The regulation of the sensitivity of synaptic plasticity with respect to the pre- and post-synaptic firing states is, on the other hand, a prime task for both meta learning and homeostatic self-regulation. Homeostatic self-regulation is local, always active and present, irrespective of any global signal. Meta learning is, on the other hand, triggered by global signals, the diffusive control signals, generated by the cognitive system itself through distinct sub-components.
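The division of labour between the two mechanisms might be sketched as follows (a toy model; the class, the update rules and all constants are our own illustrative assumptions, not the chapter's implementation):

```python
import random

class Neuron:
    def __init__(self):
        self.threshold = 1.0
        self.learning_rate = 0.1
        self.recent_activity = 0.0

    def homeostatic_update(self, activity, target=0.5, gain=0.01):
        # Local, always-on regulation: nudge the firing threshold so that
        # the neuron's average activity drifts towards a target level.
        self.recent_activity = 0.9 * self.recent_activity + 0.1 * activity
        self.threshold += gain * (self.recent_activity - target)

    def meta_learning_update(self, diffusive_signal, gain=0.5):
        # Triggered only when a global diffusive control signal arrives:
        # it modulates the plasticity (learning rate), not the firing state.
        self.learning_rate *= (1.0 + gain * diffusive_signal)

neuron = Neuron()
for step in range(100):
    activity = random.random()
    neuron.homeostatic_update(activity)        # runs every step, locally
    if step % 25 == 0:                         # sparse global event
        neuron.meta_learning_update(diffusive_signal=0.2)
```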
Cognitive System: A cognitive system is an abstract entity, consisting of the set of equations determining the time-evolution of the internal dynamical variables. It needs a physical support unit in order to function properly, a fact also denoted `embedded intelligence’. The primary task of a cognitive system is to retain functionality in certain environments. For this purpose it needs an operational physical support unit for acting and for obtaining sensory information about the environment. The cognitive system remains operational as long as its physical support unit, its body, survives. A cognitive system might be either biological (humans and non-human animals) or synthetic. Non-trivial cognitive systems are capable of learning and of adapting to a changing environment. High-level cognitive systems may show various degrees of intelligence.
Physical Support Unit: Also denoted the `body’ for biological cognitive systems. Generally it can be subdivided into four functionally distinct components. (A) The component responsible for evaluating the time-evolution equations of the cognitive system, viz the brain. (B) The actuators, viz the limbs, responsible for processing the output signals of the cognitive system. (C) The sensory organs, providing appropriate input information on both the external environment and the current status of the physical support unit. (D) The modules responsible for keeping the other components alive, viz the internal organs. Artificial cognitive systems possess equivalent functional components.
Reinforcement Signal: Reinforcement signals can be either positive or negative, i.e. a form of reward or punishment. The positive or negative consequences of an action, or of a series of consecutive actions, are taken to reinforce or to suppress the likelihood of selecting the same set of actions when confronted with a similar problem-setting in the future. A reinforcement signal can be generated by a cognitive system only when a nominal target outcome is known. When this target value is given `by hand’ from the outside, viz by an external teacher, one speaks of `supervised learning’. When the target value is generated internally one speaks of `unsupervised learning’. The internal generation of meaningful target values constitutes the core of the motivational problem.
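A minimal sketch of action selection shaped by a scalar reinforcement signal (a standard softmax-preference construction used here only for illustration; it is not the dHan model):

```python
import math, random

actions = ["explore", "rest", "socialize"]
preference = {a: 0.0 for a in actions}

def select_action():
    # Softmax over preferences: higher preference -> more likely choice.
    weights = [math.exp(preference[a]) for a in actions]
    r = random.uniform(0, sum(weights))
    for a, w in zip(actions, weights):
        r -= w
        if r <= 0:
            return a
    return actions[-1]

def reinforce(action, signal, rate=0.3):
    # A positive signal (reward) raises the likelihood of repeating the
    # action; a negative signal (punishment) suppresses it.
    preference[action] += rate * signal

a = select_action()
reinforce(a, signal=+1.0)   # reward: the same action is more likely next time
```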
Condensed Matter Physics
Scanning probe microscopies
TEM and SEM are based on the interaction of an electron beam with a sample, either by analysing the selective absorptivity of regions of the sample (TEM) or by probing a variety of scattering mechanisms occurring as the beam is scanned across the sample (SEM).
Scanning probe microscopies, on the other hand, are based on the concept of an interaction between a fine tip of atomic dimensions and a sample surface. Different types of interaction can be utilised, but in all cases an image of exclusively the surface topography of the sample is recorded point by point. The lack of depth information is usually considered an advantage rather than an impediment if topography is the object of study, since it allows one to concentrate on this aspect of the structure without having to filter out signal received from buried layers, which may be difficult to distinguish.
Blunt vs. sharp tip in scanning probe techniques.
Scanning probe techniques rely on the use of an atomically sharp tip. These techniques measure interactions between atoms on either side of the gap between the tip and the specimen surface. If the tip is sharp, these interactions relate to specific atoms on either side of the gap which are closest to one another at any given moment, producing a crisp image with potentially atomic resolution. However, if the tip is blunt, there will be several atom pairs at similar distances, and as a result the image will be blurry at best. Tips therefore are consumables and have to be replaced when worn.
The two main scanning-probe techniques in general use are Scanning Tunnelling Microscopy (STM) and Atomic Force Microscopy (AFM).
Scanning tunnelling microscopy
The scanning tunnelling microscope is based on the concept of quantum-mechanical tunnelling. The wave function of an electron in a square well can be obtained by solving the Schrödinger equation for a particle in a well. The boundary condition that requires the wave function to decay to zero at the edges of an infinite well forces the solution to be a sine wave. The probability density of the electron is proportional to the square of the wave function (shown in the top part of the figure).
In the case of a finite potential well (as in the case of the gap between tip and sample), the only boundary condition applying at the edge of the well is that the wave function is continuous. This means that the oscillatory function inside the well turns into a decaying exponential inside the barrier. The same situation occurs on the other side of the gap. The tunnelling probability depends on the overlap integral of the two decaying wave functions inside the barrier.
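The decay inside the barrier can be made quantitative. For a rectangular barrier of height $\phi$ (of the order of the work function of the materials) and width $d$ (the tip-sample gap), the wave function inside the barrier falls off as $e^{-\kappa z}$, leading to the standard estimate for the transmission probability

$\kappa = \sqrt{2 m \phi}/\hbar, \qquad T \propto e^{-2 \kappa d}.$

For a typical work function $\phi \approx 4\,$eV one finds $\kappa \approx 1\,$Å$^{-1}$, so the tunnelling current drops by roughly an order of magnitude for every additional ångström of gap. This extreme sensitivity is the origin of the vertical resolution of STM.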
Of course a real tip or surface atom isn't simply a square well. This changes the precise shape of the wave function but not the underlying principle.
Tip and surface.
The basic idea is quite straightforward: Each of the surface atoms of the sample is a quantum well. A sharp tip with a single atom at its point is moved across the sample. A bias voltage is applied across the gap between tip and sample. If the tip is close enough to the surface, electrons from one side of the gap can tunnel through the finite potential barrier that is the gap and reach the atom (well) on the opposite side. As long as the bias is maintained, a steady stream of single electrons crosses the gap: the tunnelling current.
Scanning with fixed tip and sample.
The easiest way to scan a surface would be to move either the tip or the sample while measuring the tunnelling current. If there is an island on the surface, the tunnelling current would be enhanced because the gap would narrow correspondingly. Similarly, a groove in the surface leads to a widening of the gap and hence a decrease in the current.
The problem with this approach is that if the sample is terraced or the surface is slightly tilted, the tip will crash sideways into the obstacle and be destroyed. Since such effects are on the scale of the surface features to be observed, this would invariably occur in every single experiment at some stage.
Scanning at fixed current.
To avoid this, the distance between the tip and the sample is adjusted continuously so as to keep the tunnelling current at a constant level. This requires a third piezo-driven axis in addition to the two scanning axes. It also requires electronics whose response time is fast enough to drive the $z$ piezo before the tip reaches the object that causes the increase in tunnelling current.
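A caricature of this feedback loop (a toy proportional controller on a one-dimensional line scan; every constant is a placeholder, and real STM electronics use carefully tuned control loops):

```python
import math

SETPOINT = 1.0      # desired tunnelling current (arbitrary units)
KAPPA = 1.0         # decay constant per angstrom (placeholder value)
GAIN = 0.4          # feedback gain (placeholder)
NOMINAL_GAP = 5.0   # gap at which the current equals the setpoint

def tunnel_current(gap):
    # I ~ exp(-2*kappa*(d - d0)): a smaller gap gives an exponentially
    # larger current.
    return math.exp(-2 * KAPPA * (gap - NOMINAL_GAP))

z = NOMINAL_GAP                       # tip height above a flat reference plane
surface = [0.0, 0.0, 0.5, 0.5, 0.0]  # topography with a small island
for x, h in enumerate(surface):
    current = tunnel_current(z - h)
    z += GAIN * math.log(current / SETPOINT)   # retract if current too high
    print(f"x={x}  z={z:.3f}")        # the recorded z trace maps the topography
```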
Atomic-resolution STM image of HOPG graphite with structural model.
Image © LP Biró (Ctr Nat Sci, Budapest); P Lambin (Univ Namur).
STM is capable in principle of delivering images with atomic resolution since it is not based on an interaction with waves and therefore doesn't have a diffraction limit. Of course there are some technical limitations on the achievable resolution, such as image stabilisation (avoiding environmental vibrations), stray electric or magnetic fields, deformed imaging tips, or extraneous materials deposited on the surface. If these problems can be overcome, however, it is possible to image surface features such as grain boundaries (Fig.), depicting the positions of individual atoms. For STM to work with atomic resolution does require an undistorted periodic lattice structure, because irregularities cause vibrations of the tip and thereby a loss of focus. This can be seen in the Fig., where the atoms involved in the grain boundary appear fuzzier than those in the undistorted grain interior.
One severe limitation of STM is that electrically conducting samples are required in order to produce a steady current across the gap between the tip and the sample.
Atomic force microscopy
Atomic force microscopy.
An alternative to STM that doesn't require samples to be electrically conducting is Atomic Force Microscopy (AFM). Here, an atomically sharp tip is attached to a thin, springy cantilever which bends as a result of the interatomic forces between the tip and atoms in the sample surface. This small dynamic response is magnified using a laser reflected at the back of the cantilever; the extent of deflection of the laser beam is a measure of the strength of the interaction of the tip with the sample.
AFMs can operate in three different modes depending on how close the tip comes to the surface. Given that the interaction between the tip and the surface is effectively the transient formation of a chemical bond, it is difficult to say when exactly the tip is 'in contact' with the sample. However, in the terminology used, contact mode means that a full chemical bond is formed between the tip atom and an atom in the surface of the sample, which essentially means that the tip gets as close as the repulsive component of the interatomic force will allow. This causes the largest possible deflection of the tip cantilever, but the strong interaction between the tip and the sample can easily lead to damage to both and thus to artefacts. In tapping mode, the cantilever oscillates at its own resonant frequency. As the tip approaches the surface in the course of this oscillation, the interaction with the surface waxes and wanes periodically. The resonant frequency is removed from the signal, and the difference is representative of the strength of interaction. This mode is less taxing on the tip and the sample. Finally, for very delicate samples, it is possible to use non-contact mode and avoid the temporary formation of chemical bonds altogether. In this case, only weak long-range interactions such as the van der Waals force are monitored. Of course, the deflections are less prominent in this case, leading to a weaker and typically fuzzier image. AFM copes well with ambient conditions, and it is even possible to image surfaces covered with a thin liquid film.
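The three regimes can be located on a generic interatomic force curve. Below is a sketch using a Lennard-Jones potential as a stand-in for the true tip-sample interaction (the parameter values and distance cut-offs are arbitrary):

```python
# Force from a Lennard-Jones pair potential, F(r) = -dV/dr with
# V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6). The LJ form is only a
# generic stand-in for the tip-sample interaction.

def lj_force(r, eps=1.0, sigma=1.0):
    return 24 * eps * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

for r in [0.95, 1.15, 2.0]:
    f = lj_force(r)          # positive = repulsive, negative = attractive
    regime = ("contact (repulsive wall)" if f > 0
              else "tapping (near the force minimum)" if r < 1.5
              else "non-contact (weak van der Waals tail)")
    print(f"r={r:4.2f}  F={f:+8.3f}  -> {regime}")
```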
AFM image of diblock henges.
Diblock co-polymer morphologies.
Image © FS Bates, Univ. of Minnesota.
AFM images are false colour images (as are STM images), i.e. the colour shown represents the deflection of the tip within a range of values. AFM images reflect the topography of the surface of the sample, without any information about sub-surface features.
The AFM images (Figs.) show the surface of a diblock co-polymer film on a silicon substrate after exposure to different solvent vapours, causing different morphologies to prevail. The open rings ('henges') and white circles represent structures which protrude from the baseline height, while the black circles are small holes in the film. A histogram shows the distribution of pixels at a given height within the area sampled.
Scanning artefacts in AFM image.
Images typically have to be processed to remove scanning artefacts, such as fluctuations in the background height from which islands and depressions in the surface are measured, as well as distortions caused by large objects which bend the cantilever to such an extent that it cannot flip back to its relaxed position immediately after the tip has passed the obstacle. Any features which appear to be exactly horizontal, i.e. follow the direction of scanning, are generally suspect and need to be investigated by rotating the sample.
Comparison of microscopy techniques
The following table summarises some aspects of the microscopy techniques covered here.
| | TEM | SEM | STM | AFM |
|---|---|---|---|---|
| probe | electron beam | electron beam | tip | tip |
| image formation | direct | scanning | scanning | scanning |
| interaction | absorption | various | electron tunnelling | interatomic force |
| depth information | depth focus | sub-surface | none | none |
| chemical specificity | no | possible | no | limited |
| sample req. | very thin | conducting surface | conducting | any |
| artefacts | charging, obscuration | charging | tip damage | tip damage |
This concludes the section on microscopy techniques supporting condensed matter physics. We will next expand on the topic of crystallography and diffraction covered in the Concepts lecture, starting from a brief review of the Bravais lattices used to describe crystal structures. |
Jones calculus
A quaternion-valued wave equation $\Psi_{tt} = -D^2 \Psi$ can be solved as usual with a d'Alembert solution $\Psi(t) = \cos(D t) \Psi(0) + \sin(D t) D^{-1} \Psi'(0)$. We can write this more generally as $e^{\beta D t} (u(0) - \beta v(0))$, where $\beta$ is a unit space quaternion and $\Psi(0) = u(0) - \beta v(0)$ is the initial wave. Now, $\exp(\beta x) = \cos(x) + \beta \sin(x)$ holds for any unit space quaternion $\beta$. Unlike in the complex case, we now have an entire 2-sphere of choices for $\beta$. If $u(0)$ and $v(0)$ are real, then we stay in the plane spanned by $1$ and $\beta$. If $u(0)$ and $v(0)$ lie in different planes, then the wave will evolve inside a larger part of the quaternion algebra.
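The identity $\exp(\beta x) = \cos(x) + \beta \sin(x)$ is easy to check numerically. Here is a small sketch representing a quaternion $a + bi + cj + dk$ as the 4-vector $(a,b,c,d)$ (the helper names are ours):

```python
import math
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as arrays (a, b, c, d).
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def qexp_series(q, terms=30):
    # exp(q) via its power series, accumulating q**n / n! term by term.
    result = np.array([1.0, 0, 0, 0])
    power = np.array([1.0, 0, 0, 0])
    for n in range(1, terms):
        power = qmul(power, q) / n
        result = result + power
    return result

# Any unit "space" quaternion (zero real part) will do -- a whole 2-sphere
# of choices, unlike the single i of the complex numbers.
beta = np.array([0.0, 1.0, 1.0, 1.0]) / math.sqrt(3)
x = 0.7
lhs = qexp_series(x * beta)
rhs = math.cos(x) * np.array([1.0, 0, 0, 0]) + math.sin(x) * beta
print(np.allclose(lhs, rhs))   # True
```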
Also as before, the wave equation has not been put in artificially. It appears when letting the system move freely in its symmetry. In the limit of deformation we are given an anti-symmetric matrix $B = \beta (b+b^*)$ and get a unitary evolution $\exp(i B t)$. As we have used Pauli matrices to represent the quaternion algebra on $C^2$, a wave is now given as a pair $(\psi(t),\phi(t))$ of complex waves. Using pairs of complex vectors is nothing new in physics. It is the Jones calculus, named after Robert Clark Jones (1916-2004), who developed this picture in 1941. Jones was a Harvard graduate who obtained his PhD in 1941 and, after some postdoc time at Bell Labs, worked until 1982 at the Polaroid Corporation.
Why would a photography company employ a physicist dealing with quaternion-valued waves? The Jones calculus deals with the polarization of light. It applies if the electromagnetic waves $F = (E,B)$ have a particular form where $E$ and $B$ are both in a plane and perpendicular to each other. Remember that light is described by a 2-form $F = dA$ which has in 4 dimensions $\binom{4}{2} = 6$ components, three electric and three magnetic. The Maxwell equations $dF = 0$, $d^* F = 0$ are then, in a Lorentz gauge $d^* A = 0$, equivalent to a wave equation $L A = 0$, where $L$ is the Laplacian in Lorentz space. Now, if light is polarized, one can describe it with a complex 2-vector $\Psi = (u,v)$ rather than by giving the 6 components $(E,B)$ of the electromagnetic field. How is this applied? Sunlight arrives unpolarized, but when scattering off a surface it picks up an amount of polarization. Polarized sunglasses filter out part of this light, reducing the glare of reflected light. The effect is also used in LCD technology and for glasses worn in 3D movies. It is not only used for light: in radio-wave technology, polarization can be used to “double book” frequency channels, and for radar, using polarized waves can help to avoid seeing rain drops. Even nature has made use of it: octopuses and cuttlefish are able to see polarization patterns. See the encyclopedia entry for more. Mathematically the relation with quaternions is no surprise, because the linear fibre of a 1-form $A(x)$ at a point is 4-dimensional. Describing the motion of the electromagnetic field potential $A$ (which satisfies the wave equation) is therefore equivalent to a quaternion-valued field.
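Concretely, the Jones picture represents a polarization state as a complex 2-vector and each optical element as a 2x2 matrix acting on it. A minimal sketch using standard textbook Jones matrices (not tied to the quaternion setup above):

```python
import numpy as np

# Jones vectors: polarization states as complex 2-vectors, normalized so
# that |psi|^2 = intensity = 1.
horizontal = np.array([1.0, 0.0], dtype=complex)
diagonal   = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

# Jones matrices: optical elements as 2x2 complex matrices.
polarizer_h  = np.array([[1, 0],
                         [0, 0]], dtype=complex)   # horizontal polarizer
quarter_wave = np.array([[1, 0],
                         [0, 1j]], dtype=complex)  # quarter-wave plate, fast
                                                   # axis horizontal (up to a
                                                   # global phase)

# Diagonal light through a quarter-wave plate -> circular polarization.
print(quarter_wave @ diagonal)                     # (1, i)/sqrt(2)

# Intensity after a horizontal polarizer (Malus' law at 45 degrees: 1/2).
print(abs(np.linalg.norm(polarizer_h @ diagonal))**2)   # 0.5
```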
We have to stress however that the connection between a quaternion-valued quantum mechanics and the wave motion of the electromagnetic field is mostly a mathematical one. First of all, we work in a discrete setup over an arbitrary finite simplicial complex. We don't even have to take the de Rham complex: any elliptic complex $D = d + d^*$ as described in a discrete Atiyah-Singer setup will do. The Maxwell equations don't even need to involve 1-forms. If $E \oplus F = \oplus E_k + \oplus F_k$ is the arena of vector spaces on which $D: E \to F, F \to E$ acts, then one can see, for a given $j \in D_k$, the equations $dF = 0$, $d^* F = j$ as the Maxwell equations in that space. For $F = dA$ and gauge $d^* A = 0$, the Maxwell equations reduce to the Poisson equation $D^2 A = j$, which in the absence of a “current” $j$ gives the wave equation $D^2 A = 0$, meaning that $A$ is a harmonic $k$-form. Now, in a classical de Rham setup on a simplicial complex $G$, $A$ is just an anti-symmetric function on $k$-dimensional simplices of the complex. Still, in this setup, when describing light on a space of $k$-forms, it is given by real-valued functions. If we Lax deform the elliptic complex, then the exterior derivatives become complex, but the harmonic forms do not change because the Laplacian does not change. Also note that we don't incorporate time into the simplicial complex (yet). Time evolution is given by an external real quantity leading to a differential equation. The wave equation $u_{tt} = -Lu$ can be described as a Schrödinger equation $u_t = i D u$. We have seen that placing three complex evolutions together can give a quaternion-valued evolution. But the waves in that evolution have little to do with the just-described Maxwell equations in vacuum, which just describe harmonic functions in the elliptic complex.
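As a toy instance of the last point, here is the evolution $u_t = i D u$ on a small graph, with $D$ built from the incidence matrix of a triangle (a sketch under our own conventions; since $D$ is symmetric, $\exp(iDt)$ is unitary, and the real and imaginary parts of the evolved wave solve $u_{tt} = -D^2 u$):

```python
import numpy as np
from scipy.linalg import expm

# A triangle graph: 3 vertices, 3 oriented edges.
# d is the exterior derivative on functions: (d f)(edge) = f(head) - f(tail).
d = np.array([[-1.0, 1.0, 0.0],    # edge 0: vertex 0 -> 1
              [ 0.0,-1.0, 1.0],    # edge 1: vertex 1 -> 2
              [ 1.0, 0.0,-1.0]])   # edge 2: vertex 2 -> 0

# Dirac operator D = d + d* on (0-forms) + (1-forms), as a block matrix.
n_v, n_e = d.shape[1], d.shape[0]
D = np.zeros((n_v + n_e, n_v + n_e))
D[n_v:, :n_v] = d
D[:n_v, n_v:] = d.T

# Schrodinger-type evolution psi(t) = exp(i D t) psi(0).
psi0 = np.zeros(n_v + n_e, dtype=complex)
psi0[0] = 1.0                       # initial bump on vertex 0
psi_t = expm(1j * D * 0.5) @ psi0   # evolve to t = 0.5

print(np.round(psi_t, 3))
print("norm conserved:", np.isclose(np.linalg.norm(psi_t), 1.0))
```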
We will deal with the problem of time elsewhere. Suffice it to say for now that describing a spacetime with a finite simplicial complex does not seem to work. It might be beautiful and interesting to describe finite discrete space-times, but one can hardly solve the Kepler problem with them. Mathematically closest to the Einstein equations is to describe simplicial complexes with a fixed number of simplices which have maximal or minimal Euler characteristic among all complexes. Anyway, describing physics with waves evolving on finite geometries is appealing because the mathematics of its quantum mechanics is identical to the mathematics of quantum mechanics in the continuum, except that everything is finite dimensional. Yes, there are certain parts of quantum mechanics which appear to need infinite dimensions, but if one is interested in the PDEs (the Schrödinger respectively the wave equation on such a space), there are many interesting problems already in finite dimensions. The question of how fast waves travel is also interesting in the nonlinear Lax set-up. See this HCRP project from 2016 of Annie Rak. In principle the mathematics of PDEs on simplicial complexes (which are actually ordinary differential equations) has more resemblance with the real thing, because if one numerically computes any PDE using a finite element method, one essentially does this.
[Photograph of Robert Clark Jones. Source: Emilio Segrè Visual Archives.]
There are other places in physics where complex vector-valued fields appear. In quantum mechanics they appear from SU(2) symmetries, two-level systems, isospin or weak isospin. Essentially everywhere two quantities can be exchanged, an SU(2) symmetry appears. A quaternion-valued field is also an example of a non-abelian gauge field. In that case, one is interested (without matter) in the Lagrangian $|F|^2/2$ with $F = dA + A \wedge A$, where $A$ is the connection 1-form. Summing the Lagrangian over space gives the functional. One is then interested in critical points. They satisfy $d_A^* F = 0$, $d_A F = 0$, meaning that they are “harmonic”, similarly to the abelian case, where harmonic functions are critical points of the quadratic Lagrangian. There are differences however. In the Yang-Mills case, one looks at SU(2), meaning that the fields are quaternions of length 1. When we look at the Lax (or, asymptotically for large $t$, the Schrödinger) evolution of quaternion-valued fields $\psi(t)$, then for each fixed simplex $x$, the field value $\psi(t,x)$ is a quaternion, not necessarily a unit quaternion.
[Remark. A naive idea put forward in the “particles and primes allegory” is to see a particle as realized if it has an integer value. The particles and primes allegory draws a striking similarity between structures in the standard model and the combinatorics of primes in associative complete division algebras. The latter is pure mathematics. As there are symmetry groups acting on the primes, it is natural to look at the equivalence classes. The symmetry groups in the division algebras are U(1) and SU(2), but there is also a natural SU(3) action due to the exchange of the space generators $i,j,k$ in the quaternion algebra. This symmetry does not act linearly on the space, but it produces another (naturally called strong) equivalence relation. The weak (SU(2)) and strong equivalence relations combined lead to pictures of Mesons and Baryons among the Hadrons, while the U(1) symmetry naturally leads to pictures of Electron-Positron pairs and Neutrinos in the Lepton case. The nomenclature essentially pairs the particle structure seen in the standard model with the prime structure in the division algebras. As expected, the analogy does not go very far. The fundamental theorem of algebra for quaternions leads to some particle processes like pair creation and annihilation and recombination, but not all. It does not explain for example a transition from a Hadron to a Lepton. The set-up also leads naturally to charges with values 1/3 or 2/3, but not all. Also, while number theory has entered physics in many places, it is not clear why “integers” should appear at all in a quantum field theory. What was mentioned in the particles and primes allegory is the possibility of seeing a particle realized at a simplex $x$ only if the field value is an integer there. As in a non-linear integrable Hamiltonian system like the Lax evolution, soliton solutions are likely to appear, and so, if the wave takes some integer value $p$ at some time $t$ and position $x$, it will at a later time have that value $p$ at a different position. The particle has traveled. But as it has jumped from one vertex to another during that time, it can have changed into a gauge-equivalent particle. If the integer value is not prime, it decomposes as a product of primes. Taking a situation where space is a product of other spaces allows one to model particle interactions. One can then ask why a particle like an electron, modeled by some non-real prime, is so stable, and why, if we model an electron-positron pair by a 4k+1 prime, the positions of the electron and positron are different. A Fock space analogy is to view space as an element in the strong ring, where every part is a particle. Still, the mathematics is the same: we have a geometric space $G$ with a Dirac operator $D$. Time evolution is obtained by letting $D$ move in its symmetry group.]
On the Possibility of Libertarian Free Will
This is a brief attempt to show that while remaining unlikely, the existence of Libertarian Free Will is not an impossibility within the bounds of our current scientific and philosophical understanding.
My aim is to identify both the possibilities and any problems with them that need tackling.
First we need to acknowledge that there are effectively two causal systems in play in the world:
1) the deterministic system of our macro-scale, aggregated, classical physics. Let us call this the D-system.
2) the indeterministic system of micro-scale quantum physics. Let us call this the I-system.
It is uncontroversial that there are brain processes that operate in the D-system, unfolding deterministically. Let us call these D-processes. It may also turn out to be the case that there are brain processes that operate in the I-system, unfolding indeterministically¹. Let us assume this to be the case and call these I-processes.
For our purposes we need to imagine a D-process D that assesses a range of possibilities from which the agent is to choose. It doesn’t matter if D is a conscious process or not. What matters is that, left alone, D will evolve deterministically.
However, in this case D initiates an I-process I that consists in the agent’s imagining of the future, to aid in its decision. We might characterize this event as D requesting data from I, and I returning data to D.
Since the I-system is indeterministic, the data returned will be random. It will however be constrained within the context of the data request (so for example, a request for data on choosing between cheese or chocolate is more likely to return imaginings of taste or waistlines than it is of drowning or London buses).
Data returned from I may or may not be novel with respect to previously existing data available to D. If the data is novel, then depending on its practical feasibility, it may provide additional possibilities from which D can select. Therefore the unfolding of D without the novel data from I may differ from the actual unfolding D with the novel data from I.
From this, we can see that one component of Libertarian Free Will is possible, consisting in the ability of an I-Process to return data to a D-Process, freeing the D-Process from the determinism of the D-System by way of a swerve from the deterministic system to the indeterministic system. However, this implies a random swerving, which is not good enough for Libertarian Free Will. So we will continue.
It is reasonable to suggest that the more data that is returned from I, the more likely it is that D will go on to select a possibility that only became available due to that additional data.
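As a purely illustrative toy (every name and association below is a stand-in, not a claim about neural mechanisms), the D/I interaction described above might be caricatured like this:

```python
import random

def i_process(context):
    # The indeterministic imaginer: returns random data, but constrained
    # by the context of the request (hypothetical associations below).
    related = {"cheese": ["taste", "waistline"],
               "chocolate": ["taste", "waistline", "guilt"]}
    candidates = [x for opt in context if opt in related for x in related[opt]]
    return random.choice(candidates) if candidates else "noise"

def d_process(options, n_requests):
    # The deterministic assessor: left alone it would always settle on
    # options[0]; data returned from I may expand the pool it selects from.
    pool = list(options)
    for _ in range(n_requests):
        datum = i_process(context=pool)
        if datum not in pool:          # novel data -> a new possibility
            pool.append(datum)
    # More requests -> more chance the outcome depends on novel I-data.
    return pool[-1] if len(pool) > len(options) else pool[0]

print(d_process(["cheese", "chocolate"], n_requests=3))
```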
It may also turn out to be the case that the I-system operates in a non-temporal environment². Let us assume this is the case.
Given this assumption, the amount of data returned from I cannot depend on a variable set by D that results in a determined amount of time for I to complete the task. We say this because the amount of time used by the I-process – being atemporal – is always zero. The I-process takes place outside of both a spatial and a temporal structure.
If not determined by a variable set by D then perhaps it is reasonable to think that the amount of data returned from I is determined by something intrinsic to the I-process itself. It could be this variability in the I-process that corresponds to the conscious effort of the will, providing the non-random element of the swerve.
So if an agent exercising free will does so within the I-system, this not only implies that a function of the agent – its imagination – resides in that system, but that at least a part of that which defines its character as an agent – its strength of will – does so too. The question then becomes that of how a semi-persistent character trait can retain its integrity in an indeterministic environment.
I’ll return to this in a future post.
¹ c.f. quantum biology
² c.f. quantum gravity and emergent spacetime
Metaphysical Foundations (Pt4)
Click here for part one of this series.
Part three in this series sketched out an idea for a fundamental ontological division in the Universe. It suggested a sub-Planckian base to reality (coined the Potentiat), and a super-Planckian extension to that base which we observe as the emergent macro properties and phenomena of physics (coined the Instantiat).
This division is not one of substance. The only substance proposed is the content of the Potentiat, and when that content obtains certain states, the additional macro-scale contents of the Instantiat with which we are familiar through observation emerge: spacetime, all its contents, and further emergent levels of properties and phenomena.
So by referring to the content of the Potentiat as “substance” we are not saying that it is physical, and by referring to the content of the Instantiat as “physical” we are not saying that it is substance. Substance here is that which fundamentally exists in non-emergent terms, and physical is the bottom layer of that which emerges. Insofar as the emergent content and workings of the Instantiat are concerned (which are the targets of all science bar Quantum Gravity), we assume a physicalist position. Only then do we extend our ontological commitments, and this is only in response to physicalism's failure to account for the phenomena of quantum mechanics, experiential consciousness and libertarian free will.
As before it is important to note that the ideas presented here do not claim to be anything more than metaphysical speculation. They are guided by my understanding of mainstream scientific models and philosophical arguments, but they are not facts, nor personal beliefs.
What I’ve coined the Potentiat is the Universe below the Planck scale, or more accurately the pre-scale Universe, since the Potentiat is non-spatial. An approach to the nature of the Potentiat, called panprotoexperientialism, was previously suggested.
The idea of consciousness as fundamental has a long history in the form of the philosophical position of panpsychism, and panprotoexperientialism is a variation on that. Panpsychist positions are usually regarded as anti-realist positions, not only granting our conscious experience metaphysical primacy, but also denying the existence of the objective world (not only does the tree make no sound when it falls, but with no-one to observe it, the tree ceases to exist at all).
The version of panprotoexperientialism suggested here is not anti-realist, or more accurately, its target means it's not, because that target is not the Instantiat but the Potentiat.
In the Potentiat we have a metaphysical base that is both proto-experiential (because individual experiencing agents form within it), and proto-physical (because the Instantiat emerges from it). Nothing in the Instantiat on the other hand need be experiential (or even dual-aspect experiential) to accommodate consciousness. Rather the Potentiat should fulfill this role in addition to its role of being the source of the Instantiat.
So here we have a metaphysical monism, where from the single source of the Potentiat emerges both the Instantiat and also what we will call Consciats: individual bound instances of consciousness.
We’ve also previously discussed the Potentiat in terms of a geometric and topological object. The Potentiat consists of the overlaid uninstantiated shapes that the object could take according to the rules that govern it. The Instantiat is the particular shape that obtains, and the temporal unfolding of the Instantiat consists in a sequence of those shapes.
In common with theories like Loop Quantum Gravity and metaphysical ideas like the simple Game of Life and Gregg Rosenberg’s Theory of Natural Individuals, causation here is rooted in the structure and relations of this object’s nodes, each with a differing configuration and number of connections to its neighbours.
The specific rules that govern the scheme are a matter for the various models that posit them, but we might conjecture features that could be explanatory under the system proposed here, or at least that might serve as examples of the kind of features that would do that. If we go beyond a two dimensional visualization of the Potentiat until we have not just the geometry and shape of the object in mind but the topology as well, then we might imagine some interesting features.
Presumably there will be parts of the whole that are topologically simple with minimal connections between nodes, and others that are exceedingly complex with tangles and loops. The properties of these structures will relate to whether and what they Instantiate, both at the bottom layer (waves, particles) and in subsequent emergent layers. However, we should also note that causation might work in both directions, with the emergent layers influencing further development of the Potentiat's structures from the top down, and creating more complexity there. This means the suggestions here need not be fully reductionist in outlook.
Judging complexity is itself a thorny issue, but by most measures the brain is a highly complex object and so would presumably be supported by an equally complex structure in the Potentiat. Perhaps experiential consciousness might be associated with a peculiarity of that proto-experiential base structure?
We might imagine features like closed loops to fulfill this role. Nodes might become isolated from the rest of the system, making only a one-way flow of information possible and corresponding to the same features of subjective experience.
Similarly, the closed structure might contain enough internal nodes to form other structures within. These might have differing properties to each other, but be bound by their mutual containment, and perhaps reinforced by their constant association. This might provide potential solutions to the Binding or Combination Problem that is common to all panpsychist theories.
Turning to libertarian free will, if we assume that it can only be initiated by Consciats, then it also seems reasonable to suggest that devoid of them the Universe would see the Instantiat sequence unfold from the myriad Potentiat possibilities in a very particular way.
This is not to say that the unfolding of the Potentiat would be a deterministic process. The first emergent level of the Instantiat remains quantumly indeterministic in its effects, but without Consciats and the free will process amplifying and directing those indeterminacies, the perturbations of the system would be small and non-cumulative. This means the Instantiat sequence would always quickly and easily settle back to the path through the Potentiat possibilities that conforms to the Principle of Least Action, which is already generally regarded as a fundamental property of the Universe.
This is just akin to saying that without consciousness and free will, the Universe is practically deterministic, but with them it is not, and that free agents influence the path that the Instantiat sequence treads through the Potentiat.
An analogy for this is an old-fashioned Choose Your Own Adventure book. The book is undoubtedly deterministic because it has already been written before it’s opened by the player, but the free choices of the player shape which entries in the story occur in that particular reading. The book as a whole represents the Potentiat, and the particular story told the non-determined Instantiat. The reader represents a Consciat.
To embrace the fact that there are multiple Consciats in the real world, we might update our Choose Your Own Adventure book to a Multi-User Dungeon on a computer, and to represent the Universe before consciousness arose, we might imagine an automated demo mode running through the story, but each time taking the first option displayed because it expends less energy that way.
As with all analogies, ours has to stretch and fail somewhere, and an immediate thought is that here the player or players are outside the system, whereas Consciats are very much embedded in it.
There is more to say here, on how and why consciousness and free will might have evolved, and on how that process could be driven by the protoexperiential nature of the Potentiat. It might also be useful to look at some of the details of the various theories of quantum gravity and philosophical ideas on causation to compare their features. Further, we need to look at possible processes of data transfer between Instantiat, Consciats and Potentiat involved in the proposed processes of experience, cognition (yet to be addressed) and free will.
However, this series was only meant to be a single post and has gone on long enough! I’ll turn to these topics in new individual posts.
I hope you’ve enjoyed coming with me on a speculative journey, and that even if you disagree fundamentally with my suggestions, I might have illustrated how modern scientific ideas actually open up these issues, rather than close them as some physicalists would have you believe.
Metaphysical Foundations (Pt3)
Click here for part one of this series.
Part two of this series was mostly concerned with physicalist assumptions in the free will debate. This third part will return to the wider metaphysical speculation.
Note that the descriptions here are merely the musings of an armchair blogger. They are not beliefs, they are suspicions. With that in mind, if you’re still interested, read on.
First let’s zoom right out and taking a look at what current scientific understanding might suggest in regards to the metaphysics and ontology of the Universe.
We’ll start with the idea that quantum mechanics suggests a fundamental ontological division: there is that which is physically instantiated and that which is mere potential. Yet at the same time it suggests that in both cases events therein have active roles in the world.
While considering these two parts of reality, it’s important to keep in mind that this division is illusory in respect to location and substance. There is only one Universe (the capital U denoting that I am including any possible multiverse theories under this heading), and the differences I’ll be suggesting are in scale, not in temporospatial location or primitive (i.e. non-emergent) substance.
The first ontological category I want to consider consists of all everyday objects, ranging from galaxies, planets, chairs and bacteria all the way down to molecules, atoms, protons and quarks. These are all things that are instantiated above the Planck scale in emergent spacetime and are therefore measurable and interpreted as objectively “real” in a physical sense. For that reason I'm going to refer to this macroworld as the Instantiat. It also consists of things that are less familiar as we move downward in scale toward that boundary, like briefly-instantiated virtual particles.
Within the Instantiat, both upward (reductive) processes and downward (non-reductive) processes determine the unfolding of events, and that unfolding is probabilistic in accordance with most interpretations of quantum mechanics. Therefore I am very much holding that determinism is ultimately false: the Universe is fundamentally indeterministic in nature. What we observe in the macroworld is a faux-determinism; the observed averaging-out of the enormous number of interactions involved in macro events.
The second category consists of that which is not instantiated: potential counterfactuals, or – from the point of view of the Instantiat – things that might have been or might be. This realm I'll refer to as the Potentiat. This is the Universe as considered at the sub-Planckian scale, and is best visualized holistically, each component of the whole a node in a single object: the all-possibilities-present block Universe.
The Potentiat has neither spatial nor temporal position. In physics terms it is background independent. Its “properties” are only its internal topological & geometric relations. This is in accordance with certain approaches to quantum gravity, like Loop Quantum Gravity. Both space and time emerge from the Potentiat in the same way that other fields and their particles do.
Now imagine looking down at this sub-Planckian microworld from the macroworld above with a bird’s eye view. From this high vantage point one can visualize the Potentiat below as a fuzzy sea of possibilities. The surface of the sea is an overlaid surface of fundamental physical fields at their zero point energy level, and thus fizzing with quantum fluctuations. It appears as a foam. And towering from this foamy surface are the soaring spines of the instantiated macroworld excitations in the fields that we call point particles. And indeed from our bird’s eye view, the peak of each wave excitation does indeed appear as a dimensionless point. Collections of these spiny structures swarm as they interact with each other, forming the skeletal structure of the things we perceive as macro objects.
The Potentiat is also where objects like photons and electrons seemingly “disappear to” when they are not interacting with each other. So when we say that a photon takes “every path” from emitter to detector in the famous double-slit experiment, it is in the Potentiat that all those paths consist. In terms of the mathematics of quantum mechanics, the Potentiat is modeled by the imaginary axis on the complex number plane (see here for my brief attempt at an explanation).
If one now imagines moving down to the nodes of sub-Planckian microworld, and then outwards to an external god’s eye view, one can now imagine the related geometric nodes of the Potentiat allowing for potential shapes that the Potentiat as a whole could obtain.
So the Instantiat can be seen as a single obtained state from the myriad possible Potentiat states. Its state is no more “real” than the Potentiat states, but it differs in that it has super-Planckian scale and the emergent spatiotemporal physicality that comes with that status.
Each possible shape that the Instantiat could take maps to a bitmap snapshot of a potential physical Universe state. In quantum mechanical terms each shape is a possible Universal Wavefunction. The Instantiat is the measurable universe we experience, and the Potentiat consists of all the counterfactual universes of a Many-Worlds-like Interpretation of quantum mechanics (MWI is normally considered a deterministic theory, but I will come to that later).
As the Instantiat shifts from state to state, each shift is a quantized moment of emergent Planck-time mapping to a different Universal wavefunction.
One can also imagine an overall shape for the Potentiat, which is an overlaid combination of all its possible shapes.
The relationship between the Potentiat, Instantiat and an evolving worldline might be illustrated with an analogy. Imagine a computer monitor. The Instantiat is a still image on the screen: a contingent configuration space of active and inactive pixels.
A worldline is an ordered sequence of these Instantiat pixel maps evolving according to a set of laws determined by the geometry of the Instantiat within the Potentiat (i.e. how individual shapes can and cannot transform within the Potentiat shape viewed as a whole). So in the analogy a worldline is a like a movie playing out on the screen.
The Potentiat on the other hand is a non-contingent configuration space of each and every possible configuration of pixels that could be displayed on the screen, all displayed simultaneously: a white noise.
So how does experiential consciousness and libertarian free will fit into this picture? Firstly let’s define libertarian free will in the context I’ve given:
“Libertarian free will consists in the possibility of non-random interventions in the otherwise faux-deterministic unfolding of the Instantiat.”
As previously suggested, a possible mechanism for free will may be something like a class of mainstream theorized phenomena that occur at the Planck scale boundary and result in seemingly ex nihilo particle creation. Under the picture described here, that boundary is the interface between the Potentiat and the Instantiat.
Is it possible that such a phenomenon might be exploited by the brain to tip the balance of probabilities against faux-determinism? We already know that evolutionary processes exploit Planck-scale quantum effects for their own ends, and I’d argue that it would be surprising if a system as complex as the brain does not do the same to some extent or other. For the purpose of speculation, let’s assume here that such a phenomenon exists and can be exploited by the brain.
Opponents might say that such physical effects are too small in scope to make a difference to events at the macro scale. This worry might be addressed by positing some process of amplification in the brain, much like the amplification of macro systems seen in the butterfly effect (although it would have to also be directed rather than chaotic).
There is a more worrisome problem still. While the unfolding of events in the Instantiat is completely determined, the influence that the Potentiat has on that unfolding via particle creation is by contrast completely random, and a random influence is not sufficient for free will.
However, note that we have not said that the Potentiat itself is random in its nature, only that the direct (i.e. non-emergent) influence it has on the Instantiat is random. Of the intrinsic nature of the Potentiat itself we know little or nothing.
So if free will consists in the directed and amplified ex-nihilo creation of matter with source Potentiat and destination Instantiat, then perhaps experiential consciousness consists either at the border between the Instantiat and Potentiat, or in the Potentiat itself.
If so, and if conscious deliberation is an aspect of experiential consciousness, then it would be the workings of the Potentiat itself that afford the definitions “free” (non-random and non-determined) and “will” (interventions) as defined above.
Of course, the definition “non-random and non-determined” is a tricky one. Are the two concepts not a binary affair with an excluded middle? Again, locating the experiential consciousness and its free will in (or partially within) the Potentiat may help here. Although the Potentiat's nature remains a completely open question, if we look at its effects like superposition, along with its nature as painted here, there is a suggestion that some logical principles like the excluded middle may not apply in the way that we are familiar with when observing the world at the scale of the Instantiat. Perhaps the dichotomy of determinacy and indeterminacy is – for a Potentiat-based will – a false one.
As for experiential consciousness itself, here we might turn to theories of consciousness outside the scientific mainstream but still metaphysically conceivable. Traditions like Panpsychism and the related Panprotoexperientialism (or Panprotopsychism) become attractive.
These theories place abstractions and imagination not only at the heart of conscious experience, but also at the foundational level of reality, although there are also formidable issues to overcome, like the combination or binding problem.
Next time I’ll turn to speculation on consciousness and the workings of the Potentiat in more detail.
For Part 4 Click Here.
Metaphysical Foundations (Pt2)
In the first part of this article I laid down some foundational principles to guide me as I move from philosophical beliefs and metaphysical suspicions to more speculative ideas on consciousness and free will.
Here, as I approach the tricky subject of phenomenal conscious experience (or as I’m about to call it from here on out, just plain consciousness), I carry with me two central beliefs. Firstly that the Universe is amenable to explanation by science (a weak form of positivism), and secondly that whatever we find at the smallest scale of reality should inform us about what will emerge at the larger scales (a weak form of reductionism).
A third uncontroversial belief is that our scientific understanding of foundational physics is incomplete.
Additionally, I adhere to three speculative but mainstream scientific ideas about how the Universe works.
Firstly, that our current best-fit theories of general relativity and quantum mechanics will eventually be superseded by a better-fit theory of quantum gravity. Secondly, that space and time will be shown to be emergent properties of an underlying sub-Planckian realm. And thirdly, that this entails that reality at the most fundamental level consists in a non-spatiotemporal all-possibilities-present block universe of some sort.
Whatever wild and wacky ideas I have in regards to consciousness, they must at least adhere to these principal beliefs and fit with the theories described that I suspect have merit. There'll be no god-smuggling or wizardry here, despite how it may appear to some.
I say this because some physicalists appear to regard any deviation from that position as veering inexorably towards belief in the supernatural, and I'll start this discussion with a negative thesis defending non-physicalist accounts of both consciousness and free will from that accusation.
The physicalists I'm talking about are not the likes of “A-Team captain” Dan Dennett or his fellow philosophers with similar opinions. Whatever their metaphysical preferences, all of them are no doubt well aware of the problems with each position.
Rather I’m mostly referring to various science popularisers (many of whom are scientists themselves) who have an anti-theist agenda. They would appear to want to stamp on any ideas of consciousness being anything but a part of our current physical ontology, or free will being anything but an illusion in a “deterministic” universe, because they believe those concepts to be central to theistic belief.
(I put determinism in scare quotes because the framing of the free will debate in terms of determinism is actually a good hundred years out of date. Since quantum mechanics came on the scene, the likelihood of any truly deterministic system is increasingly small, although the random element introduced doesn’t automatically help those on my side of the argument.)
Although I sympathise with the motives of these individuals, I am highly dubious that theistic belief plays any necessary role in either consciousness or free will. Both are metaphysical concepts that reach far wider than that, and neither appear to say anything about the existence or otherwise of any god or gods, or the supernatural generally.
This is because unlike those concepts, neither consciousness nor free will make any claim to be immune to the laws of physics. One may will to fly unaided, but not even theistic accounts deny that gravity will have the last laugh. The only theistic claim on consciousness and free will is that they are divinely granted, and that the latter is uniquely granted to humans.
And here lies the true problem, I think. Theism is seen as inexorably linked to anthropocentrism, and free will (and thus by association non-epiphenomenal consciousness) is seen as part of that discredited tradition. Yet in truth there are all sorts of abilities and traits that have uniquely evolved in humans, yet are not seen in the same light. One might as well blindly deny the existence of complex language because that uniquely human adaptation is portrayed as divinely granted in the Biblical tradition.
Just because we now know we have an insignificant place in the cosmos doesn’t mean that we are not contextually ‘special’. On this planet at least, we are indeed so, as evidenced by your reading and comprehending this article, when no other creature or system can. That’s not anthropocentrism, it’s just the facts.
Although some of these science popularisers might understand the philosophical arguments that show that physicalism is far from certain, I doubt many of the people who follow them do, because it’s something rarely discussed outside of philosophical literature. This gives the general impression to the thinking person that science has closed the door on these issues, which is certainly not the case. To claim otherwise – whatever the merit of the motivation – is nothing more than unscientific presumption.
So while some expect that free will can be explained using our current physical ontology alone, others like me suspect that we need to understand reality at the sub-Planckian scale before we get a grip on these concepts. However, there is actually no good evidence either way, and various problems for both options.
The evidence offered by most physicalists – that provided by neuroscience – is unsatisfactory in the extreme, because as worthwhile and useful as that venture is, it completely misses the target of this discussion. Neuroscience investigates the neural correlates of consciousness, but not phenomenal consciousness itself. In layman’s terms it investigates the brain, not the mind, and to claim the two are the same is simply begging the question.
If our subjective experience is somehow illusory, then neuroscience misses the target because the target is not there and physicalism is true. If on the other hand our subjective experience is non-illusory, then it misses the target because mind and brain are not identical.
Neuroscience is as far from explaining subjective conscious experience as the computing field of artificial intelligence is from creating it, and of course, the two problems are probably related. What seemingly magic ingredient do we need to add to make our computers ‘spark into life’, as it were? What clock speed, instructions per second, number of transistors, or logical connections will be required before a computer wakes up and says “who am I?”?
To move on to a positive thesis regarding consciousness and free will I will now need to make the assumption that they are not illusory. Of course, this in no way prejudices the debate as to whether that is actually the case, but it is necessary if one is to try to find a place for them in the natural order, and is justified by virtue of both being – at first sight at least – universally experienced phenomena.
I have already written a short piece on what I consider to be the most promising class of mainstream theoretical phenomena in physics and cosmology for finding a physical basis for non-epiphenomenal consciousness and free will. The particular examples I gave were seemingly ex nihilo particle creation at black hole event horizons in the form of Hawking radiation, and the same result in accelerated reference frames via the Unruh effect. Although the former seems completely off the table insofar as occurring within the brain, the latter might not be (although I am speaking from ignorance here rather than insight!). And more importantly (since the latter remains unlikely), the very existence of such a class of what I’ll call boundary phenomena suggests that there might be more examples to be found in the future.
Non-epiphenomenal consciousness is dependent on (phenomenal) consciousness, and free will is dependent on both of those and also on cognition, so it might seem odd to start with an idea on free will rather than, for example, qualia. But it seems best to start my search here, since if boundary phenomena are free will’s hook into the otherwise deterministic/random super-Planckian realm, then that would provide a clear signpost as to where we should look for the other phenomena on which it depends.
This is not a case of picking those phenomena to begin the search simply because they conveniently lead to areas I had already found ripe for picking at consciousness itself. Rather, I know of no other mainstream process that could account for the ‘uncaused causes’ required to sustain the notion of non-epiphenomenal consciousness and free will.
Additionally, situating some aspects of consciousness at this scale is no longer prohibited by quantum effects being too fragile and fleeting for use by warm wet macro-biological systems. As I’ve touched on elsewhere, the field of quantum biology is blossoming, both in areas already supported by experimental evidence like photosynthesis and bird navigation, and more speculatively with ideas on DNA mutation. If there is evolutionary advantage to be had by the brain utilizing the sub-Planckian realm via quantum effects, then there’s a good chance nature will have done so. I will explore ideas on what such an advantage might be later.
So the next step for me in this series of posts is to try to integrate, or find suitable locations (really just scales, but I will say more on this, also later) for the two sets of fundamental phenomena we take as constituting reality. The first are those identified by objective scientific data; i.e. the properties of fundamental particles, their fields, and their relata or governing laws. The second are those identified by the evidence of the subjects who directly experience them, with their virtue being assessed by their universality, i.e. cognition, phenomenal consciousness, and free will.
In the next part, I’ll begin to tentatively make suggestions for the above project. I’ll try to find any aspects of the phenomena I’ve identified that might provide constraints and clues as to where they might reside and how they might function and interact. At the same time I’ll be sure not to resort to using any theory or phenomenon outside of mainstream speculative science.
Wish me luck, I’ll need it!
For Part 3 Click Here.
[last updated 14 September 2013]
Metaphysical Foundations (Pt1)
In this post and the next I’m going to lay down some of the few metaphysical beliefs, suspicions and desires that I have, for the sake of clarity in my statements elsewhere, my own introspection, and an attempt to explain why so many of my other views are ever-shifting and non-committal.
It should be noted that not a single one of my beliefs, either presented here or elsewhere, is non-contingent, or a true article of faith. I don’t even believe one hundred percent that either the objective universe or my subjective experience is real, so from here on in, any non-qualified statement about belief translates to some extremely high, but non-certain, probability.
I’m not going to get this done in one sitting, so I’ll begin with some of the basics before moving to the more interesting stuff next time. These are the foundations on which any metaphysical positioning on my part needs to be based.
First off, I think it’s fair to say that I take a weakly positivist view. In other words, I think that most phenomena (certainly consciousness and free will if they exist) in the Universe (and I use the term here in the largest sense, inclusive of multiple universes, extra dimensions etc.) are in principle, and almost certainly in practice, functionally explicable by science. This includes phenomenal consciousness and (if they exist) qualia, non-epiphenomenal consciousness and incompatibilist free will. This is not to say that science is the only relevant sense in which phenomena can be understood, but it should at least be one way.
(As I have explained elsewhere, I take anything that is posited to exist outside of the universe (like the god of Abrahamic religions) to be beyond the scope of scientific investigation. I consider any speculation regarding such things to be futile, and any claimed knowledge of them, either objective via divine books (lol) or subjective through personal divine experience, to be misguided. On ideas where god is the Universe, or nature in some sense, I remain skeptical but open.)
Second is a belief that’s flipped over the years, as my own take on what it means has changed, but I think it’s fair to say in at least a weak sense that I’m a reductionist.
It’s not that I believe every property of every system at every scale of the universe is explicable by referring to the workings of the lowest level alone. So for instance, I have no problem with ideas of top-down causation. Rather I believe that ultimately, the foundations of the ‘reality’ we perceive and measure should inform us how and why the properties of that ‘reality’ emerge at all.
Thirdly, I believe that our scientific understanding of the world is incomplete. This is, of course, an uncontroversial statement, but the key element is that our foundational understanding is incomplete. We have no proper understanding of the underpinnings of the rest of our theories, because we have no proper understanding of how the universe works, or of what it consists, at scales smaller than the Planck length.
In other words, in all likelihood, our current best theories – quantum mechanics and general relativity – are wrong. Not wrong in the logical sense that two plus two doesn’t equal five, but wrong in the sense of being only approximations of at least one deeper theory, in the same way that classical Newtonian gravity is only an approximation of Einstein’s General Relativity.
This fact strongly colours the other beliefs and ideas that follow, because quantum mechanics and its unintuitive effects presumably arise from energy fields that are grounded at a scale below the Planck length.
When combined with even quite weak reductionism, it should be obvious that we cannot sensibly commit to any metaphysical position with anything even approaching certainty. In fact, I’d argue that any weighting at all amounts to little beyond personal preference. All metaphysical positions remain highly speculative, including the dominant one, physicalism.
So, given the above, how should we approach forming metaphysical opinions that aren’t simply biased reflections of our personal preferences and wider views?
Again I’d suggest it should be obvious that we start at the places of closest approximation to the target. Here the target is currently physics at the Planckian and sub-Planckian scales, so our closest approximations are quantum mechanics and general relativity, and to have an informed metaphysical opinion, one needs to think hard about what the observed phenomena associated with these theories might suggest.
With that in mind I’ll now move from my “beliefs” to mainstream foundational scientific theories and ideas that I strongly suspect are true.
When we talk of a theory of quantum gravity that will succeed both quantum mechanics and general relativity, what we are effectively talking about are the laws that govern, and the ontology that constitutes, the universe at the scale of the Planck length and below.
One thing considered a requirement of a fully-fledged theory of quantum gravity is that it be background independent. This is because general relativity will need to be derived from it, and general relativity (unlike quantum mechanics) is itself background independent. All this means is that, like general relativity with its spacetime manifold, the equations of quantum gravity need to completely capture the evolution of the systems they describe without reference to a coordinate system or ontological backdrop outside of the theory.
With Loop Quantum Gravity, background independence naturally falls out of the theory, but it is also achievable with string theory via the holographic principle in the form of AdS/CFT correspondence.
The upshot of this is that space and time are almost certainly emergent phenomena. At scales smaller than the Planck scale, they are expected to dissolve. Therefore we can say that at its foundations, the universe is both non-local and non-temporal, and of course, this is borne out in effects at larger scales, where we see that neither spatial nor temporal separation is a barrier to the strong correlation of entangled particles in quantum mechanics.
This opens up interesting and counterintuitive possibilities for the notion of causation in the world when we consider again (and we should never forget!) that we are talking about fundamental truths at the bottom of reality. For instance, Loop Quantum Gravity is based on spin foam (itself based on the topological causation intrinsic in Penrose‘s spin networks), and alternative notions of causation have been posited by the likes of Gregg Rosenberg in his Theory of Natural Individuals as an essential base for understanding how consciousness may fit into the natural world.
Considering a non-temporal source for the temporally-ordered macroscopic phenomena that emerge from that base also holds the possibility of accommodating multiverse predictions that come from the likes of the Many Worlds Interpretation of quantum mechanics, or the vast number of possible Calabi-Yau manifolds in String Theory. This is because without time, one can think of the sub-Planckian realm as an Eternalist block universe, with all possible pasts and futures present simultaneously.
As Rosenberg has indicated, once one conceives of this, the most basic question we can ask of the Universe – why is there something rather than nothing? – morphs into “why is there something rather than everything?”
So, to summarize, I believe in the following:
– a weak form of positivism in regards to consciousness and free will
– a weak form of reductionism
– foundational scientific incompleteness
And I strongly suspect the following are true:
– a background independent theory of quantum gravity will replace QM and GR
– space and time are emergent phenomena
– the Universe is a non-spatiotemporal all-possibilities-present block universe
Ultimately, in examining our closest approximations to foundational truths, one is rendered powerless to deny that all we think we ‘know’ is challenged. This applies not only to our current physical ontology, but also the notion of causation itself. With that in mind, I find it quite flabbergasting when otherwise rational and articulate thinkers declare that the basics of our theories are in some sense complete. In fact, the basics are the very thing we are missing.
That’s all for now folks, but next time out I’m going to get more speculative, as armed with the foundational beliefs and suspicions above, I approach the seeming conundrum of consciousness.
For Part 2 click here.
[last updated 14 September 2013]
A defense of non-epiphenomenal consciousness and free will.
[NOTE – this is re-post from the original incarnation of this blog.]
The existence of non-epiphenomenal consciousness and free will are two different, but related issues. Both are disputed by those of a physicalist persuasion, and both find themselves lacking any place within our current scientific understanding of the world. Indeed, they not only have no place, but also run contrary to a key precept of modern science: that there is no such thing as an uncaused cause.
In the classical Newtonian picture of physics, the processes that lead to a particular brain state are governed by deterministic laws of nature. If in principle we could perfectly describe a starting brain state, then by extrapolation using those laws, we could predict with certainty a subsequent brain state. Quantum mechanics overthrows this view, revealing that fundamentally, all processes are probabilistic in nature. Instead of predicting with certainty, we only have a probability that one result will win out over another (even if in macroscopic systems there are so many quantum elements that the law of averages means the probability is very high indeed). This introduces a random element to the possible evolution of systems over time, but doesn’t necessarily help with defending free will. A random result is not necessarily a free one.
This fundamentally random, but practicably deterministic state of affairs is what we observe in every area of nature we’ve ever cared to study. Physical processes alone are sufficient to explain the evolution of systems in time. So what role could mental processes have if they exist at all? And even if there is a role, by what conceivable mechanism could a mental process affect a physical process? This is the problem of defending non-epiphenomenal consciousness.
Beyond questions of the efficacy of conscious systems looms the even more unlikely notion of traditional incompatibilist free will; a concept seemingly so contrary to what we know about nature that most philosophers and scientists appear to have abandoned it altogether. And it’s not difficult to see why. The suggestion appears to be that not only does the mind play a role in the evolution of brain states, but that it can also derail the chain of cause and effect by somehow tipping the probabilities in favour of what would otherwise be a vanishingly unlikely alternative.
Given those facts, how can defenders of causally efficacious mind and free will construct a believable argument for their existence?
To be taken seriously, both non-epiphenomenal consciousness and free will are desperately in need of a viable mechanism. Without it, both are rightly open to attack as being only explainable by supernatural forces. And to be viable, I would argue that any proposed mechanism would have to both conform to our current best-fit scientific theories and be robust enough to be considered mainstream.
Some may claim that such questions are outside the scope of science altogether, being that evidence for their existence is purely subjective and therefore unverifiable by the scientific method. With most such phenomena I would agree. For instance, believers in gods may try to claim that their experience of the divine counts as evidence, while others use subjective experience to underpin all sorts of dubious pseudoscience and quackery. So right away, I should make it clear that I consider non-epiphenomenal consciousness and free will worthy of explanation for one reason alone: they are – at first blush at least – subjectively universal phenomena. Even the most ardent physicalist must admit that without further reflection, we appear to have both. That of course is not proof – appearance often misrepresents reality – but it is I think, at least reason to investigate as best we can with an open mind.
An axiom attributed to the ancient Greek philosopher Parmenides, and later made famous in the modern Western world by William Shakespeare in King Lear, says that “nothing comes from nothing”. The antithesis is the idea of creation ex nihilo, or “out of nothing”. The gods of many religious traditions are supposed to have pulled off such a trick at the beginning of the universe, and – unfortunately for defenders of non-epiphenomenal consciousness and free will – it’s a trick that agents seemingly also need to perform every time they exercise free will. They have to introduce or create some new event that is neither random nor wholly dependent on prior physical causes.
However, modern science has put that axiom under pressure, leading us to question whether it really is such a self-evident truth. It’s not that science has shown that matter or energy can be created ex nihilo (indeed, that would violate another key idea in physics; that of the conservation of energy enshrined in the first law of thermodynamics) but rather that modern science now suggests that the very concept of nothingness may be meaningless.
The quantum fields that make up the universe, such as the electromagnetic field and the Higgs field, all have a ground state – a lowest possible energy configuration – slightly above zero, making them subject to quantum fluctuations. This is the case even in a complete vacuum, hence the name vacuum energy, although the property as applied to each field is known as zero-point energy. But a vacuum is the only physical (i.e. non-abstract) definition of nothingness that makes sense within the bounds of the universe, so physically speaking there is no such thing as nothing.
Because excitations in quantum fields are one and the same as point particles in the standard model, this vacuum energy manifests as the creation of virtual particle/antiparticle pairs that briefly pop into existence and immediately annihilate each other. This fact applies not only to the vacuum or to space, but to every part of the universe. This vacuum energy can be thought of like the fizzing surface of a liquid, with each bubble being that brief pair of particles that burst into existence only to almost immediately pop out of it again, although it is important to note that this energy is usually both unmeasurable and unavailable to macroscopic processes – it is not some mystical energy field one can use to justify belief in dubious phenomena!
In technical terms, the time–energy form of Heisenberg’s uncertainty principle grants these pairs only a vanishingly brief existence, during which they remain unmeasurable and unsubstantiated in the physical world. Hence the label virtual particles, as opposed to the actual particles we can measure.
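To put a number on “vanishingly brief”, here is a back-of-envelope sketch (my own illustration, not part of the original argument, using standard CODATA constants) of the lifetime the uncertainty relation allows a virtual electron/positron pair:

```python
# Back-of-envelope: the lifetime the time-energy uncertainty relation,
# delta_t ~ hbar / (2 * delta_E), allows a virtual electron/positron pair.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837e-31     # electron mass, kg
C = 2.99792458e8        # speed of light, m/s
PLANCK_TIME = 5.39e-44  # s, for comparison

delta_E = 2 * M_E * C**2        # energy "borrowed" for the pair, ~1.022 MeV
delta_t = HBAR / (2 * delta_E)  # allowed lifetime

print(f"borrowed energy : {delta_E:.2e} J")
print(f"allowed lifetime: {delta_t:.2e} s")              # ~3.2e-22 s
print(f"in Planck times : {delta_t / PLANCK_TIME:.1e}")  # ~6e21
# Vanishingly brief on any human scale, yet some 21 orders of magnitude
# longer than the Planck time.
```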
However, just because they are virtual, one shouldn’t imagine that they play no role in the physical world. Not only have experiments shown them to be most likely responsible for proven phenomena such as spontaneous emission, the Casimir effect, and the Lamb shift, but they are also generally thought to mediate the interaction of real particles in quantum field theory; for example, the exchange of virtual photons underlies the interaction of electrons in electromagnetism.
The only way these virtual particles can achieve actualisation and gain any kind of permanence is to draw on the energy in the surrounding environment, whilst avoiding mutual annihilation.
One situation in which this is thought to be possible is in the extreme environment of a black hole. These gravitational sink-holes bend space so severely that even the fastest moving objects in the universe – photons of light – do not have sufficient escape velocity to avoid falling into their clutches. This results in the formation of a boundary, or event horizon, from which no matter or energy can escape.
Now consider a particle/antiparticle pair that forms at the event horizon of a black hole. In simple terms, if one of the pair forms inside the event horizon and the other on the outside, then they will not be able to interact and annihilate, and, drawing on the gravitational energy of the black hole, they actualise. An observer outside the horizon thus witnesses the emission of particles as radiation, while the infalling partner carries the energy debt into the hole. This is known as Hawking radiation, after physicist Stephen Hawking, who first conjectured its existence.
As previously stated, this isn’t really ex nihilo creation of matter or energy, because the creation process is driven by the intrinsic zero-point energy of quantum fields, plus the energy of the surrounding system. Thus the principle of conservation of energy also means that the system involved must lose some of its own energy, or in the case of black holes the equivalent mass. In this way, black holes starved of infalling matter are thought to slowly but surely evaporate.
Another consequence is that the less mass a black hole has, the hotter its radiation and the more massive the particles it can emit. A large black hole radiates almost nothing but low-energy photons, whereas the hypothesized micro black holes, produced primordially in the early universe, would radiate far more fiercely, their Hawking radiation eventually including massive particles like electron/positron pairs as well as photons, which are massless and their own antiparticles. (Note that in normal black holes, Hawking radiation is dominated by photons.)
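To make the mass–temperature trade-off concrete, here is a quick sketch (my own, using the standard textbook formulas rather than anything from the post) of the Hawking temperature and evaporation time at two very different masses:

```python
import math

# Hawking temperature and evaporation time of a Schwarzschild black hole,
# illustrating that smaller holes are hotter and die faster.

HBAR = 1.054571817e-34  # J*s
C = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 kg^-1 s^-2
K_B = 1.380649e-23      # J/K
YEAR = 3.156e7          # s

def hawking_temperature(mass_kg):
    """T = hbar c^3 / (8 pi G M k_B), in kelvin."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time(mass_kg):
    """t = 5120 pi G^2 M^3 / (hbar c^4), in seconds."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

for label, mass in [("solar-mass hole   ", 1.989e30),
                    ("1e11 kg micro hole", 1.0e11)]:
    t_k = hawking_temperature(mass)
    t_ev = evaporation_time(mass) / YEAR
    print(f"{label}: T = {t_k:.2e} K, evaporates in ~{t_ev:.1e} yr")
# Solar mass: ~6e-8 K (colder than the CMB), ~2e67 years to evaporate.
# 1e11 kg: ~1e12 K, gone in ~3e9 years -- within the universe's lifetime.
```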
But black holes are not the only situation where this type of particle creation can occur. In theory, any energetic phenomenon that forms an event horizon can perform the same trick.
One such phenomenon is known as the Unruh effect, and it is a logical consequence of Einstein’s realisation that the gravitational force is equivalent to acceleration. Here, from the point of view of an observer in the same relativistic reference frame as an accelerating system, the vacuum appears as a radiation bath in that internal frame, as particle/antiparticle pairs actualise before annihilating. And just as in the black hole case, because an accelerating system creates an event horizon (the reason for which is beyond the scope of this piece), the equivalent of Hawking radiation is also witnessed by observers outside that horizon.
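For a sense of scale, the standard Unruh temperature formula (again a sketch of my own, not part of the original argument) shows how violent the acceleration must be before the bath is even measurable:

```python
import math

# The Unruh temperature, T = hbar * a / (2 * pi * c * k_B): a scale check
# on how extreme an acceleration is needed for a noticeable radiation bath.

HBAR = 1.054571817e-34  # J*s
C = 2.99792458e8        # m/s
K_B = 1.380649e-23      # J/K

def unruh_temperature(acceleration):
    """Temperature of the vacuum seen by a uniformly accelerating observer."""
    return HBAR * acceleration / (2 * math.pi * C * K_B)

def acceleration_for(temperature_k):
    """Acceleration required to see a bath at the given temperature."""
    return 2 * math.pi * C * K_B * temperature_k / HBAR

print(f"at 1 g (9.81 m/s^2): {unruh_temperature(9.81):.1e} K")    # ~4e-20 K
print(f"for a 1 K bath     : {acceleration_for(1.0):.1e} m/s^2")  # ~2.5e20
```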
In both examples, we have the formation of an event horizon creating a one-way barrier between an enclosed volume of space (the interior of the black hole and the relativistic reference frame) and the rest of the universe.
So, returning to consciousness, we have – superficially at least – an interesting parallel. In both, the external environment can influence the enclosed internal worlds via the flow of information into them, but from within those enclosed internal worlds one is only able to observe the external universe rather than interact directly with it. However, via a phenomenon such as Hawking radiation, that internal world is able to exert a physical influence back on its environment. By analogy, these phenomena correspond to a mechanism for non-epiphenomenal consciousness.
Now, I’m certainly not suggesting that consciousness resides in microscopic black holes – I’ll leave that to Romulan starships! Nor am I saying that the Unruh effect is responsible. I simply don’t have enough knowledge of physics or mathematics to surmise or calculate how small objects at short distances may or may not produce the acceleration necessary or an event horizon local enough. And I strongly doubt there is anything in mainstream neuroscience to offer as a framework for such effects in the brain.
I’m only suggesting that such seemingly ex nihilo creation would, not so long ago, have been thought impossible without supernatural intervention in the world, but that zero-point energy opens up the possibility of a variety of effects that might – at least conceivably – be exploited by evolved systems.
In that speculative light, the mere possibility of horizon-induced particle creation in connection with consciousness and the brain would provide a high-level explanatory mechanism for non-epiphenomenal consciousness. And if such creation could be directed and (perhaps chaotically) amplified, one might see how such internally produced nudges might pave the way for free will.
At such low energies, any such creation would have to be in the form of massless particles like photons, and whilst this might bring its own problems in accounting for how they might deliver the needed nudges to existing processes, on the other hand, such effects should in principle be measurable and therefore testable. It should be noted that there is already some speculation about the role of photons in the brain, though it should be stressed that this is not mainstream.
Of course, even if there is something in my speculation, many issues might remain unresolved, such as the hard problem of consciousness and how the mental domain might manage to muster and direct its will; not to mention under what ontology and laws consciousness itself might operate internally.
Also, there is danger here in stepping too far with speculative ideas. Scientists and rational thinkers are wary of any non-physicalist speculation on consciousness, I suspect because to do so opens the door to all sorts of religious and pseudoscientific nonsense that is neither objectively testable nor even subjectively universal. So it’s important not to speculate more than a single step beyond our current knowledge, and to do so without any preconceptions of where one is heading.
But with that caution in mind, I still think it’s fair to say that this class of phenomena in physics at least shines a light into the domains in which we should search for clues. And even if such speculation proves fruitless, it serves to illustrate how science continually surprises us with unexpected phenomena. So while admitting that the existence of non-epiphenomenal consciousness and free will remain improbable, we should not lose hope. Closing the door on what are our most universal and all-encompassing experiences of reality – that our minds interact with and affect the physical world – is premature.
To be free again? How free will is not dead yet.
The New Scientist website is running an interesting article on a recent experiment that casts further doubt on the claim that free will does not exist. You can read the original article here.
If the possibility that your decisions are not free comes as a shock to you, it’s worth considering that from a purely scientific point of view, our current understanding of how the universe works at a fundamental level leaves absolutely no room for anybody or anything to be a self-creating causal agent.
The reason for this is that the known laws of classical physics are deterministic, and even if you discard these in favour of the more fundamental laws of quantum physics, you find that the only non-deterministic part of the theory – the collapse of the wave function whose evolution the Schrödinger equation otherwise describes deterministically – only throws a component of complete randomness into the mix, and frankly, complete randomness is no better for free will than complete determinism.
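A toy illustration of that randomness (mine, not from the article): the Born rule turns squared amplitudes into outcome frequencies, and nothing in it leaves room for choosing:

```python
import numpy as np

# Born rule toy: outcomes of a two-state measurement are drawn at random
# with probabilities given by the squared amplitudes. Nothing here chooses.

amplitudes = np.array([0.6, 0.8j])        # normalized: 0.36 + 0.64 = 1
probabilities = np.abs(amplitudes) ** 2   # [0.36, 0.64]

rng = np.random.default_rng(0)
outcomes = rng.choice(["up", "down"], size=10_000, p=probabilities)

for state in ("up", "down"):
    print(state, float(np.mean(outcomes == state)))  # ~0.36 / ~0.64
# Each run is pure chance constrained by fixed statistics -- randomness,
# but nothing an agent could steer.
```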
So back in the early 1980s, when Benjamin Libet studied the brain during free decision making and found correlated activity before the volunteers reported they had made the conscious decision, some interpreted this as good evidence that free will was indeed just an illusion created by the mind.
That’s not to say that huge numbers of scientists necessarily take that view. The majority of scientists are practical people who simply follow the evidence and avoid interpreting their results too much, an approach that some refer to as “shut up and calculate”. Most are content to leave any metaphysical speculation to philosophers and armchair commentators like myself.
But among philosophers themselves, there are probably many of a physicalist persuasion who previously looked to Libet’s results as supporting their view, and that line of evidence looks like it’s beginning to disappear. This includes those of the compatibilist view, which for me is the same as saying there’s no free will at all.
Personally, I believe that the hypothesis that free will is an illusion is an extraordinary one in the face of all our subjective experience to the contrary, and that, to quote Carl Sagan, “extraordinary claims require extraordinary evidence”.
For me, free will should be assumed to exist, even if that means we need to accept that something major is missing or incorrect in our understanding of fundamental physics. And since consciousness itself is still far from being fully integrated into our understanding, there is still plenty of room.
The whole subject of consciousness, free will, and fundamental physics fascinates me, and there’s a whole host of literature both by philosophers who deny free will exists and by those who would like to preserve it. I see this as philosophy at its most useful: probing the edge of our scientific understanding, and suggesting what logic dictates could be possible and what could not.
I hope to write a lot more on this subject in more detail when I have time, hence the lack of links in that last paragraph. So hang on to your armchairs! ;)
PHYSICS 14N: Quantum Information: Visions and Emerging Technologies
Terms: Spr | Units: 3 | UG Reqs: WAY-FR, WAY-SMA | Grading: Letter or Credit/No Credit
PHYSICS 17: Black Holes and Extreme Astrophysics
PHYSICS 25: Modern Physics
How do the discoveries since the dawn of the 20th century impact our understanding of 21st-century physics? This course introduces the foundations of modern physics: Einstein's theory of special relativity and quantum mechanics. Combining the language of physics with tools from algebra and trigonometry, students gain insights into how the universe works on both the smallest and largest scales. Topics may include atomic, molecular, and laser physics; semiconductors; elementary particles and the fundamental forces; nuclear physics (fission, fusion, and radioactivity); astrophysics and cosmology (the contents and evolution of the universe). Emphasis on applications of modern physics in everyday life, progress made in our understanding of the universe, and open questions that are the subject of active research. Physical understanding fostered by peer interaction and demonstrations in lecture, and interactive group problem solving in discussion sections. Prerequisite: PHYSICS 23 or PHYSICS 23S.
Instructors: Irwin, K. (PI)
PHYSICS 26: Modern Physics Laboratory
Guided hands-on and simulation-based exploration of concepts in modern physics, including special relativity, quantum mechanics and nuclear physics with an emphasis on student predictions, observations and explanations. Pre- or corequisite: PHYSICS 25.
Instructors: Irwin, K. (PI)
PHYSICS 43: Electricity and Magnetism
What is electricity? What is magnetism? How are they related? How do these phenomena manifest themselves in the physical world? The theory of electricity and magnetism, as codified by Maxwell's equations, underlies much of the observable universe. Students develop both conceptual and quantitative knowledge of this theory. Topics include: electrostatics; magnetostatics; simple AC and DC circuits involving capacitors, inductors, and resistors; integral form of Maxwell's equations; electromagnetic waves. Principles illustrated in the context of modern technologies. Broader scientific questions addressed include: How do physical theories evolve? What is the interplay between basic physical theories and associated technologies? Discussions based on the language of mathematics, particularly differential and integral calculus, and vectors. Physical understanding fostered by peer interaction and demonstrations in lecture, and discussion sections based on interactive group problem solving. Prerequisite: PHYSICS 41 or equivalent. MATH 21 or MATH 51 or CME 100 or equivalent. Recommended corequisite: MATH 52 or CME 102.
Instructors: Kasevich, M. (PI)
PHYSICS 43A: Electricity and Magnetism: Concepts, Calculations and Context
Additional assistance and applications for Physics 43. In-class problems in physics and engineering. Exercises in calculations of electric and magnetic forces and fields to reinforce concepts and techniques; calculations involving inductors, transformers, AC circuits, motors and generators. Highly recommended for students with limited or no high school physics or calculus. Corequisite: PHYSICS 43-34 or PHYSICS 43-35; Prerequisite: application at https://stanforduniversity.qualtrics.com/jfe/form/SV_eIGPlvxyNxdziXX .
Instructors: Nanavati, C. (PI)
PHYSICS 44: Electricity and Magnetism Lab
Hands-on exploration of concepts in electricity, magnetism, and circuits. Introduction to multimeters, function generators, oscilloscopes, and graphing techniques. Pre- or corequisite: PHYSICS 43.
Instructors: Kasevich, M. (PI)
PHYSICS 65: Quantum and Thermal Physics
(Third in a three-part advanced freshman physics series: PHYSICS 61, PHYSICS 63, PHYSICS 65.) This course introduces the foundations of quantum and statistical mechanics for students with a strong high school mathematics and physics background, who are contemplating a major in Physics or Engineering Physics, or are interested in a rigorous treatment of physics. Quantum mechanics: atoms, electrons, nuclei. Quantization of light, Planck's constant. Photoelectric effect, Compton and Bragg scattering. Bohr model, atomic spectra. Matter waves, wave packets, interference. Fourier analysis and transforms, Heisenberg uncertainty relationships. Schrödinger equation, eigenfunctions and eigenvalues. Particle-in-a-box, simple harmonic oscillator, barrier penetration, tunneling, WKB and approximate solutions. Time-dependent and multi-dimensional solution concepts. Coulomb potential and hydrogen atom structure. Thermodynamics and statistical mechanics: ideal gas, equipartition, heat capacity. Probability, counting states, entropy, equilibrium, chemical potential. Laws of thermodynamics. Cycles, heat engines, free energy. Partition function, Boltzmann statistics, Maxwell speed distribution, ideal gas in a box, Einstein model. Quantum statistical mechanics: classical vs. quantum distribution functions, fermions vs. bosons. Prerequisites: PHYSICS 61 & PHYSICS 63. Pre- or corequisite: MATH 53 or MATH 63CM or MATH 63DM.
Terms: Spr | Units: 4 | UG Reqs: GER: DB-NatSci, WAY-FR, WAY-SMA | Grading: Letter or Credit/No Credit
Instructors: Gratta, G. (PI)
PHYSICS 67: Introduction to Laboratory Physics
Methods of experimental design, data collection and analysis, statistics, and curve fitting in a laboratory setting. Experiments drawn from electronics, optics, heat, and modern physics. Lecture plus laboratory format. Required for PHYSICS 60 series Physics and Engineering Physics majors; recommended, in place of PHYSICS 44, for PHYSICS 40 series students who intend to major in Physics or Engineering Physics. Pre- or corequisite: PHYSICS 65 or PHYSICS 43.
Instructors: Pam, R. (PI)
PHYSICS 83N: Physics in the 21st Century
Preference to freshmen. Current topics at the frontier of modern physics. This course provides an in-depth examination of two of the biggest physics discoveries of the 21st century: that of the Higgs boson and Dark Energy. Through studying these discoveries we will explore the big questions driving modern particle physics, the study of nature's most fundamental pieces, and cosmology, the study of the evolution and nature of the universe. Questions such as: What is the universe made of? What are the most fundamental particles and how do they interact with each other? What can we learn about the history of the universe and what does it tell us about its future? We will learn about the tools scientists use to study these questions such as the Large Hadron Collider and the Hubble Space Telescope. We will also learn to convey these complex topics in engaging and diverse terms to the general public through writing and reading assignments, oral presentations, and multimedia projects. The syllabus includes a tour of SLAC, the site of many major 20th century particle discoveries, and a virtual visit to the control room of the ATLAS experiment at CERN, amongst other activities. No prior knowledge of physics is necessary; all voices are welcome to contribute to the discussion about these big ideas. Learning Goals: By the end of the quarter you will be able to explain the major questions that drive particle physics and cosmology to your friends and peers. You will understand how scientists study the impossibly small and impossibly large and be able to convey this knowledge in clear and concise terms.
Instructors: Tompkins, L. (PI)
(1/3) “Man fears time…”
I have wondered if Imhotep had some hidden meaning in the pyramid. Little is recorded about him, but he lived during the Old Kingdom of Egypt under Djoser. He seems to have been born a commoner, and to have become renowned as a great architect, sage, and perhaps the first physician in the world. His monument to King Djoser, the Step Pyramid, is the archetype for all pyramids that followed and the oldest stone structure in the world.
Pyramids are uniquely stable structures. They have survived many thousands of years and all the storms and earthquakes that time could bring. The mass at the peak of the pyramid, we may say, is in a high-energy state. It is supported by progressively larger masses that reside at lower points of the gravitational potential. This is the thermodynamic analogy I mean to draw, and I speculate:
That Imhotep, born a commoner and working at a time when there may still have been living memory of an Egypt much less advanced, and therefore much less hierarchical, may have understood on some deep level the changes in social structure that were necessary to make the pyramid possible. Because pyramids are built by slaves, and laborers, and craftsmen of many kinds. Hunters and gatherers cannot build them, and pastoral nomads can’t either. Only a culture with advanced agriculture, and the division of labor agriculture makes possible, can build structures like this.
The great works of the ancient world were not possible without slavery, and systems of exploitation and power rise to reflect the skyward structures themselves. Then, in the earliest years of that civilization, Imhotep may have understood in some way that Egypt had passed into a new thermodynamic mode of existence. A mode in which energy is drawn up through the roots of plants, and into the people who plant and harvest them and other resources, to flow up and be concentrated into progressively more privileged classes, terminating finally in the vested power, privilege, and abundance of the King.
There is a clear connection here to the flows of energy in ecosystems. Apex predators eat and scavenge whatever they can. A lion will steal a kill from a hyena. The grazing animals are fed ultimately by the sun. People deny this similarity because they don’t want to know they are eating each other.
In a way, modern civilization seems to be an extension of an ecological structure that humans used to be embedded in, or maybe civilization has superseded that ecology completely. We were prey once. Then we became apex predators. We have advanced our civilization by replacing most of the world’s fauna with our own biomass in both livestock and human chattel. And the more developed a society tries to become, the more material must be put underneath in the base. Egypt and other ancient civilizations could only advance so far, but the modern western imperium extends across the entire globe, purchasing its privileges with sweatshops and child miners.
There are observations to make and questions to ask. I won’t have a proper response to all of these. Among the most important:
• The poor don’t need the wealthy, but the wealthy do need the poor. The world is right-side up, not upside down as Ayn Rand would have it.
• Scientific discoveries are typically made by the upper classes. They benefit everyone, but not uniformly. Most of the benefit reaches the upper classes first, or exclusively.
• People are interdependent, and though it’s not clear that the quality of life in post-agricultural civilization is better in absolute terms, most people today would not be alive without modern technology. Is it forbidden by the laws of physics to have this modernity without its gross inequality?
• Are there scaling principles at work such that the thermodynamics of current societies could be used to predict an upper limit to any society’s development, assuming the total flux of energy on the surface of the Earth bounds its base? (A rough sketch of the relevant numbers follows this list.)
• There are limits on an empire’s geographic span which seem to determine how wide the base can become, and these limits are primarily technological, viz. communications technology may have played a dominant role in determining the size of social structures in each era of history. What effects are due to new technologies, not just in the size, but in the topology of society?
• If preagricultural societies exist in a “first” thermodynamic mode, and post-agricultural societies are in a second mode, does a third mode exist? If I have anything worthwhile on this question it will wait until part 3 of these posts.
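On that energy-flux bound, here is a rough order-of-magnitude sketch (my own round figures, not measurements; the ~19 TW value for human primary power use is an estimate):

```python
import math

# The "base" of the thermodynamic pyramid: sunlight intercepted by Earth
# versus humanity's current primary power use.

SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere
EARTH_RADIUS = 6.371e6    # m
HUMAN_POWER_USE = 1.9e13  # W; ~19 TW, an estimate

intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2  # ~1.7e17 W

print(f"solar power intercepted: {intercepted:.2e} W")
print(f"human primary power use: {HUMAN_POWER_USE:.2e} W")
print(f"headroom factor: {intercepted / HUMAN_POWER_USE:,.0f}x")
# ~9,000x: civilization currently runs on roughly a hundredth of a percent
# of the raw sunlight striking the planet, so if there is a near-term
# ceiling, it is organizational rather than set by the solar flux itself.
```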
It has always bothered me that Americans never seem to understand their relationship to the rest of the world. It is the nature of privilege never to recognize itself, but it is absurd to see them so oblivious to how they in fact depend on the poverty in the greater part of the developing world and the working class in their own country.
I can’t go to the grocery store without being reminded of this. My hands aren’t the first to touch the onion and the tomatoes in the produce section. You see, it is apparently very hard to build a machine that can cut the stem of an onion and leave the bulb intact. It’s necessary to have people, sometimes children, do this by hand. They crawl through the field on their hands and knees under the summer sun and cut the stems with a knife. In the western United States of course, these people are mostly immigrants from Central and South America and their children.
The pyramid of Djoser is a reification of the Egyptian social structure of power and exploitation that was necessary to construct it. And like the pyramid, it may be that the social structure is just as long-lived, and reflects some underlying thermodynamic stability. There is an irony, then, in the old Arab proverb:
Man fears time, but time fears the pyramids.
Do we really know how to build pyramids, or was that knowledge buried with Imhotep and hidden with his tomb? Have we learned the secrets hidden in our own societies? Do we know what is necessary for a long-lived, sustainable, self-sufficient society?
Does it bear repeating that the Step Pyramid is a tomb? And how long will it last, compared to the sand dunes that flow across the eons of the desert?
Some of these things I have understood, if dimly, since I was a 12 year old crawling through an onion field with a rusty knife in my hand. I don’t know if I’ve made much progress in my understanding. I was actually working on a post about Elon Musk’s Mars Colony, when I realized I hadn’t written this yet.
(2/3) The eugenics of a Mars Colony.
(3/3) In Extremis. (soon)
You are a sociopath.
Why is war the kind of event that can put two brothers on opposite sides of a conflict?
It did seem ironic to me that President Obama would decry the attitude that Americans have adopted toward events like this. Just two days after the Umpqua Community College shooting, a US Air Force AC-130 gunship attacked a hospital in Kunduz, Afghanistan run by Doctors Without Borders, killing 22 (1). Obama in this case had little to offer but a weak apology and no public comment. It seems the killing of foreigners is routine, and Americans’ response to the killing of foreigners is routine.
Why do foreign lives matter so little to Americans? Americans are not special of course, people of every country value the lives of their own over others. Why do black lives matter so little to most whites in the United States, and violence on the part of the police against black youth so readily justified or excused in the minds of the majority?
We like to flatter ourselves with the notion that we are very social and empathetic creatures, but there are important caveats (2). Quite simply, empathizing with others requires us to know them. We know our family members best, our friends and coworkers second.
Imagine that everyone resides in a social network, with ties that extend like concentric circles out to neighbors or friends, then out to friends of friends, etc. until the world population is reached (3). People close-in are typically in the “in-group,” and benefit the most from our prosocial tendencies.
What is the problem with this? It is very subtle and I’m not sure there is a solution. The increasing globalization of the world seems to have made this effect worse:
The consequences of your actions can reach many more people than your empathy can. For the vast majority of the people in the world in whose lives you have some influence, you are a sociopath.
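A toy model makes the mismatch vivid (mine; the average-degree figure K is an assumption, not data): reach compounds with every degree of separation, while the empathy budget, something like Dunbar’s ~150, stays flat:

```python
# Toy model of social reach versus the fixed capacity for empathy.

K = 44          # assumed average acquaintances per person
DUNBAR = 150    # rough cognitive limit on stable relationships

reach, frontier = 0, 1
for degree in range(1, 7):                     # out to six degrees
    frontier *= K if degree == 1 else (K - 1)  # new people at this ring
    reach += frontier
    print(f"degree {degree}: ~{reach:.2e} people within range")

print(f"empathy budget: still ~{DUNBAR} people")
# By six degrees the toy reach is of the order of the world population,
# while the circle we can genuinely empathize with has not grown at all.
```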
We work in concert to create the systems that individuals live in. Our actions are individual, but the system is collective. Above I think I have only put into personal terms what people usually call institutional or systemic effects. What most of you don’t seem to understand is that you are a part of a system. It is not something you can externalize; it’s easy to say something is the fault of governments, leaders, or corporations, but that lets us cop out of our personal integration with these constructions and our role in legitimating the actions of leaders.
But how much responsibility can we each hold for this? I’ve written here before about the subtle ways individual behavior is integrated, and emerges into macroscopic behavior that is not predictable from, and is even sometimes directly opposed to, individual intention (4). There is something incredibly important in what we don’t understand about how a corporation is capable of mass depraved-indifference murder (5), and how citizens’ taxes can pay for drone strikes. And there is something we are not understanding about how wars occur.
Most people are not murderers. Most people don’t want to start wars, but often believe when the fighting starts that it must occur. Even most soldiers do not want to kill anyone. But the social system’s behavior feeds back down to the personal scale, putting human beings onto battlefields, and into conflicts they have no inherent desire for. At least, not until they form a connection to their fellow soldiers.
Yet the world has entered a new regime of interconnection. The capacity of people to move about the globe, and to interact across arbitrary distances, has implications for the consequences of an individual’s actions (6). There is a kind of “nonlocality” in our actions that was not possible before. It allows us to have a more direct impact on the lives of people far away geographically, but also perhaps to empathize where we could not previously. I’m not sure we can know where this will lead.
1. Kunduz articles at the Guardian , New York Times.
2. A prior post on Race and Human Groups.
3. Social networks have a well documented small-world property. It’s never more than 6 degrees to Kevin Bacon. For a “social distance” metric defined in this way, it seems the capacity for empathy drops as quickly as the number of people reached expands.
4. Prior posts: The moral implications of nonlinearity and emergence., The multiplicity of agency.
5. The 2013 collapse of a garment factory in Dhaka is just one example that came immediately to mind: http://www.bbc.com/news/world-asia-22476774. You may be reminded of the 2003 doc The Corporation.
There is an aspect of this I will try to post about later: there are segments of the population that are not well integrated in this globalized social network. They are disproportionately older and are being left behind in a way.
The “great man” of history.
Donald Trump is a weak candidate: he is a fool, lacks charisma, and is a poor strategist in politics and everything else. One has to wonder how this feckless clown could have possibly met with such success.
There have been other people like this before: Pat Buchanan is the first that comes to mind, but also Barry Goldwater. What is it about Donald Trump?
There is nothing about Donald Trump. Society has always produced people like him, and always will, but what is special is the time we find ourselves in. Apparently the United States is ready for a return to nationalism and nativism, and it is on these currents he has sailed to the nomination.
It’s these massive trends that seem to be beyond our control and understanding, so we lean on our need for narratives, and tend to focus disproportionately on the individual actors of history. Though it is through systems that the actors’ scripts are written.
The world is “complex” in a particular sense: most individuals contribute a little to the large-scale behavior, as would be expected in a system that aggregates all their actions uniformly – one that is effectively stochastic or chaotic on all scales. But somehow the world is ordered in some ways and not totally chaotic – there is enough structure for a few people to have an apparently large impact, so their actions reach much further than would be expected in a chaotic (or “noisy”) system. The world may be on the ‘edge’ of chaos.
My point is that the true complexity of the world is something we don’t understand, and most people fail to take a system level perspective for that reason. Human faculty is more amenable to storytelling. We like to have heroes and villains, even if this perspective makes some of the deepest problems of human society more difficult to solve.
I can take up one example to illustrate the point – not long ago, one of the disgusting things Trump said was that women would have to have some kind of punishment for getting an abortion. The media seized on this, of course, and there was a lot of blather about it, but here is the problem: Trump has passed no law, and may never be able to pass any such law, while the conservative takeover of state and local governments is almost complete. And it is at the local level that a slow, creeping regime of anti-choice law has been imposed in many parts of the country. It is this greater problem that is distracted from when the national discussion centers on some celebrity shitstick – it is the greater problem because Trump will almost certainly lose, but the steady loss of women’s rights will continue afterwards. (1)
I have been expecting for a few months now that Donald Trump will break the Republican party, as the tides have dictated. The fundamental dilemma is being exposed: No candidate can win the Republican primary without being staunchly anti-immigrant. No candidate can win a general election while being staunchly anti-immigrant. This transition seems to have occurred during the Bush administration, while most of white america failed to notice. I am expecting a 3rd party candidate to appear in the next few weeks from establishment conservatives. In the aftermath of 2016 (before the election, even?) there will perhaps be a tectonic shift in the allegiances of both parties, as the system reforms itself and responds to the radical shift in the country’s demographics.
I have posted here before that what lies on the other side of a singularity, or a phase transition cannot be predicted. We may find communists and neonazis, radicals left and right jumping into the melee. We are approaching the hour of extremes.
1. Obviously climate change is another prime example, and one for which we can’t even find a proper villain to motivate our action. The bad guy in that case is us.
Cogito, Ergo…
From Part IV of Descartes’ Discourse on the Method(1):
There is an error here (2). What is the “I”? What restrictions does the existence of thoughts and consciousness alone put on the forms it may take? Could the thinking thing really be independent of “any material thing”?
Let there be a thinking thing, then. A mind. It perceives by some instruments, which correlate with the opening and closing of the eyes, the stopping of the ears. It exists with some temporal sense, such that it recollects “past” events and anticipates “future” events. It sleeps, it dreams at different times, all correlated with changes in experience.
Clearly the mind doesn’t see without eyes. It doesn’t hear without ears. When the brain takes a whack, its function doesn’t continue independently and unaffected. There is clear physicality to experience, and there are a number of reasons to suspect the brain and the body are the substrates of the mind.
Let us assume, as I have done here before (the “Materialistic Principle”), that that which is not observable does not exist. It follows that the mind can only arise from what is physically observable about the brain (3, 4). If we take it that the mind is not “hidden” somewhere inside the brain, then it follows that the mind is not an integral whole; it is an emergent property or process of the electrical signals that pass in and among the parts of the brain.
Creatures are not made by design. They are evolved, but that process gives a striking appearance of design. It is obvious that nature is capable of acting as if it were* a creative entity. Maybe not with foresight and planning, yet somehow with an apparent agency.
*It seems to me that we don’t grant nature the capacity to “truly” act creatively for this reason: We see too clearly the detailed mechanisms by which it achieves that appearance. There exists no integral whole for the mind as we trivially observed above; the creativity the mind possesses must also arise through a confluence of components, just as in the case of natural selection.
Clearly there is no god (5), and it would appear that nowhere(!) in the world is there a thinking, creating thing that does not arise from a confluence of more basic constituents in a distributed system. This system’s processes may be distributed more widely in time, as in the case of natural selection, or more tightly distributed in time but highly complex in space as in the case of the brain, but this consequence is unavoidable: selection is not “like” a creative entity, it is one in any sense we are willing to impart to ourselves.
You have heard of a “god of the gaps,” but there is a similar kind of error in reasoning when it is assumed – and it is assumed very commonly – that people possess free will. It is a kind of deification of the self, an intercession of a vague divinity. While that assumption is totally normal, and probably embedded as part of the adaptive process in humans(6), in this current discussion I posit that all agency is only apparent agency. Or equivalently, that that which is indistinguishable from agency is agency itself (7).
The development process in technology is so organic, especially where researchers are taking advantage of the principles of evolution and adaptation. It seems strange to me that we should be so cavalier about allowing neural networks to train and evolve. These processes are so poorly understood (8) that we risk putting ourselves in a situation where we will have created strong AI by accident before doing it on purpose, without even knowing how we did it. As I have explicitly stated, the evolutionary process “has a mind of its own.” This cavalier attitude seems to stem in part from the basic assumption that we are so special, that we must be the only thinking things in the universe.
We are not so special, except maybe in terms of how rare a thing we might be; maybe the processes that made us form an NP-hard problem, one that can’t be shortcut, so that this planet really did necessarily invest a lot of time and sacrifice in creating our species.
But even if we are the rarer structure, we do not exist so apart and different from the other structures of the universe. What appear to be differences of kind are not differences of kind solely; they come from differences in the quantity of pieces and the subtlety of their organization.
I recall the arguments of Lucretius and wonder why it should take so long for us to see that the same types of arguments about objects and matter being made of atoms carry similar implications for our minds, and for the simplicity that underlies the apparent complexity of all forms.
1. From the translation at Gutenberg. https://www.gutenberg.org/files/59/59-h/59-h.htm
2. And I’m sure, somewhere in the reasoning I put forth to correct it, heheh.
3. Is the brain even separable from the body? Can I assume its function doesn’t distribute into processes occurring throughout the nervous system? I remember reading research articles about people’s emotional states being altered with the amputation of limbs, or after having artificial hearts implanted. If the brain relies so heavily on cues from the nervous system, it may not actually be able to access some states or engage in some processes without contingent parts. Or put it like this: Can a congenitally blind person imagine the experience of sight? Do you know what it’s like to be a bat? Can you even know what it’s like to reside in the body of another person? Even your identical twin?
4. Is the brain a quantum computer? What are the limits for what is physically measurable? I recall reading an argument in Schlosshauer that suggested it probably was not a quantum computer – or at least that, from what was known about neuron activity, it couldn’t involve quantum mechanics. I’m just assuming that some microscale structure and electrical signals are as deep as you need to get for a brain.
5. This is a kind of implicit axiom on this blog, as a part of what I’ve been trying to do here is talk about things purely from a post-theistic worldview, i.e., what are the things we would think about if we were not chained to naive assumptions about the answers to basic questions and having to argue constantly with deists? I’ve thought maybe I should post more about the justification for atheism, but there doesn’t seem to be much point – I don’t dwell on it too much anymore, and it’s beat to death in the popular sphere.
6. Some lies it might be beneficial to believe, as I have mentioned on this blog before. I think this might be one of them. Apparently Isaac Singer said this: “We must believe in free will — we have no choice.”
7. Notice the use of MP. There’s some real irony here. Some of these things I don’t think most people will realize until AFTER strong AI has been brought into the world and made them incredibly clear to everyone. If it satisfies the Turing test, you have no choice: it thinks just as much as you do, and probably not too differently from how you do.
8. I am suddenly reminded of the beautifully creative play of AlphaGo recently.
The moral implications of nonlinearity and emergence.
There are a few concepts I think it would help to take care of: The first is in the companion post here.
The other is described below; it just happens that Dr. Neil Tyson tweeted about this yesterday:
It has often been repeated, and my experience bears this out, that the real trouble doesn’t come from what we know we don’t understand, but from what we think we understand but really don’t(3). If we had the right questions, the answers would soon follow.
There are so many people who study the subjects of human behavior: psychologists, sociologists, political scientists, etc. How many of them would pretend to understand something of quantum mechanics?
Quantum mechanics is just linear algebra, some complex variables, and the very general physical concepts of uncertainty and wave interference that require those mathematics. But waves interfere linearly: that is, they just stack. And the operators that appear in the equations of (non-relativistic) quantum mechanics are just stand-ins for acts of measurement or transformation on the information of the state, and, as required by the Schrödinger equation(4), act linearly.
But the behavior of the vast majority of the systems of the world, which do not submit to the simplifications and contrivances of a well-designed physics experiment, is not linear. In living systems, the interactions between components – and really, it seems, their integration – result in a nonlinear relationship between small-scale and large-scale behavior.
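To make the contrast concrete, here is a minimal Python sketch – mine, not part of the argument itself, with arbitrary coefficients – comparing a linear system, where separately evolved initial conditions simply stack, with a nonlinear one, where they do not:

import numpy as np
from scipy.integrate import odeint

t = np.linspace(0.0, 2.0, 50)

def linear(y, t):
    return -0.5 * y           # linear: superposition holds

def logistic(y, t):
    return y * (1.0 - y)      # nonlinear: superposition fails

for rhs, name in [(linear, "linear"), (logistic, "nonlinear")]:
    y1 = odeint(rhs, 0.2, t)[:, 0]     # evolve initial condition a
    y2 = odeint(rhs, 0.5, t)[:, 0]     # evolve initial condition b
    y12 = odeint(rhs, 0.7, t)[:, 0]    # evolve a + b directly
    print(name, "max |evolve(a+b) - (evolve(a) + evolve(b))| =",
          float(np.max(np.abs(y12 - (y1 + y2)))))

The linear system reports an error at the level of numerical noise; the logistic one does not, and no rearrangement will fix that. That is the seed of everything that follows.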
The consequences of that nonlinearity are profound and confounding. Pen-and-paper analysis becomes useless for many questions, when no equation can be written down. As a system grows, its regime – the very rules it obeys – can change radically as its configuration traverses a complex and unknown topology, making computations and simulations necessary, though not sufficient, for understanding. These systems are very hard, yet the people who study these fields today do pretend to understand!
And now we must be honest: we, as humans, do not understand ourselves. The softer sciences have made progress, but it has been slow and groping, stymied by their being bound to the use of inferior, insufficiently rigorous tools. What knowledge they have gained is washed out by an ocean of biases, assumptions, and plain ignorance in the greater public, as a drive toward self-serving beliefs comes into play, particularly in human affairs.
And now to the proper subject of this post, and the reason I tagged it “Black History Month.” (I’d hoped to make more general comments, but this post is already too long and a specific example serves as well.) A few days ago this article appeared in the Atlantic from Dr. Adia Wingfield(1) (emphasis mine):
Progress has undoubtedly been made since the days of explicit segregation, and most white people no longer openly advocate for segregation in neighborhoods, schools, and offices. When speaking to researchers, many even argue that integration is important and necessary. … Despite laws prohibiting segregation…it persists on several fronts today.
Some of the most striking studies done on present-day segregation have to do with how it’s connected to the ways families share money and other resources among themselves. The sociologist Thomas Shapiro, for instance, argues that the greater wealth that white parents are likely to have allows them to help out their children with down payments, college tuition, and other significant expenses that would otherwise create debt. As a result, white families often use these “transformative assets” to purchase homes in predominantly white neighborhoods, based on the belief that sending their children to mostly white schools in these areas will offer them a competitive advantage. (These schools are usually evaluated in racial and economic terms, not by class size, teacher quality, or other measures shown to have an impact on student success.) Shapiro’s research shows that while whites no longer explicitly say that they will not live around blacks, existing wealth disparities enable them to make well-meaning decisions that, unfortunately, still serve to reproduce racial segregation in residential and educational settings.
Local decisions and actions have global consequences, not always the linear sum of the local, or even foreseeable from the local. This is the deeply nonintuitive part, the part people will fail to understand because it violates some vague assumption I might call ‘linearity of intent’, and because they really can’t anticipate that something bad could come from most people meaning well:
It is not actually necessary for people to be racist to reproduce a systemically racist society. (5)
This carries more general implications about the morality of actions, carried out locally, which have global consequences not directly foreseeable, but I will stick to this example specifically. There is a kind of transmutation that occurs through the nonlinear aggregation of people’s behavior, so that decisions which appear acceptable at one scale grow to have dire consequences at the larger.
In this case it means that people doing their best by their children, by the fact of acting in a world in which whites enjoy disproportionate privilege, perpetuate segregation, and systemic oppression follows (6, 7). Hiring managers, acting without any racial intent of their own, reinforce the topology of social networks by selecting from a pool of applicants that comes to them with such bias already built in (8).
It follows that we can’t assume our actions will remain unaltered by their integration with the actions of others, nor that the globally/systemically reproduced intent of the macroscopic system will be the same as the majority’s. Even worse: there are cases where emergence creates a system that behaves in ways directly opposed to the intent of individuals.
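Thomas Schelling’s well-known segregation model makes this dynamic concrete. Below is a minimal Python sketch of it – the grid size, vacancy rate, and the 1/3 “tolerance” threshold are chosen purely for illustration. Every agent is mildly tolerant, moving only when fewer than a third of its neighbours are like it, yet the global outcome is strong segregation:

import numpy as np

rng = np.random.default_rng(1)
n = 50
grid = rng.choice([0, 1, -1], size=(n, n), p=[0.45, 0.45, 0.10])  # -1 marks an empty cell

def frac_same(g, i, j):
    """Fraction of an agent's occupied neighbours that share its type."""
    nb = g[max(0, i-1):i+2, max(0, j-1):j+2].ravel()
    nb = nb[nb != -1]
    same = np.sum(nb == g[i, j]) - 1          # subtract the agent itself
    occupied = len(nb) - 1
    return same / occupied if occupied > 0 else 1.0

def mean_similarity(g):
    return np.mean([frac_same(g, i, j)
                    for i in range(n) for j in range(n) if g[i, j] != -1])

print("mean like-neighbour fraction before:", round(mean_similarity(grid), 2))  # ~0.5, well mixed

for _ in range(30):                            # each round, every mildly unhappy agent relocates
    empties = [tuple(p) for p in np.argwhere(grid == -1)]
    for i in range(n):
        for j in range(n):
            if grid[i, j] != -1 and frac_same(grid, i, j) < 1/3:
                k = rng.integers(len(empties))
                ei, ej = empties[k]
                grid[ei, ej], grid[i, j] = grid[i, j], -1
                empties[k] = (i, j)

print("mean like-neighbour fraction after: ", round(mean_similarity(grid), 2))  # typically 0.7+

No agent in this toy world objects to living among the other type; the segregated outcome is produced entirely by the aggregation.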
Without a better understanding of nonlinearity and emergence – and with a rigor that deprives us of safety in our preconceptions – I don’t imagine a solution to these problems can be found.
1. Two Atlantic articles about segregation and poverty. http://www.theatlantic.com/business/archive/2015/02/is-ending-segregation-the-key-to-ending-poverty/385002/?utm_source=SFFB, http://www.theatlantic.com/business/archive/2016/02/segregation-tomorrow/459942/
2. WaPo article about whether more intelligent people are less racist. https://www.washingtonpost.com/news/wonk/wp/2016/01/27/are-smarter-people-actually-less-racist/?tid=sm_fb
3. I can’t find a proper attribution. It wasn’t Twain.
4. And some basic “common sense” assumptions: that when all possible events are accounted for, their total probability is 1.0; that the position from which distances are measured should imply nothing about the prediction; and such.
5. In 2014, I realized to my surprise how much sociologists had been able to learn; that this was not a foreign or outlandish concept to them, but some had already made this observation, ex.: Bonilla-Silva.
6. Another very related example comes immediately to mind: it is not necessary that most police in a black community be malicious to do harm. It is only necessary that they be afraid, indifferent, ignorant, or any mix thereof.
7. If many of our problems really do take on this form, can they even have a solution? I have heard that busing in an earlier era was actually closing the black/white achievement gap and undoing the evil of segregation before America’s more inveterate nature reasserted itself during the 1980s. This American Life, 562.
8. This was also discussed in the Wingfield article.
The multiplicity of agency.
The multiplicity, Ω, is the number of ways a system might be configured given some observable macroscopic state (macrostate). If I flip a coin and catch it in my hand, and I do not look at it, its multiplicity is two(1). A body with no extension (a particle with no length, width, or depth, which can’t twist, bend, rotate, etc.), contained in some box, can occupy some position in that box and have some velocity, and only these two variables contribute to its multiplicity.
If I take a large handful of coins, and toss them up in the air, there is an expectation that “about” half of them will come up heads, correct? What may be less apparent, but can still be intuited, is this: as the number of coins being tossed increases, the percent by which they deviate from a 50/50 split will decrease. This is sometimes called the weak law of large numbers, and it is the simplest kind of emergence I can think of: a macroscopic, qualitative property that arises from increasing a quantity in the system. It is statistically all but impossible that a large number of fair coin tosses will fail to reveal the underlying probability of a single toss.
Viewed another way: there exists a set of all possible outcomes for every coin flip. Because they are all equally probable, and there are so many more outcomes with the coins split about 50/50, those outcomes are much more likely. The states that split the coins evenly have a much greater multiplicity.
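A quick simulation sketch of the claim (my addition; the number of trials and the coin counts are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
for N in [10, 100, 10_000, 1_000_000]:
    heads = rng.binomial(N, 0.5, size=200)     # 200 independent trials of N fair tosses
    dev = np.mean(np.abs(heads / N - 0.5))     # mean |fraction of heads - 1/2|
    print(f"N = {N:>9}: average deviation from 50% = {100 * dev:.3f} percentage points")

The deviation shrinks roughly like 1/sqrt(N) – the high-multiplicity near-50/50 macrostates swallow everything else as N grows.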
There are many different kinds of statistical convergence, and I suspect they can each be associated with a type of emergent observable property. With the multiplicity as it appears in formal statistical mechanics sufficiently explained, I can now give the multiplicity of agency.
Let there exist some macroscopic behavior of a community. This behavior is associated with an effective, or apparent agency which emerges from aggregation of individuals, or the local behavior of components. The multiplicity of this agency is the number of local behaviors which all contribute to reproducing the same macroscopic behavior.
• People shopping for clothes operate under a number of motivations, and may weigh a number of different things when making purchasing decisions. Different people also go with very different fashion choices, or will prefer certain stores, etc. To the extent that all of these variations in local behavior typically contain a common thread of preferring lower-priced goods, the market will generate a downward pressure on the costs of production that is robust to all of this variation. This pressure has contributed to the creation of sweatshops. Because people make decisions at a local level and act at that scale, the apparent macroscopic agency, or systemic behavior, is indifferent to this variation.
What I am trying to get at is a way of understanding systemic societal problems in rigorous terms that show the qualitative differences between global scale and local scale behavior.
• This seems closely associated with problems of nonlinearity, and a violation of the basic assumptions people habitually make(2) when trying to understand these systems. It’s just not enough that ‘most people’ would not want some particular system behavior.
• How do wars occur, anyway? It would be too simplistic to attribute this completely to leaders. It would be safe to say that most people do not wish for these events, but we seem to habitually behave in ways that contribute to tensions and conflict; that the greater multiplicity belongs to the emergent agency which creates and maintains hostile divisions between people.
Maybe as a species we are just feckless? After all of our advances in technology and science, why do we still not understand ourselves well enough to solve fundamental problems like poverty and violence? I would hope we are not still holding on to the notion that our individual free will has relevance and power to affect our behavior as a society(3), as the recurrence of civil wars, the tendency of markets to produce dangerous bubbles, and many other phenomena demonstrate that the aggregation of behavior can create systems acting in direct opposition to local intentions.
The companion post is here.
1. There are some subtle interpretational issues here about what probability means, but I will gloss over these because they are largely philosophical; to the extent they relate to underlying issues in classical and quantum probability, etc., I think I can get away with ignoring them for the purposes of this post.
2. The prior post about linearity.
3. Econophysics, for example, can get pretty far, and reproduce some surprising results assuming individuals’ behavior is entirely random. It hardly seems to matter that people operate under coherent local rules at all, at least for certain properties.
A size-consistent approach to strongly correlated systems using a generalized antisymmetrized product of nonorthogonal geminals
P.A. Johnson, P.W. Ayers, P.A. Limacher, S. De Baerdemacker, D. Van Neck, P. Bultinck
Computational and Theoretical Chemistry
1003 (2013), 101-113
Inspired by the wavefunction forms of exactly solvable algebraic Hamiltonians, we present several wavefunction ansatze. These wavefunction forms are exact for two-electron systems; they are size consistent; they include the (generalized) antisymmetrized geminal power, the antisymmetrized product of strongly orthogonal geminals, and a Slater determinant wavefunction as special cases. The number of parameters in these wavefunctions grows only linearly with the size of the system. The parameters in the wavefunctions can be determined by projecting the Schrödinger equation against a test set of Slater determinants; the resulting set of nonlinear equations is reminiscent of coupled-cluster theory, and can be solved with no greater than O(N⁵) scaling if all electrons are assumed to be paired, and with O(N⁶) scaling otherwise. Based on the analogy to coupled-cluster theory, methods for computing spectroscopic properties, molecular forces, and response properties are proposed.
Formation of σ and π bonds
As an illustration of the VB procedure, consider the structure of H2O. First, note that the valence-shell electron configuration of an oxygen atom is 2s²2px²2py¹2pz¹, with an unpaired electron in each of two 2p orbitals; a Lewis diagram for the oxygen atom shows these two unpaired electrons. Each hydrogen atom has an unpaired 1s electron (H·) that can pair with one of the unpaired oxygen 2p electrons. Hence, a bond can form by the pairing of each hydrogen electron with an oxygen electron and the overlap of the orbitals they occupy. The electron distribution arising from each overlap is cylindrically symmetrical around the respective O−H axis and is called a σ bond. The VB description of H2O is therefore that each hydrogen atom is linked to the oxygen atom by a σ bond formed by pairing of a hydrogen 1s electron and an oxygen 2p electron. Because a wave function can be written for this structure, an energy can be calculated by solving the Schrödinger equation, and a bond length can be determined by varying the nuclear separation and identifying the separation that results in the minimum energy.
The term σ bond is widely used in chemistry to denote an electron distribution like that in an oxygen-hydrogen bond, specifically one that has cylindrical symmetry about the line between the two bonded atoms. It is not the only type of bond, however, as can be appreciated by considering the structure of a nitrogen molecule, N2. Each nitrogen atom has the valence-shell electron configuration 2s²2px¹2py¹2pz¹. If the z direction is taken to lie along the internuclear axis of the molecule, then the electrons in the two 2pz orbitals can pair and overlap to form a σ bond. However, the 2px orbitals now lie in the wrong orientation for head-to-head overlap, and they overlap side-to-side instead. The resulting electron distribution is called a π bond. A π bond also helps to hold the two atoms together, but, because the region of maximum electron density produced by the overlap is off the line of the internuclear axis, it does not do so with the same strength as a σ bond. The 2py electrons can pair and overlap in the same way and give rise to a second π bond. Therefore, the structure of an N2 molecule consists of one σ bond and two π bonds. Note how this corresponds to and refines the Lewis description of the :N≡N: molecule.
In summary, a single bond in a Lewis structure corresponds to a σ bond of VB theory. A double bond corresponds to a σ bond plus a π bond, and a triple bond corresponds to a σ bond plus two π bonds.
Promotion of electrons
Valence bond theory runs into an apparent difficulty with CH4. The valence-shell electron configuration of carbon is 2s²2px¹2py¹, which suggests that it can form only two bonds to hydrogen atoms, in which case carbon would have a valence of 2. The normal valence of carbon is 4, however. This difficulty is resolved by noting that only the overall energy of a molecule is important, and, as long as a process leads to a lowering of energy, it can contribute even if an initial investment of energy is required. In this case, VB theory allows promotion to occur, in which an electron is elevated to a higher orbital. Thus, a carbon atom is envisaged as undergoing promotion to the valence configuration 2s¹2px¹2py¹2pz¹ as a CH4 molecule is formed. Although promotion requires energy, it enables the formation of four bonds, and overall there is a lowering of energy. Carbon is particularly suited to this promotion because the energy involved is not very great; hence the formation of tetravalent carbon compounds is the rule rather than the exception.
The discussion is not yet complete, however. If this description of carbon were taken at face value, it would appear that, whereas three of the CH bonds in methane are formed from carbon 2p orbitals, one is formed from a carbon 2s orbital. It is well established experimentally, however, that all four bonds in methane are identical.
Quantum mechanical considerations resolve this dilemma by invoking hybridization. Hybridization is the mixing of atomic orbitals on the same atom. When the 2s and three 2p orbitals of a carbon atom are hybridized, they give rise to four lobelike sp3 hybrid orbitals that are equivalent to one another apart from their orientations, which are toward the four corners of a regular tetrahedron. Each hybrid orbital contains an unpaired electron and can form a σ bond by pairing with a 1s electron of a hydrogen atom. Hence, the VB structure of methane is described as consisting of four equivalent σ bonds formed by overlap of the s orbitals of the hydrogen atoms with sp3 hybrid orbitals of the carbon atom.
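The algebra behind this can be checked numerically. In a basis of orthonormal s, px, py, and pz functions, the four sp3 hybrids are equal-weight combinations with alternating signs; the short Python sketch below (an illustration added here, assuming that orthonormal basis) verifies that the hybrids are orthonormal and that their lobes point at the tetrahedral angle:

import numpy as np

# Rows: coefficients of (s, px, py, pz) in each of the four sp3 hybrids.
H = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]])

print(np.allclose(H @ H.T, np.eye(4)))    # True: the hybrids are orthonormal

# The p-orbital parts give the lobe directions; the angle between any two
# is the tetrahedral angle, arccos(-1/3), about 109.47 degrees.
d1, d2 = H[0, 1:], H[1, 1:]
cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(np.degrees(np.arccos(cos)))         # 109.47...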
Hybridization is a major contribution of VB theory to the language of chemistry. The structure of ethylene can be examined in VB terms to illustrate the use of hybridization. To reproduce the Lewis structure given earlier, it is necessary to contrive a double bond (i.e., a σ bond plus a π bond) between the two carbon atoms. Such a bonding pattern can be achieved by selecting the carbon 2s orbital, from which an electron has been promoted, and two of its 2p orbitals for hybridization, leaving one 2p orbital unhybridized and ready for forming a π bond. When one 2s and two 2p orbitals are hybridized, they form sp2 hybrid orbitals, which have lobelike boundary surfaces that point to the corners of an equilateral triangle; the unhybridized 2p orbital lies perpendicular to the plane of the triangle (Figure 11). Each of the orbitals contains a single electron. Two of the hybrids can form σ bonds to two hydrogen atoms, and one of the hybrids can form a σ bond to the other carbon atom (which has undergone similar hybridization). The unhybridized 2p orbitals are now side-by-side and can overlap to form a π bond.
This description conforms to the Lewis description. It also explains naturally why ethylene is a planar molecule, because twisting one end of the molecule relative to the other reduces the overlap between the 2p orbitals and hence weakens the π bond. All double bonds confer a torsional rigidity (a resistance to twisting) to the parts of molecules where they lie.
Resonant structures
The description of the planar hexagonal benzene molecule, C6H6, illustrates another aspect of VB theory. Each of the six carbon atoms is taken to be sp2 hybridized. Two of the hybrid orbitals are used to form σ bonds with the carbon atom neighbours, and one is used to form a σ bond with a hydrogen atom. The unhybridized carbon 2p orbitals are in a position to overlap and form π bonds with their neighbours (Figure 12). However, there are several possibilities for pairing; two are as follows:
Two possibilities for pairing of the planar hexagonal benzene molecule, C6H6.
There is a VB wave function for each of these so-called Kekulé structures. (They are so called after Friedrich August Kekulé, who is commonly credited with having first proposed the hexagonal structure for benzene in 1865; however, a cyclic structure had already been proposed by Joseph Loschmidt four years earlier.) The actual structure is a superposition (sum) of the two wave functions: in VB terms, the structure of benzene is a resonance hybrid of the two canonical structures. In quantum mechanical terms, the blending effect of resonance in the Lewis approach to bonding is the superposition of wave functions for each contributing canonical structure. The effect of resonance is the sharing of the double-bond character around the ring, so that each carbon-carbon bond has a mixed single- and double-bond character. Resonance also (for quantum mechanical reasons) lowers the energy of the molecule relative to either contributing canonical structure. Indeed, benzene is a molecule that is surprisingly resistant to chemical attack (double bonds, rather than being a source of molecular strength and stability, are usually the targets of chemical attack) and is more stable than its structure suggests.
One of the difficulties that has rendered VB computationally unattractive is the large number of canonical structures, both covalent and ionic, that must be used in order to achieve quantitatively reliable results; in some cases tens of thousands of structures must be employed. Nevertheless, VB theory has influenced the language of chemistry profoundly, and the concepts of σ and π bonds, hybridization, and resonance are a part of the everyday vocabulary of the subject.
Molecular orbital theory
The alternative quantum mechanical theory of the electronic structures of molecules is MO theory. This approach was introduced about the same time as VB theory but has proved more amenable to quantitative implementation on computers. It is now virtually the only technique employed in the computational investigation of molecules. Like VB theory, it has introduced a language that is widely used in chemistry, and many chemists discuss chemical bonds in terms that combine both theories.
Just as an atomic orbital is a wave function that describes the distribution of an electron around the nucleus of an atom, so a molecular orbital (an MO) is a wave function that describes the distribution of an electron over all the nuclei of a molecule. If the amplitude of the MO wave function is large in the vicinity of a particular atom, then the electron has a high probability of being found there. If the MO wave function is zero in a particular region, then the electron will not be found there.
Although an MO can in principle be determined by solving the Schrödinger equation for an electron in the electrostatic field of an array of nuclei, in practice an approximation is always adopted. In this approximation, which is known as the linear combination of atomic orbitals (LCAO) approximation, each MO is constructed from a superposition of atomic orbitals belonging to the atoms in the molecule. The size of the contribution of an orbital from a particular atom indicates the probability that the electron will be found on that atom. The actual shape of the molecular orbital (and indirectly its energy) is a reflection of the extent to which the individual atomic orbitals interfere with one another either constructively or destructively.
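The LCAO idea can be sketched numerically for a toy system of two identical atoms carrying one orbital each; the on-site energy alpha, coupling beta, and overlap S below are illustrative values, not fitted to any molecule. The molecular orbitals follow from the generalized eigenvalue problem Hc = ESc:

import numpy as np
from scipy.linalg import eigh

alpha, beta, S = -1.0, -0.5, 0.2      # assumed illustrative matrix elements
H = np.array([[alpha, beta],
              [beta,  alpha]])
Smat = np.array([[1.0, S],
                 [S,   1.0]])

E, C = eigh(H, Smat)                  # columns of C are the MO coefficients
print("MO energies:", E)              # (alpha+beta)/(1+S) and (alpha-beta)/(1-S)
print("bonding MO:", C[:, 0])         # equal signs on both atoms: constructive overlap

The lower (bonding) orbital has equal-sign coefficients on both atoms, reflecting constructive interference; the upper (antibonding) orbital has opposite signs and a node between the nuclei.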
OpenChemistry Lecture Videos
Professor A.J. Shaka
CHEM 131A: Physical Chemistry - Quantum Principles
This course provides an introduction to quantum mechanics and the principles of quantum chemistry, with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied.
Lecture 1: Introduction
Lecture 2: Particles, Waves, the Uncertainty Principle and Postulates
Lecture 3: More Postulates, Superposition, Operators and Measurement
Lecture 4: Complementarity, Quantum Encryption, Schrodinger Equation
Lecture 5: Model 1D Quantum Systems - "The Particle In a Box"
Lecture 6: Quantum Mechanical Tunneling
Lecture 7: Tunneling Microscopy and Vibrations
Lecture 8: More on Vibrations and Approximation Techniques
Lecture 9: Potentials + Quantization in Two Spatial Dimensions
Lecture 10: Particles on Rings and Spheres... A Prelude to Atoms
Lecture 11: Particle on a Sphere, Angular Momentum
Lecture 12: Spin, The Vector Model and Hydrogen Atoms
Lecture 13: Hydrogen Atoms: Radial Functions & Solutions
Lecture 14: Atomic Spectroscopy Selection Rules, Coupling, and Terms
Lecture 15: Hydrogen Wavefunctions, Quantum Numbers, Term Symbols
Lecture 16: Energy Level Diagrams, Spin-Orbit Coupling, Pauli Principle
Lecture 17: Approximation Methods: Variational Principle, Atomic Units
Lecture 18: The Hydride Ion (Continued): Two-Electron Systems
Lecture 19: The Hydride Ion (Try #3!) The Orbital Philosophy
Lecture 20: Hartree-Fock Calculations, Spin, and Slater Determinants
Lecture 21: Bigger Atoms, Hund's Rules and the Aufbau Principle
Lecture 22: The Born-Oppenheimer Approximation and H2+
Lecture 23: LCAO-MO Approximation Applied to H2+
Lecture 24: Molecular Orbital: The Virial Theorem in Action
Lecture 25: Optimizing H2+ Molecular Orbital, H2, & Config Interaction
Lecture 26: Qualitative MO Theory
Lecture 27: CH4 Molecular Orbitals and Delocalized Bonding
Lecture 28: What We've Covered: Course Summary
Solving the Schrödinger equation numerically by expansion in eigenstates
Computational physics example – Quantum Mechanics
By Jonas Tjemsland, Andreas Krogen and Jon Andreas Støvneng.
Last edited: March 12th 2016
In this notebook we will be solving the one-dimensional Schrödinger equation,
$$i\hbar\frac{\partial\Psi(x, t)}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi( x, t)}{\partial x^2}+V(x)\Psi( x, t) $$
numerically for an arbitrary initial condition $\Psi(x, 0)$. The eigenstates $\psi_n(x)$ and the eigenenergies $E_n$ of the system are found by solving the time-independent Schrödinger equation
$$-\frac{\hbar^2}{2m}\frac{\partial^2 \psi_n(x)}{\partial x^2}+V(x)\psi_n(x) = E_n\psi_n(x),$$
and normalizing the result. The initial condition $\Psi(x, 0)$ is expanded in terms of $\psi_n(x)$: $$\Psi(x,0) = \sum_{n}\alpha_n\psi_n(x).$$ In turn, the solution at time $t$, $\Psi(x, t)$, is given by
$$\Psi(x, t) = \sum_n\alpha_n\psi_n(x)\exp\left(-i\frac{E_n}{\hbar}t\right).$$
As an example, we will be propagating an electron given by a Gaussian wave packet towards a potential barrier. A similar example is studied in our notebook on One-Dimensional Wave Packet Propagation, but with a quite different approach.
The numerical scheme that is used is developed and explained in detail in the appendix at the end of this notebook. The reader is advised to read through this before reviewing the notebook.
We start by importing packages, setting common figure parameters and defining physical parameters.
In [1]:
import matplotlib.pyplot as plt
from scipy.linalg.lapack import ssbevd
import numpy as np
from matplotlib import animation

newparams = {'axes.labelsize': 25, 'axes.linewidth': 1, 'savefig.dpi': 200,
             'lines.linewidth': 3, 'figure.figsize': (20, 10),
             'ytick.labelsize': 25, 'xtick.labelsize': 25,
             'ytick.major.pad': 5, 'xtick.major.pad': 5,
             'figure.titlesize': 25,
             'legend.fontsize': 25, 'legend.frameon': True,
             'legend.handlelength': 1.5, 'axes.titlesize': 25,
             'mathtext.fontset': 'stix',
             'font.family': 'STIXGeneral'}
plt.rcParams.update(newparams)  # Apply the common figure parameters
hbar = 1.05E-34 # J⋅s. Reduced Planck's constant
m = 9.11E-31 # kg. Electron mass
As mentioned in the introduction, we will be propagating an electron towards a potential barrier in one dimension. We will be considering a domain $x\in[0,L]$. Let us use $\Delta x = 1\,\text{Å}$, which is a typical diameter of an atom. In turn, the width of the barrier is determined by the number of discretization points it consists of. We want each side of the potential barrier to be large, so that the electron is not influenced by the barrier or the edges at $t=0$. We choose $N=10$ discretization points for the barrier, and 100 times that for each of the sides. The barrier has a height $V_0 = 1.5\cdot 1.6\cdot 10^{-19}\,\text{J} = 1.5\,\text{eV}$.
Play around with other parameters and potential barriers. The code in this notebook works even for arbitrary potentials!
In [2]:
V0 = 1.5*1.6E-19 # J. Potential height
dx = 1e-10 # m. Discretization step
N = 10 # #. Number of discretization points in the barrier
N_sides = 100*N # #. Number of discretization points on each side of the barrier
Ntot = N + 2*N_sides # Total number of discretization points
x = np.linspace(0, dx*Ntot, Ntot) # x-axis
# Potential
V = np.array([0]*N_sides + [V0]*N + [0]*N_sides)
Wave packet
We will be representing the initial electron as a Gaussian wave packet, $$\Psi(x,0)=C\exp\left(-\frac{(x-x_0)^2}{4\sigma^2}+i\frac{p_0x}{\hbar}\right),$$ where $p_0=\sqrt{2mE_0}$ is the momentum of the wave packet, $E_0$ the energy of the electron, $x_0$ is the initial expectation value of the position, and $\sigma$ is some parameter specifying the width of the wave packet.
It will not be unreasonable to choose $E_0\sim V_0$. As we will see, this will give a good visualization of transmission and reflection. We start by choosing an energy a bit higher than the potential height, $E_0=1.39V_0$. We choose $x_0$ to be in the middle of the left part of the domain, and $\sigma$ (one standard deviation) to be 1/8 of the left part. Play around with different parameters!
In [3]:
E0 = 1.39*V0  # J. Energy of the electron
x0 = 0.5*dx*N_sides
k0 = np.sqrt(2.0*m*E0)/hbar
sigma = dx*N_sides/8.
A = (2*np.pi*sigma**2)**(-0.25)
Psi_0 = A * np.exp(-(x-x0)**2/(4*sigma**2)) * np.exp(1j*k0*x)
# Check if the wave function is normalized
print("Normalization:", dx*np.sum(np.abs(Psi_0)**2))
Normalization: 0.999471364542
We now visualize the initial wave packet and the potential (with a suitable scaling).
In [4]:
plt.plot(x, .75*V*np.max(np.abs(Psi_0)**2)/max(1e-30,np.max(V)), '--')
plt.plot(x, np.abs(Psi_0)**2)
plt.title('Initial probability distribution and potential')
plt.xlabel('$x$ [m]')
Solving the eigenvalue problem (Schrödinger equation)
Now that all the parameters are settled, we can finally solve the Schrödinger equation. This is done by solving an eigenvalue problem. We are using a real symmetric band matrix solver (You could of course also use numpy.linalg.eigh, but this requires the initialization of the whole matrix, mostly consisting of zeros). We thus need to initialize the diagonal and the sub- and superdiagonal. This is explained in detail in the appendices.
Note that this is the most computationally demanding part of these computations.
In [5]:
diag = hbar**2/(m*dx**2) + V # Diagonal
sup_diag = np.ones(Ntot)*(-hbar**2/(2*m*dx**2)) # Superdiagonal
In [6]:
E, psi_n, _ = ssbevd([sup_diag, diag]) # Call solver
Let us visualize some of the eigenstates and eigenenergies!
In [7]:
for i in [0, 1, 3]:
    plt.plot(x, psi_n[:,i], label=r"$\psi_{%.0f}(x)$"%(i))
# Scale the potential to the plotted eigenmodes
plt.plot(x, .75*V*np.max(np.abs(psi_n[:,0:4]))/max(1e-30,np.max(V)), '--', label="Potential")
plt.title("Eigenmodes for the given potential")
plt.xlabel("$x$ [m]")
plt.legend()
In [8]:
# Plot the lowest eigenenergies in eV (cell reconstructed; the range of 100 states is an assumption)
plt.plot(E[:100]/1.6E-19, '.')
plt.xlabel('$n$')
plt.ylabel('Energy (eV)')
Here is a quick question for the reader: why are $\psi_0(x)$ and $\psi_1(x)$ almost equal in the right part of the domain? Would we expect the same result for other pairs $(\psi_n, \psi_{n+1})$? Hint: nearly degenerate.
Finding the expansion coefficients
We now calculate the expansion coefficients as explained in the introduction and in the appendices.
In [9]:
psi_n = psi_n.astype(complex)
c = np.zeros(Ntot, dtype=complex)
for n in range(Ntot):
    c[n] = np.vdot(psi_n[:,n], Psi_0)
Computing $\Psi(x,t)$
Now, everything is set to compute the wave function at some arbitrary time $t$ given the inital condition and potential. To do this, we create a function performing the calculation as explained in the introduction and in the appendices.
In [10]:
def Psi(t, c, psi_n, E):
    """ Calculate the wave function at some time t given the
    expansion coefficients c, eigenstates psi_n and
    eigenenergies E.

    t     : float. Time
    c     : 1d array-like complex, len Ntot. Expansion coefficients
    psi_n : 2d array-like, shape (Ntot, Ntot). Eigenstates as columns
    E     : 1d array-like float, len Ntot. Eigenenergies

    Returns a numpy array of length Ntot: the wave function at time t.
    """
    # Sum c_n * psi_n(x) * exp(-i E_n t / hbar) over all eigenstates
    return np.dot(psi_n, c*np.exp(-1j*E*t/hbar))
Finding a suitable time step - Ehrenfest's theorem
To find a suitable time step $\Delta t$, we will be using Ehrenfest's theorem: the quantum mechanical expectation values obey the classical equations of motion. For zero potential, (the expectation value of) the particle will thus have a velocity $$v = \frac{p_0}{m} = \sqrt{\frac{2E_0}{m}}.$$ We will thus use $\Delta t \sim \sqrt{m/(2E_0)}\,\Delta x$.
Let us plot the result for some $t$'s!
In [11]:
dt = 250*dx*(m/(2*E0))**.5
nt = 5
for t in np.arange(0, nt*dt, dt):
    plt.plot(x, np.abs(Psi(t, c, psi_n, E))**2, label=r"$t=%.1e$ s"%(t))
plt.title("Wave function for different $t$")
plt.xlabel("$x$ [m]")
plt.ylabel(r"$|\Psi(x, t)|^2$")
plt.legend()
Tunneling, reflection and transmission
There are many things one can learn from this simple exercise. For example, note that we have used an energy that is higher than the potential barrier, $E_0>V_0$. In classical mechanics we would expect total transmission, but from the plot above, we see that there is a probability for reflection! On the other hand, if $E_0<V_0$ we would classically expect total reflection, but there is some probability for transmission (test for yourself)! This is called tunneling. These concepts are explained in more detail in our notebook on One-Dimensional Wave Packet Propagation, and the different probabilities are explicitly calculated.
Note how the wave function has a high peak at the barrier. This is again due to reflection and transmission. In quantum mechanics we will have some reflection both when the potential is lowered and raised (check for yourself with a potential well!). The peak is thus due to constructive interference between different parts of the wave function being reflected repeatedly.
Exercises and further work
Investigate the problem further by yourself!
• What are the advantages and disadvantages using this method (opposed to the more direct method used in our notebook on One-Dimensional Wave Packet Propagation)?
• Compute numerically the transmission and reflection coefficients for different barrier widths and different barriers (a minimal starting sketch follows this list).
• Implement periodic boundary conditions. (Hint: Take a look at the matrix in the appendices and consider the boundary condition at the edges. We need to add two new non-zero matrix elements, located in the upper right and lower left corners. What are they? Note that we also need to use a sparse matrix or a general eigenvalue solver, e.g. numpy.linalg.eigh.)
• Explain why we have dispersion of the wave packet (it is spreading out).
• Calculate (you can make approximations if necessary) how long it takes for the electron to pass the barrier, reflect off the right boundary, pass the barrier again and return to its initial position. Verify your calculations using the Python code in this notebook.
• Generalize the method to two dimensions. (Hint: Use the same finite difference method as in the appendix on the two-dimensional Schrödinger equation. For simplicity, use $\Delta x = \Delta y = h$. To write the resulting approximation as a matrix, use the reindexing $i,j\to i + (j-1)N$. Treat carefully the boundaries! The easiest boundary condition is probably the Dirichlet boundary condition.)
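As a starting point for the transmission and reflection exercise, here is a minimal sketch reusing the quantities defined above; it assumes that four time steps are roughly enough for the packet to clear the barrier without returning, which should be tuned by inspecting the plots:

t_final = 4*dt                            # assumed evolution time; adjust by eye
prob = np.abs(Psi(t_final, c, psi_n, E))**2
T = dx*np.sum(prob[N_sides + N:])         # probability to the right of the barrier
R = dx*np.sum(prob[:N_sides])             # probability to the left of the barrier
print("T = %.3f, R = %.3f, T + R = %.3f" % (T, R, T + R))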
Let us make an animation to visualize the propagating electron! It may also be instructive to calculate the probabilities for the particle to be in the different parts of the domain. When the particle has propagated through the barrier, the probability that the particle is on the right side of the barrier should be approximately equal to the transmission coefficient.
In [12]:
from matplotlib import animation
from IPython.display import HTML
plt.rcParams.update({'animation.html':'html5', 'savefig.dpi': 50})
def init_anim():
    """ Initialises the animation. """
    global ax, line, textbox
    line, = ax.plot([], [])
    ax.set_xlim([0, dx*Ntot])
    ax.set_ylim([0, 4*np.max(np.abs(Psi_0)**2)])
    ax.set_title('Numerical simulation')
    # A text box that will display the probability for different parts of the domain
    # (the box style is an assumption; the original definition of `props` was lost)
    props = dict(boxstyle='round', facecolor='white', alpha=0.8)
    textbox = ax.text(0.05, 0.95, '', transform=ax.transAxes, fontsize=25,
                      verticalalignment='top', bbox=props)
    return line, textbox

def animate(i):
    """ Animation function. Being called repeatedly. """
    global ax, line, textbox
    prob = np.abs(Psi(i*dt, c, psi_n, E))**2
    line.set_data(x, prob)
    left_text = "Left side: %.4f\n"%(dx*np.sum(prob[0:N_sides]))
    barrier_text = "Barrier: %.4f\n"%(dx*np.sum(prob[N_sides:N_sides+N]))
    norm_text = "Normalization: %.4f\n"%(dx*np.sum(prob))
    right_text = "Right side: %.4f\n"%(dx*np.sum(prob[-N_sides:]))
    textbox.set_text(left_text + barrier_text + right_text + norm_text)
    return line, textbox

# Run the simulation and visualize the system as an animation.
fig, ax = plt.subplots()
h_anim = animation.FuncAnimation(fig, animate, init_func=init_anim, frames=1000, interval=20, blit=True)
h_anim
NASA Gravity Probe Confirms Two Einstein Predictions
Posted by samzenpus
from the I-hope-it-feels-so-good-to-be-right dept.
sanzibar writes "After 52 years of conceiving, testing and waiting, marked by scientific advances and disappointments, one of Stanford's and NASA's longest-running projects comes to a close with a greater understanding of the universe. Stanford and NASA researchers have confirmed two predictions of Albert Einstein's general theory of relativity, concluding one of the space agency's longest-running projects. Known as Gravity Probe B, the experiment used four ultra-precise gyroscopes housed in a satellite to measure two aspects of Einstein's theory about gravity. The first is the geodetic effect, or the warping of space and time around a gravitational body. The second is frame-dragging, which is the amount a spinning object pulls space and time with it as it rotates."
• by Anonymous Coward on Thursday May 05, 2011 @04:53AM (#36033044)
Please, can somebody restore the fortune database? Thanks.
Uh, and First Post.
• Re: (Score:1, Offtopic)
by hcpxvi (773888)
Uh, what he said. I'd mod him up if I had any mod points. Not that I have had any for months, despite excellent karma. The new Slashdot: too buggy to be fit for purpose.
• by rhook (943951) on Thursday May 05, 2011 @06:16AM (#36033308)
The new Slashdot: too buggy to be fit for purpose.
I have to agree with this, several bugs. The most annoying one is having the comments scroll to the top of the page when I click anything.
• by nanospook (521118)
I know this is off topic. Because I need glasses, I use the + and - keys in Opera to zoom the screen a bit. But now /. does something to ignore those keystrokes. I have to go to Options and toggle filter controls. It doesn't seem to matter if it's on or off; I have to just toggle it to another state. Then it works. A day or so later, I have to do it again..
• by Shippu (1888522)
• by amaupin (721551) on Thursday May 05, 2011 @09:48AM (#36034508) Homepage
Links are now unclickable, at least on the first 4 or 5 tries. Each time you click a link in someone's post, the page jumps and/or another post expands/collapses. The sheer level of ignorance and/or lack of interest in their own site on the part of the Slashdot owners is mind-boggling.
(Click on links? I must be new here.)
Seriously, Slashdot, fix your goddam site.
• by Ogive17 (691899)
I'm curious why /. looks like shit while using IE8 or Firefox but looks pretty good on my Droid X's native browser. I was browsing from my phone during a phone conference yesterday and couldn't believe how functional the page looked.
• by Joce640k (829181)
Um, maybe the developer uses a Droid X for development work.
That would explain quite a lot actually...
• by Xacid (560407)
And here I thought it was just my fault for not using IE...
• by Hatta (162192)
Mark as untrusted.
Switch to classic discussion mode in your preferences.
• by JWW (79176)
Couldn't agree more.
EVERYTIME /. upgrades the first thing I do is go back and turn classic discussion mode back on.
• dont click anything. CmdrTaco --sent from my iPhone
• by sjwt (161428)
no, but I can link to the related saturday morning breakfast cereal comic.
This is why experimental scientists hate theoretical scientists []
• by dotancohen (1015143) on Thursday May 05, 2011 @08:47AM (#36033992) Homepage
Please, can somebody restore the fortune database? Thanks.
Uh, and First Post.
Restore it? It works fine for me, here:
In fact, I've been seeing that for a few days!
Protip: Say that quote while walking the halls. You will immediately know who your fellow /.ers are by the snickers. If your boss laughs, then you're in trouble.
Well, I'd laugh at that quote -- specifically, the presumptions it implies.
• Honey? (Score:4, Funny)
by mangu (126918) on Thursday May 05, 2011 @05:09AM (#36033112)
"Imagine the Earth as if it were immersed in honey," Francis Everitt, GP-B principal investigator at Stanford University in Palo Alto, Calif., said in a statement
Doh, this is Slashdot, we want a car analogy, please. And have the numerical results expressed in libraries of congress per football field. Thanks.
• OK, geodetic effect, check. Frame-dragging, check. Commence dev. project warp drives
• by roger_pasky (1429241) on Thursday May 05, 2011 @06:21AM (#36033332)
Agreed, make it so. Geordi, estimate development period from current stardate. Data, start doing some calculations. Wesley, contact Dr. Sheldon Cooper and piss him off.
• NASA and the USA (Score:5, Insightful)
by mustPushCart (1871520) on Thursday May 05, 2011 @05:22AM (#36033158)
I am not an American, but I have seen both the blue pearl image and the pale blue dot image. I have read about how long these projects have run and the astounding quality of the instruments that must be on satellites like these, along with the massive foresight it must have taken at launch time to make them relevant decades later. You can criticize the USA all you want for their wars, and I have heard some harsh criticism of NASA too, but the most astounding images and discoveries have always come from here, because they are at the pinnacle of space exploration. The world would be a lot less interesting if it wasn't for them.
• by Anonymous Coward
Have you seen the comments in TFA by this David de Hilster guy? What a fruitloop. Check out his picture []. Want some love particles, baby?
• by a_hanso (1891616) on Thursday May 05, 2011 @05:53AM (#36033256) Journal [] has a simple animation explaining the gravity probe B experiment.
• That's great... but given a quantum physics and that little bugger of a concept known as the observer effect (basically ALL experience is subjective to the observer - even scientific ones...) how do we know the results we are recording are actual vs what we believe we should be experiencing and therefore are willing to see? Sure I could be wrong in what I am saying, but let me know and I'll entertain it in my field of awareness as possibility and perhaps I'll experience it differently...or maybe not. ;) Y
• by sandytaru (1158959) on Thursday May 05, 2011 @08:40AM (#36033938) Journal
The effects of gravity are at macro scales, not quantum scales. From what I understand, the observer effect doesn't really kick in until you start talking about stuff smaller than atoms. The universe is a bit more well-behaved at scale sizes larger than an atom, where chemistry and classical physics kick in. Our other end of non-understanding doesn't start until you get to the very macro, all the dark matter and dark energy floating around out there that no one really knows anything about.
• by gman003 (1693318)
Exactly. Quantum mechanics only starts to be noticeable below ~50 nm or so. In contrast, gravity is normally only noticeable with objects best measured in yottagrams (that's "quintillions of tons," for those of us a bit fuzzy on the extreme SI prefixes).
Now, there's been a huge amount of speculation as to how the two combine, especially from theoretical physicists like Dr. Hawking. However, there have been absolutely no experiments in quantum gravity, for one simple reason: the only time you get that much
• In contrast, gravity is normally only noticeable with objects best measured in yottagrams
1.61lb is considerably less than a yottagram. Cavendish Experiment []
• by gman003 (1693318)
Yes, and that experiment required some of the greatest precision technologically possible at that time. I'm talking objects big enough that the force of gravity they exert is clearly and immediately obvious, just as I was talking about quantum effects only being clearly and immediately obvious below 50nm. You can certainly detect both phenomena at lower masses or greater distances, but that is hardly relevant to the discussion of practical effects.
• The effects of gravity are at macro scales, not quantum scales.
The effects are on all scales. Just because nobody can currently describe how a single photon warps space as it travels does not mean it does not occur. We know it does.
• by blueg3 (192743) on Thursday May 05, 2011 @08:55AM (#36034052)
That's not part of quantum mechanics at all. That's a gross generalization made philosophical that arose out of an actual quantum mechanical principle.
Measurement-related QM principles, like wavefunction collapse and the Heisenberg uncertainty principle, are only meaningful when what you're observing is the size and scale of a quantum state, which is very, very small. Gravitational effects are for the most part (and in this case) for large objects, where QM principles are unimportant.
• by qc_dk (734452)
And it could also be related to a gross misgeneralization of the theory of relativity, which basically states the exact opposite: that any careful observer in any frame of reference will agree on the value of the speed of light and the laws of physics. A better name would have been the theory of constancy.
• by honkycat (249849)
It depends on your perspective. It's "relativity" because most measurements you make *are* relative to your reference frame, only the speed of light (and various invariant quantities) are absolute.
The relativity that SR and GR deal with is different in kind than the "peculiarities" of quantum mechanics. And, the previous post was correct: the observation-related uncertainties of QM are (mostly) only important when systems get to microscopic scales. Yes, the same microscopic laws apply to macroscopic phys
• by blueg3 (192743)
Only observers in inertial reference frames agree on the laws of physics, no?
• by Anonymous Coward on Thursday May 05, 2011 @08:58AM (#36034090)
You need to actually study quantum physics if you want to talk about these things like an adult. It's obvious to everyone that HAS studied quantum physics that you're spouting nonsense and claiming that Science supports you. Quit watching "What the bleep do we know?". It's full of people lying to you to sell you an idea (and one scientist who was duped and every single quote taken out of context).
• by xehonk (930376)
The observer effect is not something specific to self-aware observers. It can simply be interaction with other matter - which has then "observed" the item in question.
Now with that out of the way, what you want to happen has no influence on what does happen. That's simply not what the observer effect is about.
• by tm2b (42473)
Sorry, you're making a comment on Quantum Mechanics. I am going to have to ask you to explain any version of a Schrödinger equation, or ask you to stop.
That should really be a law.
• I usually bow out of stories like this, but must make one comment:
Anybody who thinks time is important as a metric is seriously missing the point.
• ... but the Chinese are actively doing it - as seen here in 2007 [].
Sometimes we need to just shut up and do it, else we'll have deja vu like solar energy [] or nuclear power []
• by cephus440 (828210)
I'm sorry, I posted this comment to the wrong article... sigh.
• by fotoguzzi (230256)
But your first post got Score:1 and your second got Score:2. I think the day is about here when the long-running two-million-monkey experiment that is Slashdot will be shut down.
Oh, and thank you, Dr. Einstein, for thinking about this stuff and putting it in a form that could be challenged experimentally.
• Finally I can put an end to all of those naysayers of gravitation theory!
Look - it's just a THEORY - you admitted it yourself right in your post. Go find some facts and get back with me. I've got a Bible full of them right here at my desk, and there isn't a single mention of gravity. I can't believe you're still blathering on about this... ;-)
• Now if I recall correctly, they were also looking for the existence of gravitational waves.. which they.. didn't find.. correct?
• by Greyfox (87712) on Thursday May 05, 2011 @09:21AM (#36034254) Homepage Journal
Relativity and black holes look like bugs in a not-very-well thought-out physics simulation. This sort of thing makes me wonder if the universe isn't just some extra-dimensional college kid's thesis project on how to find the best way to turn hydrogen into plutonium.
• by StikyPad (445176)
In the beginning, Bob created the heavens and the earth. But his emulation of Newtonian physics was but partially implemented, and so he only got a B-.
• by qc_dk (734452)
Dear Mr. 94343,
I would like to thank you for considering our illustrious institution. I regret to inform you, however,
that you have not been accepted to our "Universe creation and its applications" Ph.D. programme.
While your admission project did indeed show a lot of practical skill and hard effort, we believe your theoretical understanding is somewhat deficient.
We asked for the best way to turn hydrogen into plutonium, not iron.
We encourage you to take another year of theoretical physics, and to reapply for t
• When I read something like "confirms Einstein's theory" AGAIN I just get annoyed. In my opinion, the mission would only be a success if it found a flaw in Einstein's theories. Those theories are many decades old and I'm hungry for some totally new physics.
I get so disappointed when I hear that the Pioneer mystery (or whichever one was curving unexpectedly) is solved using perfectly well known physics. Where are the new unknown rules that we can use to create new breakthrough technologies?
• by arisvega (1414195)
From an extra-dimensional point of view, Hydrogen may as well already be Plutonium.
• However, the Stanford satellite is supposedly ten times more accurate.
• Why it took 52 years (Score:5, Interesting)
by rotenberry (3487) on Thursday May 05, 2011 @10:39AM (#36035148)
From what I have heard, the reason it took 52 years to get this spacecraft into space was political, not technical.
There is no doubt that the technology developed to measure these parameters is very impressive. The real question is whether or not it was worth the effort.
When I was at JPL in the 1980s a person who had published numerous papers in both experimental and theoretical relativity explained why scientists within the space program were not supporting this project. Since this conversation took place thirty years ago I must paraphrase:
"No modern theory of gravity predicts anything else, and if the measurements showed anything but the predicted results it would be assumed to be an experimental error. Unlike the technology used to search for gravitational radiation (which is also used to study the atmospheres of planets), the hardware in this spacecraft cannot be used for any other scientific experiment."
So for 52 years the money has been used for other science. For a much more worthy project read about the recently canceled LISA project.
If you wish to read about the politics of how a science project is chosen by NASA, I can think of no better description than Steven W. Squyres' "Roving Mars," where he describes how the Mars Rovers were nearly canceled.
• by radtea (464814) on Thursday May 05, 2011 @11:48AM (#36035984)
No modern theory of gravity predicts anything else
Except Moffat's, of course.
And while every experimental anomaly is first dismissed as error, the fact (you remember those things, facts?) is that scientists have an excellent record of poking away at anomalies until a robust, consistent explanation is found. Sometimes the explanation is mundane--the Pioneer Anomaly, for example. Sometimes it is profound--the anomalous precession of the orbit of Mercury comes to mind, which was measured quite precisely in the 1850's, if I recall correctly, some sixty years before the underlying cause was found.
People who say things like this are simply ignorant of the history and timescales on which science actually operates. It is entirely implausible that a group of people who have collectively worked over hundreds of years to account for dozens of tiny numerical anomalies in extremely difficult precision measurements would suddenly throw up their hands and say, "OK, I guess we can ignore the data now!"
• by Anonymous Coward
Like everything else, science does not have access to infinite resources. However, posts such as yours remind us there is an infinite amount of testing to do. For example, we could pose the question of whether or not a ball and a feather fall at the same rate as each other on Pluto, if dropped simultaneously. In the case where our need for resources outpaces our access to them, we must prioritize what is important.
One way of doing this is time and potential for payoff. Consider how many years the hypothetic
Very likely, but nobody would have been absolutely sure. Physicists would have looked at possible theories that were in accordance with the experimental results, and come up with other tests.
The Michelson-Morley experiment was similar in effect. People thought it very odd that it didn't show ether drift, but the theories were firmly established, and so physicists kept worrying at it. More expe
• by Chris Burke (6130)
They cancelled LISA?! D=
If it's because there's no room in the budget for LISA and a shuttle-derived heavy-lift vehicle, I'm personally going to go kick a bunch of congresscritters in the jewels.
• by equex (747231)
Sometimes I wonder if these great minds that pop up from time to time (Newton, Copernicus, Einstein etc.) are really one of us. It's funny how they appear, completely revolutionize a field or offer a world-changing new perspective and then disappear, just to have us mere mortals work for years and decades to understand, confirm and accept it. Applause again for Einstein, you are a bit creepy to be completely honest.
• My understanding was that (satellite-based) GPS would give you a drastically inaccurate position reading without an algorithmic correction for frame-dragging. If so, it would seem that part of Einstein's predictions were validated quite a few years ago.
• by Strider- (39683) on Thursday May 05, 2011 @01:40PM (#36037490)
No, GPS does take General Relativity and Special Relativity into account, and confirms both nicely. Due to the motion of the spacecraft in orbit with respect to us on the ground, one would expect the GPS satellites to lose about 7 microseconds a day. However, because the satellites are further out of our gravity well, General Relativity predicts the satellites will gain about 45 microseconds a day. Basically, this means that if GR and SR were not taken into account, the GPS system would be useless after about 2 minutes.
Source: []
However, the effect of Frame Dragging is many orders of magnitude smaller, to the point where it will not have a measurable effect on GPS. To even have a hope of measuring it, Gravity Probe B had gyroscopes made from a set of the most perfect spheres ever manufactured. If you were to scale these spheres up to the size of the earth, the tallest mountain would be less than 1 meter tall.
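The 7 and 45 microsecond figures above can be sanity-checked with a back-of-the-envelope calculation. A minimal sketch follows; the orbital radius and Earth parameters are assumed textbook values, not numbers taken from the post:

```python
# Back-of-the-envelope check of the GPS clock rates quoted above.
# Constants are assumed textbook values, not taken from the post.
GM = 3.986004e14         # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
R_GPS = 2.656e7          # GPS orbital radius (~20,200 km altitude), m
C = 2.99792458e8         # speed of light, m/s
DAY = 86400.0            # seconds per day

v2 = GM / R_GPS                                      # circular-orbit speed squared
sr_loss = v2 / (2 * C**2) * DAY                      # special-relativistic slowdown
gr_gain = GM * (1/R_EARTH - 1/R_GPS) / C**2 * DAY    # gravitational speedup

print(f"SR loss: {sr_loss * 1e6:.1f} us/day")        # ~7 us/day
print(f"GR gain: {gr_gain * 1e6:.1f} us/day")        # ~46 us/day
print(f"net:     {(gr_gain - sr_loss) * 1e6:.1f} us/day")
# Net drift of ~38 us/day maps to c * 38e-6 s, roughly 11 km of ranging
# error per day, which is why uncorrected GPS degrades within minutes.
```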
• by Required Snark (1702878) on Thursday May 05, 2011 @02:40PM (#36038456)
According to this paper [] the Gravity Probe B experiment results were not very useful.
The goal was to get numerical results to 1% accuracy, and the actual measurements only achieved 19% accuracy. This was due to a design error.
On top of that, other researchers made better measurements using other much cheaper satellites.
So they got scooped and their final results were not what they had planned. Not a complete failure, but not a real success either.
• This is cool news! When I first got deep into physics, I often considered the idea of "a hot air balloon floating (not) around an earth without an atmosphere", and "would the balloon be dragged around the planet as it rotates (by gravity)?"; now I feel satisfied that I know the answer!
Which leads to the next question:
If you took our solar system and placed it at the most significant Lagrange point between two galaxies, would our understanding of physical constants change? ;) And also the intermediary
|
eb8a32fedc5513b5 | New Quantum Exchange collection resources. The latest material additions to the Quantum Exchange. (en-US; Copyright 2015; Mon, 23 Mar 2015 21:48:46 EST)

Developing a quantum mechanics concept inventory
This paper describes the process of writing a quantum mechanics concept inventory concerning one-dimensional potential barriers, tunneling and probability distributions. It also explores some of the related alternative conceptions, and presents the results of 216 inventory questionnaires distributed to four groups of students. One main result is that the question context is important for the models used when answering. The survey results show the alternative conceptions of energy loss due to tunneling and the view of a probability density peak as an indivisible entity are quite common (about 40 and 30 percent of the students, respectively).
Quantum Physics/Probability, Waves, and Interference. Mon, 23 Mar 2015 21:48:46 EST

Physlets Quantum Physics
Physlet Quantum Physics 2E contains a collection of exercises about concepts from modern and quantum physics, facilitated by computer animations. Topics include special relativity, quantum experiments, quantum theory, and applications. Chapters are divided into Illustrations, Explorations, and Problems. Illustrations are designed to demonstrate physical concepts. They are suitable for reading assignments prior to class and classroom demonstrations. Explorations are tutorial in nature. They provide hints or suggest problem-solving strategies to students in working problems and are useful as Just-in-Time Teaching exercises. Problems are interactive versions of the kind of exercises typically assigned for homework. They require the students to demonstrate their understanding, such as in homework assignments.
Quantum Physics/General. Sat, 15 Nov 2014 00:59:54 EST

Tutorials in Physics: Quantum Mechanics
This web site provides access to a set of student tutorials designed to supplement lectures and textbooks in quantum mechanics. The tutorials are most suitable for courses in which there is an opportunity for students to work together in small groups; however, they can also be adapted for use in large, lecture hall settings. Carefully sequenced exercises and questions engage students in the type of active intellectual involvement that is necessary for developing a functional understanding of physics. The website contains resources for instructors, including sample pretests, post-tests, examination questions, suggestions for preparing Teaching Assistants, as well as details about the individual tutorials.
Quantum Physics/General. Fri, 06 Jun 2014 17:07:58 EST

SEI: Modern Physics Course Materials
This website contains lectures, homework, exams, and other materials for a PER-based large lecture modern physics course for engineering majors. This course has redesigned content and learning techniques focused on topics more useful to engineering majors than a traditional modern physics course. The course emphasizes reasoning development, model building, and connections to real world applications. A variety of PER-based learning resources, including peer instruction, collaborative homework sessions, and interactive simulations, are used. Research results on learning outcomes from the course are included.
Modern Physics/General. Thu, 05 Jun 2014 09:07:01 EST

Exploring Student Understanding of Energy through the Quantum Mechanics Conceptual Survey
We present a study of student understanding of energy in quantum mechanical tunneling and barrier penetration. This paper will focus on student responses to two questions that were part of a test given in class to two modern physics classes and in individual interviews with 17 students. The test, which we refer to as the Quantum Mechanics Conceptual Survey (QMCS), is being developed to measure student understanding of basic concepts in quantum mechanics. In this paper we explore and clarify the previously reported misconception that reflection from a barrier is due to particles having a range of energies rather than wave properties. We also confirm previous studies reporting the student misconception that energy is lost in tunneling, and report a misconception not previously reported, that potential energy diagrams shown in tunneling problems do not represent the potential energy of the particle itself. The present work is part of a much larger study of student understanding of quantum mechanics.
Quantum Physics/Scattering and Continuum State Systems/Transmission and Reflection. Wed, 09 Apr 2014 21:28:17 EST

Photoelectric Effect Model
The EJS Photoelectric Effect model simulates the photoelectric effect discovered by Hertz in 1887 and described theoretically by Einstein in 1905. Light of a given frequency (energy) shines on a metal in a vacuum tube. If the energy of the photons is greater than the work function of the metal, W, electrons are ejected and can form a current in an external circuit. These photoelectrons will have a kinetic energy if the energy of the light is greater than the work function. If subjected to an electric potential between the plates in the tube, the electrons excited from the metal will be accelerated, resulting in an increase, decrease, or stopping of the current. This model provides controls for the frequency of the light source and the external potential on the electron tube. An ammeter allows users to take data for the photo-current. (A short numeric sketch of this energy balance appears below.) The EJS Photoelectric Effect model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_qm_photoelectric.jar file will run the program if Java is installed.
Quantum Physics/Quantum Experiments. Wed, 09 Apr 2014 21:01:41 EST

The Transactional Interpretation of Quantum Mechanics
This article introduces an interpretation of the formalism of quantum mechanics, the Transactional Interpretation (TI), which addresses some issues raised by recent tests of Bell's inequalities. TI is non-local, relativistically invariant, and fully causal. A detailed comparison is made with the Copenhagen interpretation. Also, there is a link providing articles that have cited this one.
Quantum Physics/Foundations and Measurements. Fri, 08 Nov 2013 09:30:39 EST

Interactive Learning Tutorials on Quantum Mechanics
We discuss the development and evaluation of quantum interactive learning tutorials (QuILTs), which are suitable for undergraduate courses in quantum mechanics. QuILTs are based on the investigation of student difficulties in learning quantum physics. They exploit computer-based visualization tools and help students build links between the formal and conceptual aspects of quantum physics without compromising the technical content. They can be used both as supplements to lectures or as self-study tools.
Quantum Physics/General. Thu, 31 Oct 2013 17:20:04 EST
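As a companion to the Photoelectric Effect Model entry above, here is a minimal numeric sketch of the energy balance the simulation embodies; the 2.3 eV work function is an assumed, sodium-like value rather than anything specified by the model:

```python
# Energy balance behind the simulation: KE_max = h*f - W, zero current below threshold.
H_EV = 4.135667696e-15      # Planck constant, eV*s
C = 2.99792458e8            # speed of light, m/s
W_EV = 2.3                  # assumed work function (sodium-like), eV

def ke_max_ev(wavelength_nm):
    photon_ev = H_EV * C / (wavelength_nm * 1e-9)   # photon energy in eV
    return max(photon_ev - W_EV, 0.0)               # no emission below threshold

for lam in (650, 540, 400, 250):                    # red through ultraviolet
    ke = ke_max_ev(lam)
    # The stopping potential in volts numerically equals KE_max in eV.
    print(f"{lam} nm: KE_max = {ke:.2f} eV, stopping potential = {ke:.2f} V")
```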
Improving student understanding of addition of angular momentum in quantum mechanics
We describe the difficulties advanced undergraduate and graduate students have with concepts related to addition of angular momentum in quantum mechanics. We also describe the development and implementation of a research-based learning tool, Quantum Interactive Learning Tutorial (QuILT), to reduce these difficulties. The preliminary evaluation shows that the QuILT related to the basics of the addition of angular momentum is helpful in improving students' understanding of these concepts.
Quantum Physics/Symmetries in Quantum Mechanics. Tue, 18 Jun 2013 22:28:14 EST

EPR/Bell's Theorem
This is a simulation of the simplified EPR-like experiment described by David Mermin's 1981 AJP article (N. D. Mermin, "Bringing home the atomic world: Quantum mysteries for anybody," Am. J. Phys. 49, 940-943 (1981)). The program has internal documentation.
Quantum Physics/Entanglement and Quantum Information. Tue, 18 Jun 2013 22:23:02 EST

Improving Students' Understanding of Quantum Mechanics
Learning physics is challenging at all levels. Students' difficulties in the introductory level physics courses have been widely studied and many instructional strategies have been developed to help students learn introductory physics. However, research shows that there is a large diversity in students' preparation and skills in the upper-level physics courses and it is necessary to provide scaffolding support to help students learn advanced physics. This thesis explores issues related to students' common difficulties in learning upper-level undergraduate quantum mechanics and how these difficulties can be reduced by research-based learning tutorials and peer instruction tools. We investigated students' difficulties in learning quantum mechanics by administering written tests and surveys to many classes and conducting individual interviews with a subset of students. Based on these investigations, we developed Quantum Interactive Learning Tutorials (QuILTs) and peer instruction tools to help students build a hierarchical knowledge structure of quantum mechanics through a guided approach. Preliminary assessments indicate that students' understanding of quantum mechanics is improved after using the research-based learning tools in the junior-senior level quantum mechanics courses. We also designed a standardized conceptual survey that can help instructors better probe students' understanding of quantum mechanics concepts in one spatial dimension. The validity and reliability of this quantum mechanics survey is discussed.
Education Practices/Instructional Material Design/Tutorial. Wed, 20 Feb 2013 18:44:05 EST

Numerical Solutions to the Schrödinger Equation
This Mathematica Notebook provides an introduction to computational methods for studying quantum mechanical systems. Examples given are one dimensional. Studying quantum mechanics in one dimension allows the student to learn the basics of quantum mechanics and to develop an intuition without some of the mathematical complexities present in three dimensions.
Quantum Physics/Approximation Techniques. Wed, 20 Feb 2013 18:35:09 EST

Improving students' understanding of quantum measurement. I. Investigation of difficulties
We describe the difficulties that advanced undergraduate and graduate students have with quantum measurement within the standard interpretation of quantum mechanics. We explore the possible origins of these difficulties by analyzing student responses to questions from both surveys and interviews. Results from this research are applied to develop research-based learning tutorials to improve students' understanding of quantum measurement.
Quantum Physics/Foundations and Measurements. Wed, 20 Feb 2013 18:25:14 EST

Improving students' understanding of quantum measurement. II. Development of research-based learning tools
We describe the development and implementation of research-based learning tools such as the Quantum Interactive Learning Tutorials and peer-instruction tools to reduce students' common difficulties with issues related to measurement in quantum mechanics. A preliminary evaluation shows that these learning tools are effective in improving students' understanding of concepts related to quantum measurement.
Quantum Physics/Foundations and Measurements. Wed, 20 Feb 2013 18:24:35 EST

Categorization of quantum mechanics problems by professors and students
We discuss the categorization of 20 quantum mechanics problems by physics professors and undergraduate students from two honours-level quantum mechanics courses. Professors and students were asked to categorize the problems based upon similarity of solution. We also had individual discussions with professors who categorized the problems. Faculty members' categorizations were overall rated higher than those of students by three faculty members who evaluated all of the categorizations. The categories created by faculty members were more diverse compared to the categories they created for a set of introductory mechanics problems. Some faculty members noted that the categorization of introductory physics problems often involves identifying fundamental principles relevant for the problem, whereas in upper-level undergraduate quantum mechanics problems, it mainly involves identifying concepts and procedures required to solve the problem. Moreover, physics faculty members who evaluated others' categorizations expressed that the task was very challenging and they sometimes found another person's categorization to be better than their own. They also rated some concrete categories such as 'hydrogen atom' or 'simple harmonic oscillator' higher than other concrete categories such as 'infinite square well' or 'free particle'.
Quantum Physics/General. Tue, 23 Oct 2012 08:08:19 EST

Quantum Mechanics Survey (QMS)
This 31-question research-based multiple-choice test is designed to evaluate students' conceptual understanding of quantum mechanics in junior-level courses. The survey is based on investigations of students' difficulties in quantum mechanics and should be given in a 50-minute period. Statistical results have shown the survey to be reliable and valid. A summary of the construction and analysis of the survey is available in "Surveying students' understanding of quantum mechanics in one spatial dimension," Am. J. Phys. 80 (3), 252-259. This assessment is free for use by instructors in their classroom. As it takes years of development effort to create and validate reliable assessment instruments, access is restricted to instructors and researchers.
Quantum Physics/General. Tue, 23 Oct 2012 08:06:44 EST

Improving Students' Understanding of Quantum Mechanics
Richard Feynman once famously stated that nobody understands quantum mechanics. He was, of course, referring to the many strange, unintuitive foundational aspects of quantum theory such as its inherent indeterminism and state reduction during measurement according to the Copenhagen interpretation. But despite its underlying fundamental mysteries, the theory has remained a cornerstone of modern physics. Most physicists, as students, are introduced to quantum mechanics in a modern-physics course, take quantum mechanics as advanced undergraduates, and then take it again in their first year of graduate school. One might think that after all this instruction, students would have become certified quantum mechanics, able to solve the Schrödinger equation, manipulate Dirac bras and kets, calculate expectation values, and, most importantly, interpret their results in terms of real or thought experiments. That sort of functional understanding of quantum mechanics is quite distinct from the foundational issues alluded to by Feynman.
Education Practices/Instructional Material Design. Tue, 23 Oct 2012 08:04:02 EST

Perspectives in Quantum Physics: Epistemological, Ontological and Pedagogical
A common learning goal for modern physics instructors is for students to recognize a difference between the experimental uncertainty of classical physics and the fundamental uncertainty of quantum mechanics. Our studies suggest this notoriously difficult task may be frustrated by the intuitively realist perspectives of introductory students, and a lack of ontological flexibility in their conceptions of light and matter. We have developed a framework for understanding and characterizing student perspectives on the physical interpretation of quantum mechanics, and demonstrate the differential impact on student thinking of the myriad ways instructors approach interpretive themes in their introductory courses. Like expert physicists, students interpret quantum phenomena differently, and these interpretations are significantly influenced by their overall stances on questions central to the so-called measurement problem: Is the wave function physically real, or simply a mathematical tool? Is the collapse of the wave function an ad hoc rule, or a physical transition not described by any equation? Does an electron, being a form of matter, exist as a localized particle at all times? These questions, which are of personal and academic interest to our students, are largely only superficially addressed in our introductory courses, often for fear of opening a Pandora's Box of student questions, none of which have easy answers. We show how a transformed modern physics curriculum (recently implemented at the University of Colorado) may positively impact student perspectives on indeterminacy and wave-particle duality, by making questions of classical and quantum reality a central theme of our course, but also by making the beliefs of our students, and not just those of scientists, an explicit topic of discussion.
Quantum Physics/Foundations and Measurements. Tue, 29 May 2012 11:03:07 EST

QuVis: Non-interacting Particles in an Infinite Well
This animation shows a system of N non-interacting Bose or Fermi particles in an infinitely deep square well. The total energy determines the distribution of the individual particles across energy levels. Users can choose the type and number of particles in the well and the total energy of the system. This animation includes a step-by-step exploration that explains key points in detail. This animation is part of a collection of animations for the teaching of concepts in quantum mechanics.
Quantum Physics/Multi-particle Systems. Wed, 18 Jan 2012 10:13:12 EST

Assessing and improving student understanding of quantum mechanics
We developed a survey to probe student understanding of quantum mechanics concepts at the beginning of graduate instruction. The survey was administered to 202 graduate students in physics enrolled in first-year quantum mechanics courses from seven different universities at the beginning of the first semester. We also conducted one-on-one interviews with fifteen graduate students or advanced undergraduate students who had just finished a course in which all the content on the survey was covered. We find that students share universal difficulties about fundamental quantum mechanics concepts. The difficulties are often due to over-generalization of concepts learned in one context to other contexts where they are not directly applicable and difficulty in making sense of the abstract quantitative formalism of quantum mechanics. Instructional strategies that focus on improving student understanding of these concepts should take into account these difficulties. The results from this study can sensitize instructors of first-year graduate quantum physics to the conceptual difficulties students are likely to face.
Quantum Physics/General. Fri, 18 Nov 2011 09:38:59 EST |
ec5228c9f00b1048 |
For positive $a$, what are the quantization conditions for an exponential potential?
$$ - \frac{d^{2}}{dx^{2}}y(x)+ ae^{|x|}y(x)=E_{n}y(x) $$ with boundary conditions $$ y(0)=0=y(\infty) $$ I believe that the energies $ E_{n} $ will be positive and real
I have read a similar paper: P. Amore, F. M. Fernández. Accurate calculation of the complex eigenvalues of the Schrödinger equation with an exponential potential. Physics Letters A 372 (2008), pp. 3149–3152. doi:10.1016/j.physleta.2008.01.053, arXiv:0712.3375 [math-ph].
But I get this strange quantization condition
$$ J_{2i\sqrt{E_{n}}}(\sqrt{-a})=0 $$
However, in the case $ a > 0 $, how can I handle this?
Maybe you can show how you got to your quantization condition? – Bernhard Dec 18 '12 at 11:50
The quantization condition is explained in the paper: due to the condition $ y(0)=0 $ you get the quantization condition, in a similar way to the Airy function for the potential $ V(x)=x $ – Jose Javier Garcia Dec 18 '12 at 11:53
Bessel functions of imaginary order and argument are relatively hard to manage but this DLMF section may be of help. If everything is done correctly then I would not be surprised by imaginary order and imaginary argument yielding real roots for $E_n$. – Emilio Pisanty Dec 18 '12 at 13:11
The potential must be attractive to have positive $E_n$. For a repulsive potential one can get quantized $E_n$, but they may become negative and unbounded from below. – Vladimir Kalitvianski Dec 18 '12 at 15:29
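Not part of the thread, but one way to make the repulsive case $a > 0$ concrete: the substitution $t = 2\sqrt{a}\,e^{x/2}$ maps the equation onto the modified Bessel equation of order $\nu = 2i\sqrt{E}$, whose solution decaying at infinity is $K_\nu(t)$, so $y(0)=0$ gives $K_{2i\sqrt{E_n}}(2\sqrt{a}) = 0$. Since $K_{i\nu}(x)$ is real for real $x > 0$ (per the DLMF section linked in the comments), the roots can be found by a sign-change scan. A minimal sketch with an assumed $a = 1$:

```python
# Sketch: roots of K_{2i*sqrt(E)}(2*sqrt(a)) = 0 for the repulsive case a > 0.
# This follows from t = 2*sqrt(a)*exp(x/2), which maps -y'' + a*exp(x)*y = E*y
# onto the modified Bessel equation of order nu = 2i*sqrt(E).
import mpmath as mp

a = mp.mpf(1)   # assumed illustrative potential strength

def f(E):
    # K of purely imaginary order is real for real positive argument (DLMF 10.45)
    return mp.re(mp.besselk(2j * mp.sqrt(E), 2 * mp.sqrt(a)))

grid = [mp.mpf('0.25') * k for k in range(1, 400)]
roots = []
for E1, E2 in zip(grid, grid[1:]):
    if f(E1) * f(E2) < 0:                        # bracketed sign change
        roots.append(mp.findroot(f, (E1 + E2) / 2))
print([mp.nstr(r, 8) for r in roots[:5]])        # first few E_n, real and positive
```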
|
ef7f8c5a262c8f4a |
Most English definitions are provided by WordNet.
English Encyclopedia is licensed by Wikipedia (GNU).
electron (n.) elementary particle with negative charge
Merriam Webster
1. Amber; also, the alloy of gold and silver, called electrum. [archaic]
electron (n.)
atom, negatron, particle
see also
electron (n.)
-Bacterial Electron Transport Chain Complex Proteins • Bacterial Electron Transport Complex I • Bacterial Electron Transport Complex II • Bacterial Electron Transport Complex III • Bacterial Electron Transport Complex IV • Cryo-electron Microscopy • Electron Beam Computed Tomography • Electron Beam Tomography • Electron Cryomicroscopy • Electron Diffraction Microscopy • Electron Energy-Loss Spectroscopy • Electron Microscopy • Electron Microscopy, Scanning Transmission • Electron Microscopy, Transmission • Electron Nuclear Double Resonance • Electron Paramagnetic Resonance • Electron Probe Microanalysis • Electron Scanning Microscopy • Electron Spectroscopic Imaging • Electron Spin Resonance • Electron Spin Resonance Spectroscopy • Electron Transfer Flavoprotein • Electron Transfer Flavoprotein Alpha Subunit Deficiency • Electron Transfer Flavoprotein Beta Subunit Deficiency • Electron Transfer Flavoprotein Deficiency • Electron Transfer Flavoprotein Dehydrogenase Deficiency • Electron Transport • Electron Transport Chain Complex Proteins • Electron Transport Chain Deficiencies, Mitochondrial • Electron Transport Complex I • Electron Transport Complex II • Electron Transport Complex III • Electron Transport Complex IV • Electron-Transferring Flavoproteins • Microanalysis, Electron Probe • Microscopy, Electron • Microscopy, Electron Diffraction • Microscopy, Electron, Scanning • Microscopy, Electron, Scanning Transmission • Microscopy, Electron, Transmission • Microscopy, Electron, X-Ray Microanalysis • Mitochondrial Electron Transport Chain Complex Proteins • Mitochondrial Electron Transport Chain Deficiencies • Mitochondrial Electron Transport Complex I • Mitochondrial Electron Transport Complex II • Mitochondrial Electron Transport Complex III • Mitochondrial Electron Transport Complex IV • Scanning Electron Microscopy • Spectroscopy, Electron Energy-Loss • Transmission Electron Microscopy • X-Ray Microanalysis, Electron Microscopic • X-Ray Microanalysis, Electron Probe • electron accelerator • electron beam • electron dot structure • electron gun • electron lens • electron microscope • electron microscopic • electron microscopy • electron multiplier • electron multiplier tube • electron optics • electron orbit • electron paramagnetic resonance • electron radiation • electron shell • electron spin resonance • electron tube • electron volt • free electron • unbound electron • valence electron
analogical dictionary
MeSH root [Theme]
electron [MeSH]
Experiments with a Crookes tube first demonstrated the particle nature of electrons. In this illustration, the profile of the cross-shaped target is projected against the tube face at right by a beam of electrons.[1]
Composition: Elementary particle[2]
Statistics: Fermionic
Generation: First
Interactions: Gravity, Electromagnetic, Weak
Symbol: e−, β−
Antiparticle: Positron (also called antielectron)
Theorized: Richard Laming (1838–1851),[3] G. Johnstone Stoney (1874) and others.[4][5]
Discovered: J. J. Thomson (1897)[6]
Mass: 9.10938291(40)×10−31 kg[7] = 5.4857990946(22)×10−4 u[7] = [1,822.8884845(14)]−1 u[note 1] = 0.510998928(11) MeV/c2[7]
Electric charge: −1 e[note 2] = −1.602176565(35)×10−19 C[7] = −4.80320451(10)×10−10 esu
Magnetic moment: −1.00115965218076(27) μB[7]
Spin: 1/2
The electron (symbol: e−) is a subatomic particle with a negative elementary electric charge.[8] It has no known components or substructure; in other words, it is generally thought to be an elementary particle.[2] An electron has a mass that is approximately 1/1836 that of the proton.[9] The intrinsic angular momentum (spin) of the electron is a half-integer value in units of ħ, which means that it is a fermion. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles may be totally annihilated, producing gamma ray photons. Electrons, which belong to the first generation of the lepton particle family,[10] participate in gravitational, electromagnetic and weak interactions.[11] Electrons, like all matter, have quantum mechanical properties of both particles and waves, so they can collide with other particles and can be diffracted like light. However, this duality is best demonstrated in experiments with electrons, due to their tiny mass. Since an electron is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle.[10]
The concept of an indivisible quantity of electric charge was theorized to explain the chemical properties of atoms, beginning in 1838 by British natural philosopher Richard Laming;[4] the name electron was introduced for this charge in 1894 by Irish physicist George Johnstone Stoney. The electron was identified as a particle in 1897 by J. J. Thomson and his team of British physicists.[6][12][13]
In many physical phenomena, such as electricity, magnetism, and thermal conductivity, electrons play an essential role. An electron in motion relative to an observer generates a magnetic field, and will be deflected by external magnetic fields. When an electron is accelerated, it can absorb or radiate energy in the form of photons. Electrons, together with atomic nuclei made of protons and neutrons, make up atoms. However, electrons contribute less than 0.06% to an atom's total mass. The attractive Coulomb force between an electron and a proton causes electrons to be bound into atoms. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.[14]
According to theory, most electrons in the universe were created in the big bang, but they may also be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. Electrons may be destroyed through annihilation with positrons, and may be absorbed during nucleosynthesis in stars. Laboratory instruments are capable of containing and observing individual electrons as well as electron plasma, whereas dedicated telescopes can detect electron plasma in outer space. Electrons have many applications, including welding, cathode ray tubes, electron microscopes, radiation therapy, lasers and particle accelerators.
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Apart from lightning, this phenomenon is humanity's earliest recorded experience with electricity.[15] In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus, to refer to this property of attracting small objects after being rubbed.[16] Both electric and electricity are derived from the Latin ēlectrum (also the root of the alloy of the same name), which came from the Greek word for amber, ήλεκτρον (ēlektron).
Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges.[3] Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis.[20] However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".[4]
In 1894, Stoney coined the term electron to describe these elementary charges, saying, "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron".[21] The word electron is a combination of the word electric and the suffix -on, with the latter now used to designate a subatomic particle, such as a proton or neutron.[22][23]
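Stoney's route to the elementary charge, mentioned above, survives as a one-line calculation once Faraday's constant and Avogadro's number are known. A quick sketch with modern values (my own illustration, not his 1874 numbers):

```python
# Modern one-liner version of Stoney's estimate (illustrative values, not his):
# Faraday's constant F is the charge of one mole of monovalent ions, so e = F / N_A.
F = 96485.332        # C/mol, Faraday constant
N_A = 6.02214076e23  # 1/mol, Avogadro constant
print(F / N_A)       # ~1.602e-19 C, the elementary charge
```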
A beam of electrons deflected in a circle by a magnetic field[24]
The German physicist Johann Wilhelm Hittorf undertook the study of electrical conductivity in rarefied gases. In 1869, he discovered a glow emitted from the cathode that increased in size with decrease in gas pressure. In 1876, the German physicist Eugen Goldstein showed that the rays from this glow cast a shadow, and he dubbed the rays cathode rays.[25] During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside.[26] He then showed that the luminescence rays appearing within the tube carried energy and moved from the cathode to the anode. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged.[27][28] In 1879, he proposed that these properties could be explained by what he termed 'radiant matter'. He suggested that this was a fourth state of matter, consisting of negatively charged molecules that were being projected with high velocity from the cathode.[29]
The German-born British physicist Arthur Schuster expanded upon Crookes' experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given level of current, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time.[27][30]
In 1896, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson,[12] performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier.[6] Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles," had perhaps one thousandth of the mass of the least massive ion known: hydrogen.[6][13] He showed that their charge to mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal.[6][31] The name electron was again proposed for these particles by the Irish physicist George F. Fitzgerald, and the name has since gained universal acceptance.[27]
While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter.[32] In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays.[33] This evidence strengthened the view that electrons existed as components of atoms.[34][35]
The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team,[6] using clouds of charged water droplets generated by electrolysis,[12] and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913.[36] However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.[37]
Around the beginning of the twentieth century, it was found that under certain conditions a fast moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber, allowing the tracks of charged particles, such as quickly-moving electrons, to be photographed.[38]
Atomic theory
The Bohr model of the atom, showing states of electron with energy quantized by the number n. An electron dropping to a lower orbit emits a photon equal to the energy difference between the orbits.
By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons.[39] In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with the energy determined by the angular momentum of the electron's orbits about the nucleus. The electrons could move between these states, or orbits, by the emission or absorption of photons at specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom.[40] However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.[39]
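For context, Bohr's quantized orbits give energies E_n = −13.6 eV/n², and the hydrogen line positions follow from differences of these levels. A minimal check using modern constants (an illustration, not part of the original text):

```python
# Bohr-model line positions: E = 13.6057 eV * (1/n_f^2 - 1/n_i^2), lambda = h*c/E.
RY_EV = 13.6057      # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84   # h*c in eV*nm

def line_nm(n_i, n_f):
    # photon energy released when dropping from orbit n_i to n_f
    dE = RY_EV * (1.0 / n_f**2 - 1.0 / n_i**2)
    return HC_EV_NM / dE

print(line_nm(3, 2))   # Balmer-alpha: ~656 nm (red hydrogen line)
print(line_nm(2, 1))   # Lyman-alpha: ~122 nm (ultraviolet)
```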
Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them.[41] Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.[42] In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness".[43] The shells were, in turn, divided by him into a number of cells, each containing one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table,[42] which were known to largely repeat themselves according to the periodic law.[44]
In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was inhabited by no more than a single electron. (This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle.)[45] The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, Goudsmit and Uhlenbeck suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment.[39][46] The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.[47]
Quantum mechanics
In his 1924 dissertation Recherches sur la théorie des quanta (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter possesses a De Broglie wave similar to light.[48] That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment.[49] Wave-like nature is observed, for example, when a beam of light is passed through parallel slits and creates interference patterns. In 1927, the interference effect was demonstrated with a beam of electrons by English physicist George Paget Thomson with a thin metal film and by American physicists Clinton Davisson and Lester Germer using a crystal of nickel.[50]
In quantum mechanics, the behavior of an electron in an atom is described by an orbital, which is a probability distribution rather than an orbit. In the figure, the shading indicates the relative probability to "find" the electron, having the energy corresponding to the given quantum numbers, at that point.
The success of de Broglie's prediction led to the publication, by Erwin Schrödinger in 1926, of the Schrödinger equation that successfully describes how electron waves propagate.[51] Rather than yielding a solution that determines the location of an electron over time, this wave equation can be used to predict the probability of finding an electron near a position. This approach was later called quantum mechanics, which provided an extremely close derivation of the energy states of an electron in a hydrogen atom.[52] Once spin and the interaction between multiple electrons were considered, quantum mechanics allowed the configuration of electrons in atoms with higher atomic numbers than hydrogen to be successfully predicted.[53]
In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron, the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field.[54] In order to resolve some problems within his relativistic equation, in 1930 Dirac developed a model of the vacuum as an infinite sea of particles having negative energy, which was dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron.[55] This particle was discovered in 1932 by Carl D. Anderson, who proposed calling standard electrons negatrons, and using electron as a generic term to describe both the positively and negatively charged variants.
In 1947 Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference became known as the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron, and was eventually explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard P. Feynman in the late 1940s.[56]
Particle accelerators
With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles.[57] The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons, moving near the speed of light, through a magnetic field.[58]
With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.[59] This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron.[60] The Large Electron-Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.[61][62]
Standard Model of elementary particles. The electron is at lower left.
In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles.[63] The second and third generations contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin 1/2.[64]
Fundamental properties
The invariant mass of an electron is approximately 9.109×10−31 kilograms,[65] or 5.489×10−4 atomic mass units. On the basis of Einstein's principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836.[9][66] Astronomical measurements show that the proton-to-electron mass ratio has held the same value for at least half the age of the universe, as is predicted by the Standard Model.[67]
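The rest-energy and mass-ratio figures quoted above follow directly from E = mc². A quick sketch using the values quoted in this article (the proton mass is an assumed CODATA-era value):

```python
# Check of the quoted rest energy and mass ratio via E = m*c^2.
M_E = 9.10938291e-31     # electron mass, kg (value quoted above)
M_P = 1.672621777e-27    # proton mass, kg (assumed CODATA-era value)
C = 2.99792458e8         # speed of light, m/s
EV = 1.602176565e-19     # joules per electronvolt

print(M_E * C**2 / EV / 1e6)   # ~0.511 MeV rest energy
print(M_P / M_E)               # ~1836.15 proton-to-electron mass ratio
```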
Electrons have an electric charge of −1.602×10−19 coulomb,[65] which is used as a standard unit of charge for subatomic particles. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign.[68] As the symbol e is used for the elementary charge, the electron is commonly symbolized by e−, where the minus sign indicates the negative charge. The positron is symbolized by e+ because it has the same properties as the electron but with a positive rather than negative charge.[64][65]
The electron has an intrinsic angular momentum or spin of 1/2.[65] This property is usually stated by referring to the electron as a spin-1/2 particle.[64] For such particles the spin magnitude is (√3/2) ħ,[note 3] while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis.[65] It is approximately equal to one Bohr magneton,[69][note 4] which is a physical constant equal to 9.27400915(23)×10−24 joules per tesla.[65] The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.[70]
The electron has no known substructure.[2][71] Hence, it is defined or assumed to be a point particle with a point charge and no spatial extent.[10] Observation of a single electron in a Penning trap shows the upper limit of the particle's radius is 10−22 meters.[72] There is a physical constant called the "classical electron radius", with the much larger value of 2.8179×10−15 m. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.[73][note 5]
There are elementary particles that spontaneously decay into less massive particles. An example is the muon, which decays into an electron, a neutrino and an antineutrino, with a mean lifetime of 2.2×10−6 seconds. However, the electron is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation.[74] The experimental lower bound for the electron's mean lifetime is 4.6×1026 years, at a 90% confidence level.[75]
Quantum properties
As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density.[76]
Example of an antisymmetric wave function for a quantum state of two identical fermions in a 1-dimensional box. If the particles swap position, the wave function inverts its sign.
Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = −ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead.[76]
In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.[76]
Virtual particles
Physicists believe that empty space may be continually creating pairs of virtual particles, such as a positron and electron, which rapidly annihilate each other.[77] The combination of the energy variation needed to create these particles and the time during which they exist falls under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10−16 eV·s. Thus, for a virtual electron, Δt is at most 1.3×10−21 s.[78]
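The quoted lifetime follows directly from the uncertainty relation. A back-of-the-envelope Python check, taking the borrowed energy for one virtual electron to be its rest energy, me c² ≈ 0.511 MeV:

```python
# Delta_t <= hbar / Delta_E for a virtual electron.
hbar_eVs = 6.582e-16           # reduced Planck constant, eV*s
delta_E  = 0.511e6             # borrowed energy, eV (electron rest energy m_e c^2)

delta_t = hbar_eVs / delta_E   # maximum time the energy can be "borrowed"
print(delta_t)                 # ~1.3e-21 s, matching the value in the text
```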
A schematic depiction of virtual electron–positron pairs appearing at random near an electron (at lower left)
While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity greater than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron.[79][80] This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.[81] Virtual particles cause a comparable shielding effect for the mass of the electron.[82]
The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment).[69][83] The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.[84]
In classical physics, the angular momentum and magnetic moment of an object depend upon its physical dimensions. Hence, the concept of a dimensionless electron possessing these properties might seem inconsistent. The apparent paradox can be explained by the formation of virtual photons in the electric field generated by the electron. These photons cause the electron to shift about in a jittery fashion (known as zitterbewegung),[85] which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron.[10][86] In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines.[79]
An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force is determined by Coulomb's inverse square law.[87] When an electron is in motion, it generates a magnetic field.[88] The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. It is this property of induction which supplies the magnetic field that drives an electric motor.[89] The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).
A particle with charge q (at left) is moving with velocity v through a magnetic field B that is oriented toward the viewer. For an electron, q is negative so it follows a curved trajectory toward the top.
When an electron is moving through a magnetic field, it is subject to the Lorentz force that exerts an influence in a direction perpendicular to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation.[90][91][note 6] The energy emission in turn causes a recoil of the electron, known as the Abraham-Lorentz-Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.[92]
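For non-relativistic speeds the gyroradius is r = me v⊥ / (|q| B). A minimal Python sketch; the speed and field strength below are illustrative assumptions, not values from the text:

```python
# Gyroradius of a non-relativistic electron in a uniform magnetic field.
m_e = 9.109e-31      # electron mass, kg
q   = 1.602e-19      # magnitude of the electron charge, C
v_perp = 1.0e6       # speed perpendicular to the field, m/s (assumed)
B      = 1.0e-3      # magnetic field strength, T (assumed)

r = m_e * v_perp / (q * B)
print(r)             # ~5.7e-3 m: radius of the helical trajectory
```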
In quantum electrodynamics the electromagnetic interaction between particles is mediated by photons. An isolated electron that is not undergoing acceleration is unable to emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. It is this exchange of virtual photons that, for example, generates the Coulomb force.[93] Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The acceleration of the electron results in the emission of Bremsstrahlung radiation.[94]
Bremsstrahlung is produced by an electron e deflected by the electric field of an atomic nucleus. The energy change E2 − E1 determines the frequency f of the emitted photon.
An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift.[note 7] The characteristic scale of this wavelength shift is h/mec, which is known as the Compton wavelength.[95] For an electron, it has a value of 2.43×10−12 m.[65] When the wavelength of the light is long (for instance, the wavelength of visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering.[96]
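The Compton wavelength and the angular dependence of the shift (see note 7) can be checked with a few lines of Python:

```python
import math

h   = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31      # electron mass, kg
c   = 2.998e8        # speed of light, m/s

lambda_C = h / (m_e * c)
print(lambda_C)                                  # ~2.43e-12 m, the Compton wavelength

# Shift Delta_lambda = lambda_C * (1 - cos(theta)): equal to lambda_C at 90 degrees
# and largest (2 * lambda_C) for backscattering at 180 degrees.
for theta_deg in (90, 180):
    theta = math.radians(theta_deg)
    print(theta_deg, lambda_C * (1 - math.cos(theta)))
```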
The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α ≈ 7.297353×10−3, which is approximately equal to 1/137.[65]
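In SI terms the fine-structure constant is α = e²/(4πε₀ħc). Computing it from standard CODATA values reproduces the quoted number:

```python
import math

e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c    = 2.99792458e8       # speed of light in vacuum, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1.0 / alpha) # ~7.297e-3 and ~137.04
```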
When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV.[97][98] On the other hand, high-energy photons may transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.[99][100]
In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z0 exchange, and this is responsible for neutrino–electron elastic scattering.[101]
Atoms and molecules
Probability densities for the first few hydrogen atom orbitals, seen in cross-section. The energy level of a bound electron determines the orbital it occupies, and the color reflects the probability to find the electron at a given position.
An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of several electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exists around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number.
Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential.[102] Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect.[103] In order to escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.[104]
The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital (so called, paired electrons) cancel each other out.[105]
The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics.[106] The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules.[14] Within a molecule, electrons move under the influence of several nuclei and occupy molecular orbitals, much as they can occupy atomic orbitals in isolated atoms.[107] A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much as in atoms). Different molecular orbitals have different spatial distributions of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.[108]
A lightning discharge consists primarily of a flow of electrons.[109] The electric potential needed for lightning may be generated by a triboelectric effect.[110][111]
If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect.[112]
Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasi-particles, which have the same electrical charge, spin and magnetic moment as real electrons but may have a different mass.[113] When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.[114]
At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation.[115] On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called a Fermi gas)[116] through the material, much like free electrons.
Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed.[117] This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material.[118]
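The slow drift can be estimated from vd = I / (n A e). The Python sketch below assumes an illustrative 1 A current in a 1 mm² copper wire; the free-electron density of copper is about 8.5×1028 m−3:

```python
# Electron drift velocity in a current-carrying copper wire.
I = 1.0              # current, A (assumed)
n = 8.5e28           # free-electron density of copper, m^-3
A = 1.0e-6           # wire cross-section, m^2 (1 mm^2, assumed)
e = 1.602e-19        # elementary charge, C

v_d = I / (n * A * e)
print(v_d * 1e3)     # ~0.07 mm/s: far slower than the signal propagation speed
```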
When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all electrical resistivity, a process known as superconductivity. In BCS theory, this behavior is modeled by pairs of electrons entering a quantum state known as a Bose–Einstein condensate. These Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance.[120] (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.)[121] However, the mechanism by which higher-temperature superconductors operate remains uncertain.
Electrons inside conducting solids, which are themselves quasi-particles, when tightly confined at temperatures close to absolute zero, behave as though they had split into two other quasiparticles: spinons and holons.[122][123] The former carries spin and magnetic moment, while the latter carries electrical charge.
Motion and energy
Lorentz factor as a function of velocity. It starts at value 1 and goes to infinity as v approaches c.
The effects of special relativity are based on a quantity known as the Lorentz factor, defined as \gamma = 1/\sqrt{1 - v^2/c^2}, where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is:
K_{\mathrm{e}} = (\gamma - 1)m_{\mathrm{e}} c^2,
where me is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.[125] Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum.[48] For the 51 GeV electron above, the wavelength is about 2.4×10−17 m, small enough to explore structures well below the size of an atomic nucleus.[126]
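Because 51 GeV is vastly larger than the electron's rest energy, the momentum is essentially E/c, and the quoted wavelength follows from λ = h/p ≈ hc/E. A quick Python check:

```python
# de Broglie wavelength of an ultra-relativistic 51 GeV electron.
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
E = 51e9 * 1.602e-19   # 51 GeV converted to joules

lam = h * c / E        # p ~ E/c since E >> m_e c^2
print(lam)             # ~2.4e-17 m, matching the value in the text
```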
The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe.[127] For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons:
γ + γ ↔ e+ + e−
An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.[128]
For reasons that remain uncertain, during the process of leptogenesis there was an excess in the number of electrons over positrons.[129] Hence, about one electron in every billion survived the annihilation process. This excess matched the excess of protons over anti-protons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe.[130][131] The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes.[132] Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,
n → p + e− + ν̄e
For about the next 300,000–400,000 yr, the excess electrons remained too energetic to bind with atomic nuclei.[133] What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.[134]
Roughly one million years after the Big Bang, the first generation of stars began to form.[134] Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus.[135] An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni).[136]
An extended air shower generated by an energetic cosmic ray striking the Earth's atmosphere
At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole.[137] According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, it is believed that quantum mechanical effects may allow Hawking radiation to be emitted at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants.
When pairs of virtual particles (such as an electron and positron) are created in the vicinity of the event horizon, the random spatial distribution of these particles may permit one of them to appear on the exterior; this process is called quantum tunneling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space.[138] In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.[139]
Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0×1020 eV have been recorded.[140] When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions.[141] More than half of the cosmic radiation observed from the Earth's surface consists of muons. The muon is a lepton produced in the upper atmosphere by the decay of a pion:
π− → μ− + ν̄μ
A muon, in turn, can decay to form an electron or positron.[142]
μ− → e− + ν̄e + νμ
Aurorae are mostly caused by energetic electrons precipitating into the atmosphere.[143]
Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung. An electron gas can undergo plasma oscillation, in which synchronized variations in electron density produce waves whose energy emissions can be detected by radio telescopes.[144]
The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it will absorb or emit photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct absorption lines will appear in the spectrum of transmitted radiation. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.[145][146]
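For hydrogen, these line positions follow from the Rydberg formula, 1/λ = R(1/n1² − 1/n2²). The short Python sketch below reproduces the visible Balmer series (n1 = 2):

```python
# Balmer-series wavelengths of hydrogen from the Rydberg formula.
R = 1.0973731568e7                    # Rydberg constant, m^-1

for n2 in range(3, 7):
    lam = 1.0 / (R * (1.0 / 2**2 - 1.0 / n2**2))
    print(n2, round(lam * 1e9, 1))    # ~656.1, 486.0, 433.9, 410.1 nm
```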
In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge.[104] The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months.[147] The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.[148]
The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden in February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time.[149][150]
The distribution of the electrons in solid materials can be visualized by angle resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.[151]
Plasma applications
Particle beams
During a NASA wind tunnel test, a model of the Space Shuttle is targeted by a beam of electrons, simulating the effect of ionizing gases during re-entry.[152]
Electron beams are used in welding,[153] which allows energy densities up to 107 W·cm−2 across a narrow focus diameter of 0.1–1.3 mm and usually does not require a filler material. This welding technique must be performed in a vacuum, so that the electron beam does not interact with the gas prior to reaching the target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.[154][155]
Electron beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micron.[156] This technique is limited by high costs, slow performance, the need to operate the beam in a vacuum, and the tendency of electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.[157]
Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products.[158] In radiation therapy, electron beams are generated by linear accelerators for treatment of superficial tumors. Because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV, electron therapy is useful for treating skin lesions such as basal cell carcinomas. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.[159][160]
Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. As these particles pass through magnetic fields, they emit synchrotron radiation. The intensity of this radiation is spin dependent, which causes polarization of the electron beam—a process known as the Sokolov–Ternov effect.[note 8] The polarized electron beams can be useful for various experiments. Synchrotron radiation can also be used for cooling the electron beams, which reduces the momentum spread of the particles. Once the particles have accelerated to the required energies, separate electron and positron beams are brought into collision. The resulting energy emissions are observed with particle detectors and are studied in particle physics.[161]
Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons, then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV.[162] The reflection high energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.[163][164]
The electron microscope directs a focused beam of electrons at a specimen. As the beam interacts with the material, some electrons change their properties, such as movement direction, angle, relative phase and energy. By recording these changes in the electron beam, microscopists can produce atomically resolved images of the material.[165] In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm.[166] By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential.[167] The Transmission Electron Aberration-corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms.[168] This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.
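The 0.0037 nm figure follows from the de Broglie relation with a relativistic correction for the 100 kV accelerating potential. A short Python check of that value:

```python
import math

# Relativistically corrected de Broglie wavelength of electrons accelerated
# through a potential V: lambda = h / sqrt(2*m*e*V * (1 + e*V / (2*m*c^2))).
h, m, e, c = 6.626e-34, 9.109e-31, 1.602e-19, 2.998e8
V = 100_000.0        # accelerating potential, volts

lam = h / math.sqrt(2 * m * e * V * (1 + e * V / (2 * m * c**2)))
print(lam * 1e9)     # ~0.0037 nm, as quoted above
```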
There are two main types of electron microscopes: transmission and scanning. Transmission electron microscopes function in a manner similar to an overhead projector, with a beam of electrons passing through a slice of material and then being projected by lenses onto a photographic slide or a charge-coupled device. In scanning electron microscopes, the image is produced by rastering a finely focused electron beam, as in a TV set, across the studied sample. The magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.[169][170][171]
Other applications
In the free electron laser (FEL), a relativistic electron beam is passed through a pair of undulators containing arrays of dipole magnets whose fields are oriented in alternating directions. The electrons emit synchrotron radiation which, in turn, coherently interacts with the same electrons. This leads to strong amplification of the radiation field at the resonance frequency. An FEL can emit coherent high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices may find future use in manufacturing, communication and various medical applications, such as soft tissue surgery.[172]
Electrons are at the heart of cathode ray tubes, which have been used extensively as display devices in laboratory instruments, computer monitors and television sets.[173] In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse.[174] Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.[175]
1. ^ The fractional version’s denominator is the inverse of the decimal value (along with its relative standard uncertainty of 4.2×10−13 u).
2. ^ The electron’s charge is the negative of elementary charge, which has a positive value for the proton.
3. ^ This magnitude is obtained from the spin quantum number as
S = \sqrt{s(s + 1)} \cdot \frac{h}{2\pi} = \frac{\sqrt{3}}{2} \hbar
for quantum number s = 1/2.
See: Gupta, M.C. (2001). Atomic and Molecular Spectroscopy. New Age Publishers. p. 81. ISBN 81-224-1300-5.
4. ^ Bohr magneton: \mu_{\mathrm{B}} = \frac{e\hbar}{2m_{\mathrm{e}}}.
5. ^ The classical electron radius is derived as follows. Assume that the electron's charge is spread uniformly throughout a spherical volume. Since one part of the sphere would repel the other parts, the sphere contains electrostatic potential energy. This energy is assumed to equal the electron's rest energy, defined by special relativity (E = mc2):
E_{\mathrm{p}} = m_0 c^2.
Equating the sphere's electrostatic energy to this rest energy and solving for the radius (dropping numerical factors of order unity) gives the classical electron radius, r_{\mathrm{e}} = e^2/(4\pi\varepsilon_0 m_{\mathrm{e}} c^2).
See: Haken, H.; Wolf, H.C.; Brewer, W.D. (2005). The Physics of Atoms and Quanta: Introduction to Experiments and Theory. Springer. p. 70. ISBN 3-540-67274-5.
6. ^ Radiation from non-relativistic electrons is sometimes termed cyclotron radiation.
7. ^ The change in wavelength, Δλ, depends on the angle of the recoil, θ, as follows,
\Delta \lambda = \frac{h}{m_{\mathrm{e}}c} (1 - \cos \theta),
where c is the speed of light in a vacuum and me is the electron mass. See Zombeck (2007: 393, 396).
8. ^ The polarization of an electron beam means that the spins of all electrons point into one direction. In other words, the projections of the spins of all electrons onto their momentum vector have the same sign.
1. ^ Dahl, P.F. (1997). Flash of the Cathode Rays: A History of J J Thomson's Electron. CRC Press. p. 72. ISBN 0-7503-0453-7.
2. ^ a b c Eichten, E.J.; Peskin, M.E.; Peskin, M. (1983). "New Tests for Quark and Lepton Substructure". Physical Review Letters 50 (11): 811–814. Bibcode 1983PhRvL..50..811E. DOI:10.1103/PhysRevLett.50.811.
3. ^ a b Farrar, W.V. (1969). "Richard Laming and the Coal-Gas Industry, with His Views on the Structure of Matter". Annals of Science 25 (3): 243–254. DOI:10.1080/00033796900200141.
4. ^ a b c Arabatzis, T. (2006). Representing Electrons: A Biographical Approach to Theoretical Entities. University of Chicago Press. pp. 70–74. ISBN 0-226-02421-0.
5. ^ Buchwald, J.Z.; Warwick, A. (2001). Histories of the Electron: The Birth of Microphysics. MIT Press. pp. 195–203. ISBN 0-262-52424-4.
6. ^ a b c d e f Thomson, J.J. (1897). "Cathode Rays". Philosophical Magazine 44: 293.
8. ^ "JERRY COFF". Retrieved 10 September 2010.
9. ^ a b "CODATA value: proton-electron mass ratio". 2006 CODATA recommended values. National Institute of Standards and Technology. Retrieved 2009-07-18.
10. ^ a b c d Curtis, L.J. (2003). Atomic Structure and Lifetimes: A Conceptual Approach. Cambridge University Press. p. 74. ISBN 0-521-53635-9.
11. ^ Anastopoulos, C. (2008). Particle Or Wave: The Evolution of the Concept of Matter in Modern Physics. Princeton University Press. pp. 236–237. ISBN 0-691-13512-6.
12. ^ a b c Dahl (1997:122–185).
13. ^ a b Wilson, R. (1997). Astronomy Through the Ages: The Story of the Human Attempt to Understand the Universe. CRC Press. p. 138. ISBN 0-7484-0748-0.
14. ^ a b Pauling, L.C. (1960). The Nature of the Chemical Bond and the Structure of Molecules and Crystals: an introduction to modern structural chemistry (3rd ed.). Cornell University Press. pp. 4–10. ISBN 0-8014-0333-2.
15. ^ Shipley, J.T. (1945). Dictionary of Word Origins. The Philosophical Library. p. 133. ISBN 0-88029-751-4.
16. ^ Baigrie, B. (2006). Electricity and Magnetism: A Historical Perspective. Greenwood Press. pp. 7–8. ISBN 0-313-33358-0.
17. ^ Keithley, J.F. (1999). The Story of Electrical and Magnetic Measurements: From 500 B.C. to the 1940s. IEEE Press. ISBN 0-7803-1193-0.
18. ^ "Benjamin Franklin (1706–1790)". Eric Weisstein's World of Biography. Wolfram Research. Retrieved 2010-12-16.
19. ^ Myers, R.L. (2006). The Basics of Physics. Greenwood Publishing Group. p. 242. ISBN 0-313-32857-9.
20. ^ Barrow, J.D. (1983). "Natural Units Before Planck". Quarterly Journal of the Royal Astronomical Society 24: 24–26. Bibcode 1983QJRAS..24...24B.
21. ^ Stoney, G.J. (1894). "Of the "Electron," or Atom of Electricity". Philosophical Magazine 38 (5): 418–420.
22. ^ Soukhanov, A.H. ed. (1986). Word Mysteries & Histories. Houghton Mifflin Company. p. 73. ISBN 0-395-40265-4.
23. ^ Guralnik, D.B. ed. (1970). Webster's New World Dictionary. Prentice-Hall. p. 450.
24. ^ Born, M.; Blin-Stoyle, R.J.; Radcliffe, J.M. (1989). Atomic Physics. Courier Dover. p. 26. ISBN 0-486-65984-4.
25. ^ Dahl (1997:55–58).
26. ^ DeKosky, R.K. (1983). "William Crookes and the quest for absolute vacuum in the 1870s". Annals of Science 40 (1): 1–18. DOI:10.1080/00033798300200101.
27. ^ a b c Leicester, H.M. (1971). The Historical Background of Chemistry. Courier Dover Publications. pp. 221–222. ISBN 0-486-61053-5.
28. ^ Dahl (1997:64–78).
29. ^ Zeeman, P.; Zeeman, P. (1907). "Sir William Crookes, F.R.S". Nature 77 (1984): 1–3. Bibcode 1907Natur..77....1C. DOI:10.1038/077001a0.
30. ^ Dahl (1997:99).
31. ^ Thomson, J.J. (1906). "Nobel Lecture: Carriers of Negative Electricity". The Nobel Foundation. Retrieved 2008-08-25.
32. ^ Trenn, T.J. (1976). "Rutherford on the Alpha-Beta-Gamma Classification of Radioactive Rays". Isis 67 (1): 61–75. DOI:10.1086/351545. JSTOR 231134.
33. ^ Becquerel, H. (1900). "Déviation du Rayonnement du Radium dans un Champ Électrique". Comptes Rendus de l'Académie des Sciences 130: 809–815. (French)
34. ^ Buchwald and Warwick (2001:90–91).
35. ^ Myers, W.G. (1976). "Becquerel's Discovery of Radioactivity in 1896". Journal of Nuclear Medicine 17 (7): 579–582. PMID 775027.
36. ^ Kikoin, I.K.; Sominskiĭ, I.S. (1961). "Abram Fedorovich Ioffe (on his eightieth birthday)". Soviet Physics Uspekhi 3 (5): 798–809. Bibcode 1961SvPhU...3..798K. DOI:10.1070/PU1961v003n05ABEH005812. Original publication in Russian: Кикоин, И.К.; Соминский, М.С. (1960). "Академик А.Ф. Иоффе". Успехи Физических Наук 72 (10): 303–321.
37. ^ Millikan, R.A. (1911). "The Isolation of an Ion, a Precision Measurement of its Charge, and the Correction of Stokes' Law". Physical Review 32 (2): 349–397. Bibcode 1911PhRvI..32..349M. DOI:10.1103/PhysRevSeriesI.32.349.
38. ^ Das Gupta, N.N.; Ghosh, S.K. (1999). "A Report on the Wilson Cloud Chamber and Its Applications in Physics". Reviews of Modern Physics 18 (2): 225–290. Bibcode 1946RvMP...18..225G. DOI:10.1103/RevModPhys.18.225.
39. ^ a b c Smirnov, B.M. (2003). Physics of Atoms and Ions. Springer. pp. 14–21. ISBN 0-387-95550-X.
40. ^ Bohr, N. (1922). "Nobel Lecture: The Structure of the Atom". The Nobel Foundation. Retrieved 2008-12-03.
42. ^ a b Arabatzis, T.; Gavroglu, K. (1997). "The chemists' electron". European Journal of Physics 18 (3): 150–163. Bibcode 1997EJPh...18..150A. DOI:10.1088/0143-0807/18/3/005.
43. ^ Langmuir, I. (1919). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society 41 (6): 868–934. DOI:10.1021/ja02227a002.
44. ^ Scerri, E.R. (2007). The Periodic Table. Oxford University Press. pp. 205–226. ISBN 0-19-530573-6.
45. ^ Massimi, M. (2005). Pauli's Exclusion Principle, The Origin and Validation of a Scientific Principle. Cambridge University Press. pp. 7–8. ISBN 0-521-83911-4.
46. ^ Uhlenbeck, G.E.; Goudsmith, S. (1925). "Ersetzung der Hypothese vom unmechanischen Zwang durch eine Forderung bezüglich des inneren Verhaltens jedes einzelnen Elektrons". Die Naturwissenschaften 13 (47): 953. Bibcode 1925NW.....13..953E. DOI:10.1007/BF01558878. (German)
47. ^ Pauli, W. (1923). "Über die Gesetzmäßigkeiten des anomalen Zeemaneffektes". Zeitschrift für Physik 16 (1): 155–164. Bibcode 1923ZPhy...16..155P. DOI:10.1007/BF01327386. (German)
48. ^ a b de Broglie, L. (1929). "Nobel Lecture: The Wave Nature of the Electron". The Nobel Foundation. Retrieved 2008-08-30.
49. ^ Falkenburg, B. (2007). Particle Metaphysics: A Critical Account of Subatomic Reality. Springer. p. 85. ISBN 3-540-33731-8.
50. ^ Davisson, C. (1937). "Nobel Lecture: The Discovery of Electron Waves". The Nobel Foundation. Retrieved 2008-08-30.
51. ^ Schrödinger, E. (1926). "Quantisierung als Eigenwertproblem". Annalen der Physik 385 (13): 437–490. Bibcode 1926AnP...385..437S. DOI:10.1002/andp.19263851302. (German)
52. ^ Rigden, J.S. (2003). Hydrogen. Harvard University Press. pp. 59–86. ISBN 0-674-01252-6.
53. ^ Reed, B.C. (2007). Quantum Mechanics. Jones & Bartlett Publishers. pp. 275–350. ISBN 0-7637-4451-4.
54. ^ Dirac, P.A.M. (1928). "The Quantum Theory of the Electron". Proceedings of the Royal Society of London A 117 (778): 610–624. Bibcode 1928RSPSA.117..610D. DOI:10.1098/rspa.1928.0023.
55. ^ Dirac, P.A.M. (1933). "Nobel Lecture: Theory of Electrons and Positrons". The Nobel Foundation. Retrieved 2008-11-01.
56. ^ "The Nobel Prize in Physics 1965". The Nobel Foundation. Retrieved 2008-11-04.
57. ^ Panofsky, W.K.H. (1997). "The Evolution of Particle Accelerators & Colliders". Beam Line (Stanford University) 27 (1): 36–44. Retrieved 2008-09-15.
58. ^ Elder, F.R.; et al. (1947). "Radiation from Electrons in a Synchrotron". Physical Review 71 (11): 829–830. Bibcode 1947PhRv...71..829E. DOI:10.1103/PhysRev.71.829.5.
59. ^ Hoddeson, L.; et al. (1997). The Rise of the Standard Model: Particle Physics in the 1960s and 1970s. Cambridge University Press. pp. 25–26. ISBN 0-521-57816-7.
60. ^ Bernardini, C. (2004). "AdA: The First Electron–Positron Collider". Physics in Perspective 6 (2): 156–183. Bibcode 2004PhP.....6..156B. DOI:10.1007/s00016-003-0202-y.
61. ^ "Testing the Standard Model: The LEP experiments". CERN. 2008. Retrieved 2008-09-15.
62. ^ "LEP reaps a final harvest". CERN Courier 40 (10). 2000. Retrieved 2008-11-01.
63. ^ Frampton, P.H.; Hung, P.Q.; Sher, Marc (2000). "Quarks and Leptons Beyond the Third Generation". Physics Reports 330 (5–6): 263–348. arXiv:hep-ph/9903387. Bibcode 2000PhR...330..263F. DOI:10.1016/S0370-1573(99)00095-2.
64. ^ a b c Raith, W.; Mulvey, T. (2001). Constituents of Matter: Atoms, Molecules, Nuclei and Particles. CRC Press. pp. 777–781. ISBN 0-8493-1202-7.
65. ^ a b c d e f g h The original source for CODATA is Mohr, P.J.; Taylor, B.N.; Newell, D.B. (2006). "CODATA recommended values of the fundamental physical constants". Reviews of Modern Physics 80 (2): 633–730. Bibcode 2008RvMP...80..633M. DOI:10.1103/RevModPhys.80.633.
Individual physical constants from the CODATA are available at: "The NIST Reference on Constants, Units and Uncertainty". National Institute of Standards and Technology. Retrieved 2009-01-15.
66. ^ Zombeck, M.V. (2007). Handbook of Space Astronomy and Astrophysics (3rd ed.). Cambridge University Press. p. 14. ISBN 0-521-78242-2.
67. ^ Murphy, M.T.; et al. (2008). "Strong Limit on a Variable Proton-to-Electron Mass Ratio from Molecules in the Distant Universe". Science 320 (5883): 1611–1613. Bibcode 2008Sci...320.1611M. DOI:10.1126/science.1156352. PMID 18566280.
68. ^ Zorn, J.C.; Chamberlain, G.E.; Hughes, V.W. (1963). "Experimental Limits for the Electron-Proton Charge Difference and for the Charge of the Neutron". Physical Review 129 (6): 2566–2576. Bibcode 1963PhRv..129.2566Z. DOI:10.1103/PhysRev.129.2566.
69. ^ a b Odom, B.; et al. (2006). "New Measurement of the Electron Magnetic Moment Using a One-Electron Quantum Cyclotron". Physical Review Letters 97 (3): 030801. Bibcode 2006PhRvL..97c0801O. DOI:10.1103/PhysRevLett.97.030801. PMID 16907490.
70. ^ Anastopoulos, C. (2008). Particle Or Wave: The Evolution of the Concept of Matter in Modern Physics. Princeton University Press. pp. 261–262. ISBN 0-691-13512-6.
71. ^ Gabrielse, G.; et al. (2006). "New Determination of the Fine Structure Constant from the Electron g Value and QED". Physical Review Letters 97 (3): 030802(1–4). Bibcode 2006PhRvL..97c0802G. DOI:10.1103/PhysRevLett.97.030802.
72. ^ Dehmelt, H. (1988). "A Single Atomic Particle Forever Floating at Rest in Free Space: New Value for Electron Radius". Physica Scripta T22: 102–110. Bibcode 1988PhST...22..102D. DOI:10.1088/0031-8949/1988/T22/016.
73. ^ Meschede, D. (2004). Optics, light and lasers: The Practical Approach to Modern Aspects of Photonics and Laser Physics. Wiley-VCH. p. 168. ISBN 3-527-40364-7.
74. ^ Steinberg, R.I.; et al. (1975). "Experimental test of charge conservation and the stability of the electron". Physical Review D 12: 2582–2586. Bibcode 1975PhRvD..12.2582S. DOI:10.1103/PhysRevD.12.2582.
75. ^ Yao, W.-M. (2006). "Review of Particle Physics". Journal of Physics G 33 (1): 77–115. arXiv:astro-ph/0601168. Bibcode 2006JPhG...33....1Y. DOI:10.1088/0954-3899/33/1/001.
76. ^ a b c Munowitz, M. (2005). Knowing, The Nature of Physical Law. Oxford University Press. pp. 162–218. ISBN 0-19-516737-6.
77. ^ Kane, G. (October 9, 2006). "Are virtual particles really constantly popping in and out of existence? Or are they merely a mathematical bookkeeping device for quantum mechanics?". Scientific American. Retrieved 2008-09-19.
78. ^ Taylor, J. (1989). "Gauge Theories in Particle Physics". In Davies, Paul. The New Physics. Cambridge University Press. p. 464. ISBN 0-521-43831-4.
79. ^ a b Genz, H. (2001). Nothingness: The Science of Empty Space. Da Capo Press. pp. 241–243, 245–247. ISBN 0-7382-0610-5.
80. ^ Gribbin, J. (January 25, 1997). "More to electrons than meets the eye". New Scientist. Retrieved 2008-09-17.
81. ^ Levine, I.; et al. (1997). "Measurement of the Electromagnetic Coupling at Large Momentum Transfer". Physical Review Letters 78 (3): 424–427. Bibcode 1997PhRvL..78..424L. DOI:10.1103/PhysRevLett.78.424.
82. ^ Murayama, H. (March 10–17, 2006). "Supersymmetry Breaking Made Easy, Viable and Generic". Proceedings of the XLIInd Rencontres de Moriond on Electroweak Interactions and Unified Theories. La Thuile, Italy. arXiv:0709.3041. —lists a 9% mass difference for an electron that is the size of the Planck distance.
83. ^ Schwinger, J. (1948). "On Quantum-Electrodynamics and the Magnetic Moment of the Electron". Physical Review 73 (4): 416–417. Bibcode 1948PhRv...73..416S. DOI:10.1103/PhysRev.73.416.
84. ^ Huang, K. (2007). Fundamental Forces of Nature: The Story of Gauge Fields. World Scientific. pp. 123–125. ISBN 981-270-645-3.
85. ^ Foldy, L.L.; Wouthuysen, S. (1950). "On the Dirac Theory of Spin 1/2 Particles and Its Non-Relativistic Limit". Physical Review 78: 29–36. Bibcode 1950PhRv...78...29F. DOI:10.1103/PhysRev.78.29.
86. ^ Sidharth, B.G. (2008). "Revisiting Zitterbewegung". International Journal of Theoretical Physics 48 (2): 497–506. arXiv:0806.0985. Bibcode 2009IJTP...48..497S. DOI:10.1007/s10773-008-9825-8.
87. ^ Elliott, R.S. (1978). "The History of Electromagnetics as Hertz Would Have Known It". IEEE Transactions on Microwave Theory and Techniques 36 (5): 806–823. Bibcode 1988ITMTT..36..806E. DOI:10.1109/22.3600.
88. ^ Munowitz (2005:140).
89. ^ Crowell, B. (2000). Electricity and Magnetism. Light and Matter. pp. 129–152. ISBN 0-9704670-4-4.
90. ^ Munowitz (2005:160).
91. ^ Mahadevan, R.; Narayan, R.; Yi, I. (1996). "Harmony in Electrons: Cyclotron and Synchrotron Emission by Thermal Electrons in a Magnetic Field". Astrophysical Journal 465: 327–337. arXiv:astro-ph/9601073. Bibcode 1996ApJ...465..327M. DOI:10.1086/177422.
92. ^ Rohrlich, F. (1999). "The Self-Force and Radiation Reaction". American Journal of Physics 68 (12): 1109–1112. Bibcode 2000AmJPh..68.1109R. DOI:10.1119/1.1286430.
93. ^ Georgi, H. (1989). "Grand Unified Theories". In Davies, Paul. The New Physics. Cambridge University Press. p. 427. ISBN 0-521-43831-4.
94. ^ Blumenthal, G.J.; Gould, R. (1970). "Bremsstrahlung, Synchrotron Radiation, and Compton Scattering of High-Energy Electrons Traversing Dilute Gases". Reviews of Modern Physics 42 (2): 237–270. Bibcode 1970RvMP...42..237B. DOI:10.1103/RevModPhys.42.237.
95. ^ Staff (2008). "The Nobel Prize in Physics 1927". The Nobel Foundation. Retrieved 2008-09-28.
96. ^ Chen, S.-Y.; Maksimchuk, A.; Umstadter, D. (1998). "Experimental observation of relativistic nonlinear Thomson scattering". Nature 396 (6712): 653–655. arXiv:physics/9810036. Bibcode 1998Natur.396..653C. DOI:10.1038/25303.
97. ^ Beringer, R.; Montgomery, C.G. (1942). "The Angular Distribution of Positron Annihilation Radiation". Physical Review 61 (5–6): 222–224. Bibcode 1942PhRv...61..222B. DOI:10.1103/PhysRev.61.222.
98. ^ Buffa, A. (2000). College Physics (4th ed.). Prentice Hall. p. 888. ISBN 0-13-082444-5.
99. ^ Eichler, J. (2005). "Electron–positron pair production in relativistic ion–atom collisions". Physics Letters A 347 (1–3): 67–72. Bibcode 2005PhLA..347...67E. DOI:10.1016/j.physleta.2005.06.105.
100. ^ Hubbell, J.H. (2006). "Electron positron pair production by photons: A historical overview". Radiation Physics and Chemistry 75 (6): 614–623. Bibcode 2006RaPC...75..614H. DOI:10.1016/j.radphyschem.2005.10.008.
101. ^ Quigg, C. (June 4–30, 2000). "The Electroweak Theory". TASI 2000: Flavor Physics for the Millennium. Boulder, Colorado. p. 80. arXiv:hep-ph/0204104.
103. ^ Burhop, E.H.S. (1952). The Auger Effect and Other Radiationless Transitions. Cambridge University Press. pp. 2–3. ISBN 0-88275-966-3.
104. ^ a b Grupen, C. (2000). "Physics of Particle Detection". AIP Conference Proceedings 536: 3–34. arXiv:physics/9906063. DOI:10.1063/1.1361756.
105. ^ Jiles, D. (1998). Introduction to Magnetism and Magnetic Materials. CRC Press. pp. 280–287. ISBN 0-412-79860-3.
106. ^ Löwdin, P.O.; Erkki Brändas, E.; Kryachko, E.S. (2003). Fundamental World of Quantum Chemistry: A Tribute to the Memory of Per- Olov Löwdin. Springer. pp. 393–394. ISBN 1-4020-1290-X.
107. ^ McQuarrie, D.A.; Simon, J.D. (1997). Physical Chemistry: A Molecular Approach. University Science Books. pp. 325–361. ISBN 0-935702-99-7.
108. ^ Daudel, R.; et al. (1973). "The Electron Pair in Chemistry". Canadian Journal of Chemistry 52 (8): 1310–1320. DOI:10.1139/v74-201.
109. ^ Rakov, V.A.; Uman, M.A. (2007). Lightning: Physics and Effects. Cambridge University Press. p. 4. ISBN 0-521-03541-4.
110. ^ Freeman, G.R. (1999). "Triboelectricity and some associated phenomena". Materials science and technology 15 (12): 1454–1458.
111. ^ Forward, K.M.; Lacks, D.J.; Sankaran, R.M. (2009). "Methodology for studying particle–particle triboelectrification in granular materials". Journal of Electrostatics 67 (2–3): 178–183. DOI:10.1016/j.elstat.2008.12.002.
112. ^ Weinberg, S. (2003). The Discovery of Subatomic Particles. Cambridge University Press. pp. 15–16. ISBN 0-521-82351-X.
113. ^ Lou, L.-F. (2003). Introduction to phonons and electrons. World Scientific. pp. 162, 164. ISBN 978-981-238-461-4.
114. ^ Guru, B.S.; Hızıroğlu, H.R. (2004). Electromagnetic Field Theory. Cambridge University Press. pp. 138, 276. ISBN 0-521-83016-8.
115. ^ Achuthan, M.K.; Bhat, K.N. (2007). Fundamentals of Semiconductor Devices. Tata McGraw-Hill. pp. 49–67. ISBN 0-07-061220-X.
116. ^ a b Ziman, J.M. (2001). Electrons and Phonons: The Theory of Transport Phenomena in Solids. Oxford University Press. p. 260. ISBN 0-19-850779-8.
117. ^ Main, P. (June 12, 1993). "When electrons go with the flow: Remove the obstacles that create electrical resistance, and you get ballistic electrons and a quantum surprise". New Scientist 1887: 30. Retrieved 2008-10-09.
118. ^ Blackwell, G.R. (2000). The Electronic Packaging Handbook. CRC Press. pp. 6.39–6.40. ISBN 0-8493-8591-1.
119. ^ Durrant, A. (2000). Quantum Physics of Matter: The Physical World. CRC Press. ISBN 0-7503-0721-8.
120. ^ Staff (2008). "The Nobel Prize in Physics 1972". The Nobel Foundation. Retrieved 2008-10-13.
121. ^ Kadin, A.M. (2007). "Spatial Structure of the Cooper Pair". Journal of Superconductivity and Novel Magnetism 20 (4): 285–292. arXiv:cond-mat/0510279. DOI:10.1007/s10948-006-0198-z.
122. ^ "Discovery About Behavior Of Building Block Of Nature Could Lead To Computer Revolution". ScienceDaily. July 31, 2009. Retrieved 2009-08-01.
123. ^ Jompol, Y.; et al. (2009). "Probing Spin-Charge Separation in a Tomonaga-Luttinger Liquid". Science 325 (5940): 597–601. Bibcode 2009Sci...325..597J. DOI:10.1126/science.1171769. PMID 19644117.
124. ^ Staff (2008). "The Nobel Prize in Physics 1958, for the discovery and the interpretation of the Cherenkov effect". The Nobel Foundation. Retrieved 2008-09-25.
125. ^ Staff (August 26, 2008). "Special Relativity". Stanford Linear Accelerator Center. Retrieved 2008-09-25.
126. ^ Adams, S. (2000). Frontiers: Twentieth Century Physics. CRC Press. p. 215. ISBN 0-7484-0840-1.
127. ^ Lurquin, P.F. (2003). The Origins of Life and the Universe. Columbia University Press. p. 2. ISBN 0-231-12655-7.
128. ^ Silk, J. (2000). The Big Bang: The Creation and Evolution of the Universe (3rd ed.). Macmillan. pp. 110–112, 134–137. ISBN 0-8050-7256-X.
129. ^ Christianto, V. (2007). "Thirty Unsolved Problems in the Physics of Elementary Particles". Progress in Physics 4: 112–114.
130. ^ Kolb, E.W. (1980). "The Development of Baryon Asymmetry in the Early Universe". Physics Letters B 91 (2): 217–221. Bibcode 1980PhLB...91..217K. DOI:10.1016/0370-2693(80)90435-9.
131. ^ Sather, E. (Spring/Summer 1996). "The Mystery of Matter Asymmetry". Beam Line. University of Stanford. Retrieved 2008-11-01.
132. ^ Burles, S.; Nollett, K.M.; Turner, M.S. (1999). "Big-Bang Nucleosynthesis: Linking Inner Space and Outer Space". arXiv:astro-ph/9903300 [astro-ph].
133. ^ Boesgaard, A.M.; Steigman, G. (1985). "Big bang nucleosynthesis – Theories and observations". Annual Review of Astronomy and Astrophysics 23 (2): 319–378. Bibcode 1985ARA&A..23..319B. DOI:10.1146/annurev.aa.23.090185.001535.
134. ^ a b Barkana, R. (2006). "The First Stars in the Universe and Cosmic Reionization". Science 313 (5789): 931–934. arXiv:astro-ph/0608450. Bibcode 2006Sci...313..931B. DOI:10.1126/science.1125644. PMID 16917052.
135. ^ Burbidge, E.M.; et al. (1957). "Synthesis of Elements in Stars". Reviews of Modern Physics 29 (4): 548–647. Bibcode 1957RvMP...29..547B. DOI:10.1103/RevModPhys.29.547.
136. ^ Rodberg, L.S.; Weisskopf, V. (1957). "Fall of Parity: Recent Discoveries Related to Symmetry of Laws of Nature". Science 125 (3249): 627–633. Bibcode 1957Sci...125..627R. DOI:10.1126/science.125.3249.627. PMID 17810563.
137. ^ Fryer, C.L. (1999). "Mass Limits For Black Hole Formation". Astrophysical Journal 522 (1): 413–418. arXiv:astro-ph/9902315. Bibcode 1999ApJ...522..413F. DOI:10.1086/307647.
138. ^ Parikh, M.K.; Wilczek, F. (2000). "Hawking Radiation As Tunneling". Physical Review Letters 85 (24): 5042–5045. arXiv:hep-th/9907001. Bibcode 2000PhRvL..85.5042P. DOI:10.1103/PhysRevLett.85.5042. PMID 11102182.
139. ^ Hawking, S.W. (1974). "Black hole explosions?". Nature 248 (5443): 30–31. Bibcode 1974Natur.248...30H. DOI:10.1038/248030a0.
140. ^ Halzen, F.; Hooper, D. (2002). "High-energy neutrino astronomy: the cosmic ray connection". Reports on Progress in Physics 65 (7): 1025–1078. arXiv:astro-ph/0204527. DOI:10.1088/0034-4885/65/7/201.
141. ^ Ziegler, J.F. (1998). "Terrestrial cosmic ray intensities". IBM Journal of Research and Development 42 (1): 117–139. DOI:10.1147/rd.421.0117.
142. ^ Sutton, C. (August 4, 1990). "Muons, pions and other strange particles". New Scientist. Retrieved 2008-08-28.
143. ^ Wolpert, S. (July 24, 2008). "Scientists solve 30-year-old aurora borealis mystery". University of California. Retrieved 2008-10-11.
144. ^ Gurnett, D.A.; Anderson, R. (1976). "Electron Plasma Oscillations Associated with Type III Radio Bursts". Science 194 (4270): 1159–1162. Bibcode 1976Sci...194.1159G. DOI:10.1126/science.194.4270.1159. PMID 17790910.
Quantum mechanics
For a more accessible and less technical introduction to this topic, see Introduction to quantum mechanics.
Solution to Schrödinger's equation for the hydrogen atom at different energy levels. The brighter areas represent a higher probability of finding an electron
Quantum mechanics (QM; also known as quantum physics or quantum theory) is a fundamental branch of physics which describes physical phenomena at atomic and subatomic length scales, where the action is on the order of the Planck constant. At these scales, physical objects and energy (including the photons making up visible light) behave and interact very differently than they appear to in daily life. Quantum mechanics provides an extremely accurate description of the behavior of photons, electrons, and other atomic- and molecular-scale objects. At larger (macroscopic) scales its predictions simplify to the laws of classical mechanics familiar in the everyday world, although even some phenomena visible to the naked eye cannot be explained classically and require a quantum mechanical explanation. Important applications of quantum mechanical theory include superconducting magnets, LEDs, the laser, the transistor and semiconductor devices such as the microprocessor, medical and research imaging such as MRI and electron microscopy, and explanations for many biological and physical phenomena.
Quantum mechanics takes its name from the observation that some physical quantities exist, change, and interact only by discrete amounts (in a "step-like" manner) and behave probabilistically rather than deterministically. The "steps" are so tiny that they are completely imperceptible even with a microscope, and any description must be given in terms of a wave function rather than specific particles and movements. The term "quantum" itself (plural: quanta) comes from the Latin word quantus, meaning "how much?", referring to a "packet" (or amount) of energy, momentum, or any other attribute that is quantized and can only change by discrete amounts. This tiny scale is why quantum mechanics generally reduces to classical mechanics in macroscopic situations: the vast number of quantum events involved in everyday observations means that discrete quantum behaviors are usually hidden by much larger statistical effects (similar to averaging).
Other fundamental quantum mechanical principles are wave-particle duality (quanta exhibit both "wave-like" behaviors, such as refraction, and "particle-like" behavior), the uncertainty principle (attempting to measure one attribute, such as position, may make another attribute, such as momentum, less precisely measurable), and superposition together with the status of the observer (a wave function superimposes multiple co-existing states that have different probabilities; in several interpretations, observation causes collapse of the wave function to some specific state, as in the famous example of Schrödinger's cat).
Quantum mechanics was initially developed as a field in the early 20th century, driven by the black-body radiation problem (reported 1859) and Albert Einstein's 1905 paper, which offered a quantum-based theory to explain the photoelectric effect (reported 1887). Around 1900-1910, the atomic theory and the corpuscular theory of light[1] first came to be widely accepted as scientific fact; these theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively. Early quantum theory was significantly reformulated in the mid-1920s by Werner Heisenberg, Max Born and Pascual Jordan (matrix mechanics); Louis de Broglie and Erwin Schrödinger (wave mechanics); and Wolfgang Pauli and Satyendra Nath Bose (statistics of subatomic particles). Moreover, the Copenhagen interpretation of Niels Bohr became widely accepted. By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann,[2] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the "observer". It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science, and its modern developments include quantum field theory, string theory, and speculative quantum gravity theories. It also provides a useful framework for many features of the modern periodic table of elements, describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies.
The mathematical formulations of quantum mechanics are abstract. A mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. Mathematical manipulations of the wave function usually involve bra–ket notation, which requires an understanding of complex numbers and linear functionals. The wavefunction formulation treats the particle as a quantum harmonic oscillator, and the mathematics is akin to that describing acoustic resonance. Many of the results of quantum mechanics are not easily visualized in terms of classical mechanics. For instance, in a quantum mechanical model, the lowest energy state of a system, the ground state, has non-zero energy, as opposed to a "traditional" ground state with zero kinetic energy (all particles at rest). Instead of a traditional static, unchanging zero-energy state, quantum mechanics allows for far more dynamic, chaotic possibilities, according to John Wheeler.
Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[3] In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper entitled On the nature of light and colours. This experiment played a major role in the general acceptance of the wave theory of light.
In 1838, Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[4] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy elements) precisely matched the observed patterns of black-body radiation.
In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation,[5] known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics.
Among the first to study quantum phenomena in nature were Arthur Compton, C.V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert A. Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. At the same time, Niels Bohr developed his theory of the atomic structure, which was later confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld.[6] This phase is known as old quantum theory.
According to Planck, each energy element (E) is proportional to its frequency (ν):

$E = h \nu,$

where h is Planck's constant.

Max Planck is considered the father of quantum theory.
Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[7] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a genuine discovery.[8] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. He won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete quantum of energy that was dependent on its frequency.[9]
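To make the numbers concrete, here is a short illustrative calculation of the Planck-Einstein relation and Einstein's photoelectric equation (a sketch using typical textbook constants; the sodium work function below is an assumed value, not one given in this article):

```python
# Illustrative sketch: photon energy E = h*nu and the photoelectric effect.
# The sodium work function below is an assumed, typical textbook value.
h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # joules per electron volt

wavelength = 400e-9    # 400 nm violet light
nu = c / wavelength    # frequency
E_photon = h * nu      # Planck-Einstein relation

work_function = 2.28 * eV            # assumed value for sodium
KE_max = E_photon - work_function    # Einstein's photoelectric equation

print(f"photon energy:       {E_photon / eV:.2f} eV")   # about 3.10 eV
print(f"max electron energy: {KE_max / eV:.2f} eV")     # about 0.82 eV
```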
The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien, Satyendra Nath Bose, Arnold Sommerfeld, and others. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.
It was found that subatomic particles and electromagnetic waves are neither simply particle nor wave but have certain properties of each. This originated the concept of wave–particle duality.
While quantum mechanics traditionally described the world of the very small, it is also needed to explain certain recently investigated macroscopic systems such as superconductors, superfluids, and large organic molecules.[10]
The word quantum derives from the Latin, meaning "how great" or "how much".[11] In quantum mechanics, it refers to a discrete unit assigned to certain physical quantities such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics. It underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[12] Some fundamental aspects of the theory are still actively studied.[13]
Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. If the physical nature of an atom were solely described by classical mechanics, electrons could not stably orbit the nucleus: orbiting electrons would emit radiation (due to their accelerated, circular motion) and, losing energy, would eventually spiral into and collide with the nucleus. The classical framework was thus unable to explain the stability of atoms. Instead, electrons remain in an uncertain, non-deterministic, smeared, probabilistic wave-particle orbital about the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[14]
Mathematical formulations
See also: Quantum logic
In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[15] David Hilbert,[16] John von Neumann,[17] and Hermann Weyl,[18] the possible states of a quantum mechanical system are represented by unit vectors (called state vectors). Formally, these reside in a complex separable Hilbert space (variously called the state space or the associated Hilbert space of the system) and are well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space depends on the system: for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a Hermitian (more precisely, self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues.
Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. An electron's location, for example, is described by a "probability cloud" (approximate, but better than the Bohr model): the probability of finding the electron at a given place is the squared modulus of the complex amplitude of its wave function there.[22][23] Naturally, these probabilities depend on the quantum state at the "instant" of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable ("eigen" can be translated from German as meaning "inherent" or "characteristic").[24]
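As a minimal numerical sketch of these rules (my own illustration of the general formalism; the spin-1/2 observable below is an assumed example, not a computation from this article), the state is a unit vector, the observable is a Hermitian matrix, and the probability of each outcome is the squared modulus of the amplitude along the corresponding eigenvector:

```python
import numpy as np

# Minimal sketch of the Born rule for a spin-1/2 system (assumed example).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)  # Hermitian observable
state = np.array([1, 0], dtype=complex)              # normalized state vector

eigenvalues, eigenvectors = np.linalg.eigh(sigma_x)  # eigh: for Hermitian matrices

for value, vector in zip(eigenvalues, eigenvectors.T):
    amplitude = np.vdot(vector, state)       # <eigenvector|state>
    probability = abs(amplitude) ** 2        # Born rule: squared modulus
    print(f"outcome {value:+.0f}: probability {probability:.2f}")

# Expectation value <state|sigma_x|state>; here it is 0.
print("expectation:", np.vdot(state, sigma_x @ state).real)
```

Both outcomes occur with probability 1/2 here; states for which one outcome has probability 1 are exactly the eigenstates described above.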
Mathematically equivalent formulations of quantum mechanics
There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics - matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[32]
Because Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born in its development was overlooked until he received the 1954 Nobel award. His role is recounted in a 2005 biography of Born, which describes his part in the matrix formulation of quantum mechanics and the use of probability amplitudes. Heisenberg himself acknowledged having learned matrices from Born, as published in a 1940 festschrift honoring Max Planck.[33] In the matrix formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[34] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.
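To show the flavor of the matrix formulation, the sketch below (my own illustration, in assumed units ħ = m = ω = 1) builds position and momentum as matrices in a truncated harmonic-oscillator basis; their commutator reproduces the canonical relation [x, p] = iħ everywhere except in the last diagonal entry, an artifact of truncating the infinite matrices:

```python
import numpy as np

# Matrix mechanics sketch: x and p as matrices (hbar = m = omega = 1).
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator
adag = a.conj().T                            # raising operator

x = (a + adag) / np.sqrt(2)                  # position matrix
p = 1j * (adag - a) / np.sqrt(2)             # momentum matrix

commutator = x @ p - p @ x                   # should be i * identity
# Diagonal of Im[x, p]: ones everywhere except at the truncation edge.
print(np.round(commutator.diagonal().imag, 6))
```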
Interactions with other scientific theories
Quantum mechanics and classical physics
Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[37] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[38] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[39] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.
Quantum coherence is an essential difference between classical and quantum theories, as illustrated by the Einstein–Podolsky–Rosen (EPR) paradox, an attack on a certain philosophical interpretation of quantum mechanics by an appeal to local realism.[40] Quantum interference involves adding together probability amplitudes, whereas classical "waves" add together intensities. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[41] Quantum coherence is not typically evident at macroscopic scales, though an exception to this rule may occur at extremely low temperatures (i.e. approaching absolute zero), at which quantum behavior may manifest itself macroscopically.[42] This is in accordance with the following observations:
• Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[43]
• While the seemingly "exotic" behavior of matter posited by quantum mechanics and relativity theory become more apparent when dealing with particles of extremely small size or velocities approaching the speed of light, the laws of classical, often considered "Newtonian", physics remain accurate in predicting the behavior of the vast majority of "large" objects (on the order of the size of large molecules or bigger) at velocities much smaller than the velocity of light.[44]
Copenhagen interpretation of quantum versus classical kinematics
A big difference between classical and quantum mechanics is that they use very different kinematic descriptions.[45]
In Niels Bohr's mature view, quantum mechanical phenomena are required to be described in terms of whole experiments, with complete descriptions of all the devices for the system: preparative, intermediary, and finally measuring. The descriptions are in macroscopic terms, expressed in ordinary language, supplemented with the concepts of classical mechanics.[46][47][48][49] The initial condition and the final condition of the system are respectively described by values in a configuration space, for example a position space, or some equivalent space such as a momentum space. Quantum mechanics does not admit a completely precise description, in terms of both position and momentum, of an initial condition or "state" (in the classical sense of the word) that would support a precisely deterministic and causal prediction of a final condition.[50][51] In this sense, advocated by Bohr in his mature writings, a quantum phenomenon is a process, a passage from initial to final condition, not an instantaneous "state" in the classical sense of that word.[52][53] Thus there are two kinds of processes in quantum mechanics: stationary and transitional. For a stationary process, the initial and final conditions are the same; for a transition, they are different. By definition, if only the initial condition is given, the process is not determined.[50] Given its initial condition, prediction of the final condition is possible, but only probabilistically: the Schrödinger equation evolves the wave function deterministically, but the wave function describes the system only probabilistically.[54][55]
For many experiments, it is possible to think of the initial and final conditions of the system as being a particle. In some cases it appears that there are potentially several spatially distinct pathways or trajectories by which a particle might pass from initial to final condition. It is an important feature of the quantum kinematic description that it does not permit a unique definite statement of which of those pathways is actually followed. Only the initial and final conditions are definite, and, as stated in the foregoing paragraph, they are defined only as precisely as allowed by the configuration space description or its equivalent. In every case for which a quantum kinematic description is needed, there is always a compelling reason for this restriction of kinematic precision. An example of such a reason is that for a particle to be experimentally found in a definite position, it must be held motionless; for it to be experimentally found to have a definite momentum, it must have free motion; these two are logically incompatible.[56][57]
Classical kinematics does not primarily demand experimental description of its phenomena. It allows completely precise description of an instantaneous state by a value in phase space, the Cartesian product of configuration and momentum spaces. This description simply assumes or imagines a state as a physically existing entity without concern about its experimental measurability. Such a description of an initial condition, together with Newton's laws of motion, allows a precise deterministic and causal prediction of a final condition, with a definite trajectory of passage. Hamiltonian dynamics can be used for this. Classical kinematics also allows the description of a process analogous to the initial and final condition description used by quantum mechanics. Lagrangian mechanics applies to this.[58] For processes in which the actions involved are on the order of only a few Planck constants, classical kinematics is not adequate; quantum mechanics is needed.
Relativity and quantum mechanics
Even with the defining postulates of both Einstein's theory of general relativity and quantum theory being indisputably supported by rigorous and repeated empirical evidence, and while they do not directly contradict each other theoretically (at least with regard to their primary claims), they have proven extremely difficult to incorporate into one consistent, cohesive model.[59]
Attempts at a unified field theory
Main article: Grand unified theory
The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory in competition with general relativity,[61][62] has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10^14 GeV the three aforementioned forces fuse into a single unified field.[63] Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10^19 GeV. However, while special relativity is parsimoniously incorporated into quantum electrodynamics, general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory. One of those searching for a coherent theory of everything (TOE) is Edward Witten, a theoretical physicist who formulated M-theory, an attempt at describing the supersymmetry-based string theories. M-theory posits that our apparent 4-dimensional spacetime is in reality an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are, at lower energies, completely "compactified" (or infinitely curved) and not readily amenable to measurement or probing.
Philosophical implications
Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated by society and many leading scientists. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[64] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[65]
The Copenhagen interpretation, due largely to the Danish theoretical physicist Niels Bohr, remains the interpretation of the quantum mechanical formalism most widely accepted amongst physicists, some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but must instead be considered a final renunciation of the classical idea of "causality". It also holds that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the conjugate nature of evidence obtained under different experimental situations.
Albert Einstein, himself one of the founders of quantum theory, rejected the quantum theoretical doctrine that the state of a system depends on the experimental arrangement for its measurement. He held that underlying quantum mechanics there should be a theory that thoroughly and directly expresses the rule against action at a distance; in other words, he insisted on the principle of locality. He inferred that the present theory was incomplete, contrary to the Copenhagen doctrine that it is complete. He therefore produced a series of objections, the most famous of which has become known as the Einstein–Podolsky–Rosen paradox.
John Bell showed that this "EPR" paradox led to experimentally testable differences between quantum mechanics and theories that rely on added hidden variables. Experiments have been performed confirming the accuracy of quantum mechanics, thereby demonstrating that quantum mechanics cannot be improved upon by the addition of local hidden variables.[66] The Bohr-Einstein debates provide a vibrant critique of the Copenhagen interpretation from an epistemological point of view.
Quantum mechanics has had enormous[69] success in explaining many of the features of our universe. Quantum mechanics is often the only tool available that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism).
Quantum mechanics is also critically important for understanding how individual atoms combine covalently to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Relativistic quantum mechanics can, in principle, mathematically describe most of chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others and the magnitudes of the energies involved.[70] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics.
Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information. A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Instead of using classical bits, quantum computers use qubits, which can be in superpositions of states. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances.
Quantum tunneling is vital to the operation of many devices. Even in the simple light switch, the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide without quantum tunneling. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells.
While quantum mechanics primarily applies to the smaller atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale. Superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. So is the closely related phenomenon of superconductivity, the frictionless flow of an electron gas in a conducting material (an electric current) at sufficiently low temperatures.
Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[71] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this fundamental process of plants and many other organisms.[72] Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. Since classical formulas are much simpler and easier to compute than quantum formulas, classical approximations are used and preferred when the system is large enough to render the effects of quantum mechanics insignificant.
Step potential
The potential in this case is given by

$V(x) = 0$ for $x < 0$, and $V(x) = V_0$ for $x \ge 0$,

and the stationary solutions are plane waves, $\psi(x) = A e^{i k_1 x} + B e^{-i k_1 x}$ for $x < 0$ and $\psi(x) = C e^{i k_2 x}$ for $x > 0$, where the wave vectors are related to the energy via

$k_1 = \frac{\sqrt{2mE}}{\hbar}$ and $k_2 = \frac{\sqrt{2m(E - V_0)}}{\hbar}$,

with coefficients A and B determined from the boundary conditions and by imposing a continuous derivative on the solution.
Each term of the solution can be interpreted as an incident, reflected, or transmitted component of the wave, allowing the calculation of transmission and reflection coefficients. Notably, in contrast to classical mechanics, incident particles with energies greater than the potential step are partially reflected.
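A short numerical sketch of this partial reflection (illustrative values, in assumed units ħ = m = 1; the closed-form coefficients follow from matching ψ and its derivative at the step):

```python
import numpy as np

# Step potential with E > V0 (hbar = m = 1); values are illustrative.
E, V0 = 2.0, 1.0
k1 = np.sqrt(2 * E)             # wave vector for x < 0
k2 = np.sqrt(2 * (E - V0))      # wave vector for x > 0

R = ((k1 - k2) / (k1 + k2)) ** 2       # reflection coefficient
T = 4 * k1 * k2 / (k1 + k2) ** 2       # transmission coefficient
print(f"R = {R:.4f}, T = {T:.4f}, R + T = {R + T:.4f}")
# Classically R would be 0 for E > V0; quantum mechanically R > 0.
```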
Rectangular potential barrier
This is a model for the quantum tunneling effect which plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. Quantum tunneling is central to physical phenomena involved in superlattices.
Particle in a box
1-dimensional potential energy box (or infinite potential well)
Main article: Particle in a box
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and infinite potential energy everywhere outside that region. For the one-dimensional case in the x direction, the time-independent Schrödinger equation may be written[75]

$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi.$

With the differential operator defined by

$\hat{p}_x = -i\hbar\frac{d}{dx},$

the previous equation is evocative of the classic kinetic energy analogue,

$\frac{1}{2m}\hat{p}_x^{\,2} = E,$

with state $\psi$ in this case having energy E coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are

$\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},$

or, from Euler's formula,

$\psi(x) = C \sin(kx) + D \cos(kx).$
The infinite potential walls of the box determine the values of C, D, and k at x = 0 and x = L, where ψ must be zero. Thus, at x = 0,

$\psi(0) = 0 = C\sin(0) + D\cos(0) = D,$

and D = 0. At x = L,

$\psi(L) = 0 = C\sin(kL),$

in which C cannot be zero, as this would conflict with the Born interpretation. Therefore, since sin(kL) = 0, kL must be an integer multiple of π,

$k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots,$

so that the quantized energy levels are

$E_n = \frac{\hbar^2 n^2 \pi^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.$
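As a numerical cross-check of this quantization (a sketch in assumed units ħ = m = L = 1), diagonalizing a finite-difference version of the box Hamiltonian reproduces the analytic levels:

```python
import numpy as np

# Particle in a box: finite-difference eigenvalues vs. the analytic
# spectrum E_n = n^2 pi^2 / 2 (hbar = m = L = 1, psi = 0 at the walls).
N = 800                                   # interior grid points
dx = 1.0 / (N + 1)
H = (np.diag(np.full(N, 1.0 / dx**2))     # -(1/2) psi'' discretized
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

numeric = np.linalg.eigvalsh(H)[:4]
analytic = np.array([(n * np.pi) ** 2 / 2 for n in range(1, 5)])
for En, Ea in zip(numeric, analytic):
    print(f"numeric {En:9.4f}   analytic {Ea:9.4f}")
```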
Finite potential well
Main article: Finite potential well
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth.
The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wavefunction is not pinned to zero at the walls of the well. Instead, the wavefunction must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well.
Harmonic oscillator
This problem can be treated either by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by

$\psi_n(x) = \frac{1}{\sqrt{2^n\, n!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}}\, H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\, x\right), \qquad n = 0, 1, 2, \ldots,$

where Hn are the Hermite polynomials, and the corresponding energy levels are

$E_n = \hbar\omega\left(n + \tfrac{1}{2}\right).$
This is another example illustrating the quantization of energy for bound states.
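A compact numerical sketch of the ladder method (assumed units ħ = m = ω = 1): building H = a†a + 1/2 from the lowering operator immediately yields the evenly spaced spectrum:

```python
import numpy as np

# Harmonic oscillator via ladder operators (hbar = m = omega = 1):
# H = adag @ a + 1/2, so the eigenvalues are E_n = n + 1/2.
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # lowering operator
H = a.conj().T @ a + 0.5 * np.eye(N)

print(np.linalg.eigvalsh(H))   # [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
```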
References
1. ^ Ben-Menahem, Ari (2009). Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1. Springer. p. 3678. ISBN 3540688315. Extract of page 3678.
2. ^ van Hove, Leon (1958). "Von Neumann's contributions to quantum mechanics" (PDF). Bulletin of the American Mathematical Society 64: Part2:95–99. doi:10.1090/s0002-9904-1958-10206-2.
5. ^ Kragh, Helge (2002). Quantum Generations: A History of Physics in the Twentieth Century. Princeton University Press. p. 58. ISBN 0-691-09552-3. Extract of page 58.
6. ^ E Arunan (2010). "Peter Debye" (PDF). Resonance (journal) (Indian Academy of Sciences) 15 (12).
10. ^ "Quantum interference of large organic molecules". Retrieved April 20, 2013.
11. ^ "Quantum - Definition and More from the Free Merriam-Webster Dictionary". Retrieved 2012-08-18.
12. ^
13. ^ Compare the list of conferences presented here
14. ^ at the Wayback Machine (archived October 26, 2009)[dead link]
16. ^ D. Hilbert Lectures on Quantum Theory, 1915–1927
20. ^ "Heisenberg - Quantum Mechanics, 1925–1927: The Uncertainty Relations". Retrieved 2012-08-18.
22. ^ "[Abstract] Visualization of Uncertain Particle Movement". Retrieved 2012-08-18.
24. ^
25. ^ "Topics: Wave-Function Collapse". 2012-07-27. Retrieved 2012-08-18.
26. ^ "Collapse of the wave-function". Retrieved 2012-08-18.
27. ^ "Determinism and Naive Realism : philosophy". 2009-06-01. Retrieved 2012-08-18.
28. ^ Michael Trott. "Time-Evolution of a Wavepacket in a Square Well — Wolfram Demonstrations Project". Retrieved 2010-10-15.
29. ^ Michael Trott. "Time Evolution of a Wavepacket In a Square Well". Retrieved 2010-10-15.
32. ^ [1][dead link]
34. ^
39. ^ "Quantum mechanics course iwhatisquantummechanics". 2008-09-14. Retrieved 2012-08-18.
42. ^ (see macroscopic quantum phenomena, Bose–Einstein condensate, and Quantum machine)
43. ^ "Atomic Properties". Retrieved 2012-08-18.
44. ^
45. ^ Born, M., Heisenberg, W., Jordan, P. (1926). Z. Phys. 35: 557–615. Translated as 'On quantum mechanics II', pp. 321–385 in Van der Waerden, B.L. (1967), Sources of Quantum Mechanics, North-Holland, Amsterdam, "The basic difference between the theory proposed here and that used hitherto ... lies in the characteristic kinematics ...", p. 385.
46. ^ Dirac, P.A.M. (1930/1958). The Principles of Quantum Mechanics, fourth edition, Oxford University Press, Oxford UK, p. 5: "A question about what will happen to a particular photon under certain conditions is not really very precise. To make it precise one must imagine some experiment performed having a bearing on the question, and enquire what will be the result of the experiment. Only questions about the results of experiments have a real significance and it is only such questions that theoretical physics has to consider."
47. ^ Bohr, N. (1939). The Causality Problem in Atomic Physics, in New Theories in Physics, Conference organized in collaboration with the International Union of Physics and the Polish Intellectual Co-operation Committee, Warsaw, May 30th – June 3rd 1938, International Institute of Intellectual Co-operation, Paris, 1939, pp. 11–30, reprinted in Niels Bohr, Collected Works, volume 7 (1933 – 1958) edited by J. Kalckar, Elsevier, Amsterdam, ISBN 0-444-89892-1, pp. 303–322. "The essential lesson of the analysis of measurements in quantum theory is thus the emphasis on the necessity, in the account of the phenomena, of taking the whole experimental arrangement into consideration, in complete conformity with the fact that all unambiguous interpretation of the quantum mechanical formalism involves the fixation of the external conditions, defining the initial state of the atomic system and the character of the possible predictions as regards subsequent observable properties of that system. Any measurement in quantum theory can in fact only refer either to a fixation of the initial state or to the test of such predictions, and it is first the combination of both kinds which constitutes a well-defined phenomenon."
48. ^ Bohr, N. (1948). On the notions of complementarity and causality, Dialectica 2: 312–319. "As a more appropriate way of expression, one may advocate limitation of the use of the word phenomenon to refer to observations obtained under specified circumstances, including an account of the whole experiment."
49. ^ Ludwig, G. (1987). An Axiomatic Basis for Quantum Mechanics, volume 2, Quantum Mechanics and Macrosystems, translated by K. Just, Springer, Berlin, ISBN 978-3-642-71899-1, Chapter XIII, Special Structures in Preparation and Registration Devices, §1, Measurement chains, p. 132.
50. ^ a b Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172–198. Translation as 'The actual content of quantum theoretical kinematics and mechanics' here, "But in the rigorous formulation of the law of causality, — "If we know the present precisely, we can calculate the future" — it is not the conclusion that is faulty, but the premise."
51. ^ Green, H.S. (1965). Matrix Mechanics, with a foreword by Max Born, P. Noordhoff Ltd, Groningen. "It is not possible, therefore, to provide 'initial conditions' for the prediction of the behaviour of atomic systems, in the way contemplated by classical physics. This is accepted by quantum theory, not merely as an experimental difficulty, but as a fundamental law of nature", p. 32.
52. ^ Rosenfeld, L. (1957). Misunderstandings about the foundations of quantum theory, pp. 41–45 in Observation and Interpretation, edited by S. Körner, Butterworths, London. "A phenomenon is therefore a process (endowed with the characteristic quantal wholeness) involving a definite type of interaction between the system and the apparatus."
53. ^ Dirac, P.A.M. (1973). Development of the physicist's conception of nature, pp. 1–55 in The Physicist's Conception of Nature, edited by J. Mehra, D. Reidel, Dordrecht, ISBN 90-277-0345-0, p. 5: "That led Heisenberg to his really masterful step forward, resulting in the new quantum mechanics. His idea was to build up a theory entirely in terms of quantities referring to two states."
54. ^ Born, M. (1927). Physical aspects of quantum mechanics, Nature 119: 354–357, "These probabilities are thus dynamically determined. But what the system actually does is not determined ..."
55. ^ Messiah, A. (1961). Quantum Mechanics, volume 1, translated by G.M. Temmer from the French Mécanique Quantique, North-Holland, Amsterdam, p. 157.
56. ^ Bohr, N. (1928). The Quantum postulate and the recent development of atomic theory, Nature 121: 580–590.
57. ^ Heisenberg, W. (1930). The Physical Principles of the Quantum Theory, translated by C. Eckart and F.C. Hoyt, University of Chicago Press.
58. ^ Goldstein, H. (1950). Classical Mechanics, Addison-Wesley, ISBN 0-201-02510-8.
60. ^ Stephen Hawking; Gödel and the end of physics
61. ^ Excerpt from an article by Roger Penrose
62. ^ "Life on the lattice: The most accurate theory we have". 2005-06-03. Retrieved 2010-10-15.
65. ^ Weinberg, S. "Collapse of the State Vector", Phys. Rev. A 85, 062116 (2012).
66. ^ "Action at a Distance in Quantum Mechanics (Stanford Encyclopedia of Philosophy)". 2007-01-26. Retrieved 2012-08-18.
67. ^ "Everett's Relative-State Formulation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Retrieved 2012-08-18.
70. ^ Introduction to Quantum Mechanics with Applications to Chemistry - Linus Pauling, E. Bright Wilson. 1985-03-01. ISBN 9780486648712. Retrieved 2012-08-18.
72. ^ "Quantum mechanics boosts photosynthesis". Retrieved 2010-10-23.
74. ^ Baofu, Peter (2007-12-31). The Future of Complexity: Conceiving a Better Way to Understand Order and Chaos. ISBN 9789812708991. Retrieved 2012-08-18.
75. ^ Derivation of particle in a box,
We know that the operator

$H = -\hbar^{2} \frac{d^{2}}{dx^{2}} + V(x)$

is Hermitian, isn't it? However, what would happen if the potential were still real but depended on the wave function, for example (hypothetically)

(1) $V(x) = |\Psi(x)|^{2}$ or (2) $V(x) = \arg \Psi(x)$?

Since the functions $|x|$ and $\arg(x+iy)$ are always real, the Hamiltonians with potentials (1) and (2) should be Hermitian, but they depend on the solution, so I am not sure about the Hermiticity of a Hamiltonian of the form $H = p^{2} + |\Psi(x)|^{2}$. I believe that a real potential makes the Hamiltonian Hermitian even in the case that we do not know exactly what the potential is.
That would give a nonlinear equation. Quantum mechanics on the other hand assumes that all observables (and hence also the time, and space evolution operators) be linear operators on the space of physical states. – user10001 Jul 16 '12 at 15:20
One problem could be that by restricting your operator to be real you can't differentiate it with respect to $\Psi(x)$ and thus also not with respect to $x$. – Laar Jul 16 '12 at 15:35
1 Answer
Since the momentum operator is Hermitian, its square, the first expression, is Hermitian too. Operators of this Laplacian type generally have this property.

If $V(x)$ is a real function, then it is Hermitian, as you say. If the Hamiltonian contains some function $|\Phi(x)|^2$, then a priori this is just the previous case. If you mean this to be the respective wave function itself, then the operator is more of a functional. In that case the associated differential equation is certainly not linear, so you leave the standard framework and terminology. The operator as such will still behave as Hermitian with respect to the scalar product, so the answer is a cautious yes. As a remark, the function involving the argument of the wave function is pretty unphysical, as the phase is exactly the quantity that should be undetectable.

There are theories which have such a non-linear structure; see for example the Nonlinear Schrödinger equation, although notice the integral in the Hamiltonian. On that page you also find a link to the quantized version (the Schrödinger field) of the model you constructed as a special case.
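To illustrate this numerically, here is a rough self-consistent sketch of my own (not from the thread): a Gross-Pitaevskii-type problem with $H[\Psi] = p^2 + x^2 + g|\Psi|^2$ in assumed units $\hbar = 2m = 1$. At each iteration the matrix built from the current $\Psi$ is real symmetric, hence Hermitian, even though the eigenvalue problem as a whole is nonlinear:

```python
import numpy as np

# Self-consistent sketch (own construction): H[psi] = -d^2/dx^2 + x^2 + g|psi|^2,
# units hbar = 2m = 1. At fixed psi, H is a real symmetric (Hermitian) matrix.
N, L, g = 400, 20.0, 5.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

K = (np.diag(np.full(N, 2.0 / dx**2))        # -d^2/dx^2 by finite differences
     - np.diag(np.ones(N - 1) / dx**2, 1)
     - np.diag(np.ones(N - 1) / dx**2, -1))

psi = np.exp(-x**2 / 2)
psi /= np.sqrt(np.sum(psi**2) * dx)          # normalize the initial guess

for _ in range(40):
    H = K + np.diag(x**2 + g * psi**2)       # Hermitian at every iteration
    E, vecs = np.linalg.eigh(H)
    new = vecs[:, 0]
    if new[N // 2] < 0:                      # fix the arbitrary overall sign
        new = -new
    new /= np.sqrt(np.sum(new**2) * dx)
    psi = 0.5 * psi + 0.5 * new              # damped update for stability
    psi /= np.sqrt(np.sum(psi**2) * dx)

print("H Hermitian at fixed psi:", np.allclose(H, H.T))
print("self-consistent ground-state eigenvalue:", E[0])
```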
How Particles Tunnel Through Potential Barriers That Have Greater Energy
When a particle doesn't have as much energy as the potential of a barrier, you can use the Schrödinger equation to find the probability that the particle will tunnel through the barrier's potential. You can also find the reflection and transmission coefficients, R and T, as well as calculate the transmission coefficient using the Wentzel-Kramers-Brillouin (WKB) approximation.
Here's how it works: When a particle doesn't have as much energy as the potential of the barrier, you're facing the situation shown in the following figure.
A potential barrier with E < V0.
In this case, the Schrödinger equation looks like this:

$\frac{d^2\psi(x)}{dx^2} + k^2\,\psi(x) = 0$ for x < 0 and x > a, with $k = \frac{\sqrt{2mE}}{\hbar}$,

$\frac{d^2\psi(x)}{dx^2} - \kappa^2\,\psi(x) = 0$ for 0 ≤ x ≤ a, with $\kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar}$.
All this means that the solutions for $\psi(x)$ are the following:

$\psi_1(x) = A e^{ikx} + B e^{-ikx}$ for x < 0,

$\psi_2(x) = C e^{\kappa x} + D e^{-\kappa x}$ for 0 ≤ x ≤ a,

$\psi_3(x) = F e^{ikx}$ for x > a.
This situation is similar to the case where E > V0, except for the region 0 ≤ x ≤ a. The wave function oscillates in the regions where it has positive kinetic energy, x < 0 and x > a, but is a decaying exponential in the region 0 ≤ x ≤ a.
The probability density, $|\psi(x)|^2$, oscillates on both sides of the barrier and decays exponentially inside it; it is smaller, but nonzero, on the far side, which is the signature of tunneling.
How to find the reflection and transmission coefficients
How about the reflection and transmission coefficients, R and T? Here's what they equal:

$R = \frac{|B|^2}{|A|^2}, \qquad T = \frac{|F|^2}{|A|^2}.$
As you may expect, you use the continuity conditions to determine A, B, and F:
A fair bit of algebra and trig is involved in solving for R and T; here's what R and T turn out to be:

$T = \left[ 1 + \frac{V_0^2 \sinh^2(\kappa a)}{4E\,(V_0 - E)} \right]^{-1}, \qquad R = 1 - T.$
Despite the equation's complexity, it's amazing that the expression for T can be nonzero. Classically, particles can't enter the forbidden zone 0 ≤ x ≤ a, because E < V0, where V0 is the potential in that region; they just don't have enough energy to make it into that area.
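Evaluating the reconstructed formula above for sample values (illustrative only, in assumed units ħ = m = 1) shows a small but strictly nonzero tunneling probability:

```python
import numpy as np

# Tunneling through a square barrier of height V0 and width a, with E < V0
# (hbar = m = 1; the values below are illustrative).
E, V0, a = 0.5, 1.0, 2.0

kappa = np.sqrt(2 * (V0 - E))
T = 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a) ** 2) / (4 * E * (V0 - E)))
R = 1.0 - T
print(f"T = {T:.4e}, R = {R:.4f}")   # classically T would be exactly 0
```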
How particles tunnel through regions
Quantum mechanically, the phenomenon where particles can get through regions that they're classically forbidden to enter is called tunneling. Tunneling is possible because in quantum mechanics, particles show wave properties.
Tunneling is one of the most exciting results of quantum physics — it means that particles can actually get through classically forbidden regions because of the spread in their wave functions. This is, of course, a microscopic effect — don't try to walk through any closed doors — but it's a significant one. Among other effects, tunneling makes transistors and integrated circuits possible.
You can calculate the transmission coefficient, which tells you the probability that a particle gets through, given a certain incident intensity, when tunneling is involved. Doing so is relatively easy in the above example because the barrier that the particle has to get through is a square barrier. But in general, calculating the transmission coefficient isn't so easy. Read on.
How to find the transmission coefficient with the WKB approximation
The way you generally calculate the transmission coefficient is to break up the potential you're working with into a succession of square barriers and to sum them. That's called the Wentzel-Kramers-Brillouin (WKB) approximation — treating a general potential, V(x), as a sum of square potential barriers.
The result of the WKB approximation is that the transmission coefficient for an arbitrary potential, V(x), for a particle of mass m and energy E is given by this expression (that is, as long as V(x) is a smooth, slowly varying function):

$T \approx e^{-2\gamma}, \qquad \gamma = \frac{1}{\hbar} \int_{x_1}^{x_2} \sqrt{2m\,(V(x) - E)}\; dx.$

In this equation, x1 and x2 are the classical turning points, where V(x) = E, and the integral runs over the classically forbidden region between them.
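Under the stated assumptions (a smooth, slowly varying V(x)), the WKB factor can be evaluated numerically; the parabolic barrier below is an assumed example, not one taken from this article:

```python
import numpy as np

# WKB estimate T ~ exp(-2*gamma), with gamma the integral of
# sqrt(2m(V(x)-E))/hbar over the classically forbidden region.
# Assumed example: parabolic barrier V(x) = V0*(1-(x/b)^2), hbar = m = 1.
V0, b, E = 2.0, 3.0, 1.0

x = np.linspace(-b, b, 100001)
V = V0 * (1 - (x / b) ** 2)
integrand = np.sqrt(np.clip(2 * (V - E), 0.0, None))   # zero where V <= E

gamma = np.sum(integrand) * (x[1] - x[0])               # simple Riemann sum
print(f"gamma = {gamma:.4f}, T ~ {np.exp(-2 * gamma):.4e}")
```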
So now you can amaze your friends by calculating the probability that a particle will tunnel through an arbitrary potential. It's the stuff science fiction is made of — well, on the microscopic scale, anyway.
The thing about such games is that there is an “internal universe”, in which Lara interacts with other game elements, and occasionally is killed by them, and an “external universe”, where the computer or console running the game, together with the human who is playing the game, resides. While the game is running, these two universes run more or less in parallel; but there are certain operations, notably the “save game” and “restore game” features, which disrupt this relationship. These operations are utterly mundane to people like us who reside in the external universe, but it is an interesting thought experiment (which others have also proposed :-) ) to view them from the perspective of someone like Lara, in the internal universe. (I will eventually try to connect this with quantum mechanics, but please be patient for now.) Of course, for this we will need to presume that the Tomb Raider game is so advanced that Lara has levels of self-awareness and artificial intelligence which are comparable to our own.
Imagine first that Lara is about to navigate a tricky rolling boulder puzzle, when she hears a distant rumbling sound – the sound of her player saving her game to disk. Let us suppose that what happens next (from the perspective of the player) is the following: Lara navigates the boulder puzzle but fails, being killed in the process; then the player restores the game from the save point and then Lara successfully makes it through the boulder puzzle.
Now, how does the situation look from Lara’s point of view? At the save point, Lara’s reality diverges into a superposition of two non-interacting paths, one in which she dies in the boulder puzzle, and one in which she lives. (Yes, just like that cat.) Her future becomes indeterministic. If she had consulted with an infinitely prescient oracle before reaching the save point as to whether she would survive the boulder puzzle, the only truthful answer this oracle could give is “50% yes, and 50% no”.
This simple example shows that the internal game universe can become indeterministic, even though the external one might be utterly deterministic. However, this example does not fully capture the weirdness of quantum mechanics, because in each one of the two alternate states Lara could find herself in (surviving the puzzle or being killed by it), she does not experience any effects from the other state at all, and could reasonably assume that she lives in a classical, deterministic universe.
So, let’s make the game a bit more interesting. Let us assume that every time Lara dies, she leaves behind a corpse in that location for future incarnations of Lara to encounter. (This type of feature was actually present in another game I used to play, back in the day.) Then Lara will start noticing the following phenomenon (assuming she survives at all): whenever she navigates any particularly tricky puzzle, she usually encounters a number of corpses which look uncannily like herself. This disturbing phenomenon is difficult to explain to Lara using a purely classical deterministic model of reality; the simplest (and truest) explanation that one can give her is a “many-worlds” interpretation of reality, and that the various possible states of Lara’s existence have some partial interaction with each other. Another valid (and largely equivalent) explanation would be that every time Lara passes a save point to navigate some tricky puzzle, Lara’s “particle-like” existence splits into a “wave-like” superposition of Lara-states, which then evolves in a complicated way until the puzzle is resolved one way or the other, at which point Lara’s wave function “collapses” in a non-deterministic fashion back to a particle-like state (which is either entirely alive or entirely dead).
Now, in the real world, it is only microscopic objects such as electrons which seem to exhibit this quantum behaviour; macroscopic objects, such as you and I, do not directly experience the kind of phenomena that Lara does, and we cannot interview individual electrons to find out their stories either. Nevertheless, by studying the statistical behaviour of large numbers of microscopic objects we can indirectly infer their quantum nature via experiment and theoretical reasoning. Let us again use the Tomb Raider analogy to illustrate this. Suppose now that Tomb Raider does not only have Lara as the main heroine, but in fact has a large number of playable characters, who explore a large number of deadly tombs, often with fatal effect (and thus leading to multiple game restores). Let us suppose that inside this game universe there is also a scientist (let's call her Jacqueline) who studies the behaviour of these adventurers going through the tombs, but does not experience the tombs directly, nor does she actually communicate with any of these adventurers. Each tomb is explored by only one adventurer; regardless of whether she lives or dies, the tomb is considered "used up".
Jacqueline observes several types of trapped tombs in her world, and gathers data as to how likely an adventurer is to survive any given type of tomb. She learns that each type of tomb has a fixed survival rate – e.g. a tomb of type A has a 20% survival rate, while a tomb of type B has a 50% survival rate – but that it seems impossible to predict with any certainty whether any given adventurer will survive any given type of tomb. So far, this is something which could be explained classically; each tomb may have a certain number of lethal traps in them, and whether an adventurer survives these traps or not may entirely be due to random chance.
But then Jacqueline encounters a mysterious “quantisation” phenomenon: the survival rate for various tombs are always one of the following numbers:
$100\%, 50\%, 33.3\ldots\%, 25\%, 20\%, \ldots;$
in other words, the “frequency” of success for a tomb is always of the form 1/n for some integer n. This phenomenon would be difficult to explain in a classical universe, since the effects of random chance should be able to produce a continuum of survival probabilities.
Here’s what is going on. In order for Lara (say) to survive a tomb of a given type, she needs to stack together a certain number of corpses together to reach a certain switch; if she cannot attain that level of “constructive interference” to reach that switch, she dies. The type of tomb determines exactly how many corpses are needed – suppose for instance that a tomb of type A requires four corpses to be stacked together. Then the player who is playing Lara will have to let her die four times before she can successfully get through the tomb; and so from her perspective, Lara’s chances of survival are only 20%. In each possible state of the game universe, there is only one Lara which goes into the tomb, who either lives or dies; but her survival rate here is what it is because of her interaction with other states of Lara (which Jacqueline cannot see directly, as she does not actually enter the tomb).
A familiar example of this type of quantum effect is the fact that each atom (e.g. sodium or neon) can only emit certain wavelengths of light (which end up being quantised somewhat analogously to the survival probabilities above); for instance, sodium only emits yellow light, neon emits blue, and so forth. The electrons in such atoms, in order to emit such light, are in some sense clambering over skeletons of themselves to do so; the more commonly given explanation is that the electron is behaving like a wave within the confines of an atom, and thus can only oscillate at certain frequencies (similarly to how a plucked string of a musical instrument can only exhibit a certain set of wavelengths, which incidentally are also proportional to 1/n for integer n). Mathematically, this “quantisation” of frequency can be computed using the bound states of a Schrödinger operator with potential. (Now, I am not going to try to stretch the Tomb Raider analogy so far as to try to model the Schrödinger equation! In particular, the complex phase of the wave function – which is a fundamental feature of quantum mechanics – is not easy at all to motivate in a classical setting, despite some brave attempts.)
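(For readers who want to see this quantisation concretely, here is a small numerical sketch of my own, not part of the original analogy: diagonalizing a finite-difference Schrödinger operator with a well potential, in assumed units $\hbar = 2m = 1$, yields only a handful of discretely spaced bound-state energies, the analogue of the discrete emission lines.)

```python
import numpy as np

# Bound states of -psi'' + V(x) psi = E psi (hbar = 2m = 1) for a finite
# square well; only a discrete, finite set of negative energies appears.
N, L = 1000, 30.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = np.where(np.abs(x) < 2.0, -5.0, 0.0)   # well of depth 5 and half-width 2

H = (np.diag(np.full(N, 2.0 / dx**2) + V)
     - np.diag(np.ones(N - 1) / dx**2, 1)
     - np.diag(np.ones(N - 1) / dx**2, -1))

E = np.linalg.eigvalsh(H)
print("bound-state energies:", E[E < 0])   # a few discrete values only
```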
The last thing we’ll try to get the Tomb Raider analogy to explain is why microscopic objects (such as electrons) experience quantum effects, but macroscopic ones (or even mesoscopic ones, such as large molecues) seemingly do not. Let’s assume that Tomb Raider is now a two-player co-operative game, with two players playing two characters (let’s call them Lara and Indiana) as they simultaneously explore different parts of their world (e.g. via a split-screen display). The players can choose to save the entire game, and then restore back to that point; this resets both Lara and Indiana back to the state they were in at that save point.
Now, this game still has the strange feature of corpses of Lara and Indiana from previous games appearing in later ones. However, we assume that Lara and Indiana are entangled in the following way: if Lara is in tomb A and Indiana is in tomb B, then Lara and Indiana can each encounter corpses of their respective former selves, but only if both Lara and Indiana died in tombs A and B respectively in a single previous game. If in a previous game, Lara died in tomb A and Indiana died in tomb C, then this time round, Lara will not see any corpse (and of course, neither will Indiana). (This entanglement can be described a bit better by using tensor products: rather than saying that Lara died in A and Indiana died in B, one should instead think of \hbox{Lara } \otimes \hbox{ Indiana} dying in \left|A\right> \otimes \left|B\right>, which is a state which is orthogonal to \left|A\right> \otimes \left|C\right>.) With this type of entanglement, one can see that there is going to be significantly less “quantum weirdness” going on; Lara and Indiana, adventuring separately but simultaneously, are going to encounter far fewer corpses of themselves than Lara adventuring alone would. And if there were many many adventurers entangled together exploring simultaneously, the quantum effects drop to virtually nothing, and things now look classical unless the adventurers are somehow organised to “resonate” in a special way.
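The orthogonality claim in the parenthetical can be checked numerically. Below is a small sketch of my own using NumPy’s Kronecker product for the tensor product; the state labels are the hypothetical tomb outcomes from the story:

```python
import numpy as np

lara_A = np.array([1.0, 0.0])   # Lara died in tomb A
indy_B = np.array([1.0, 0.0])   # Indiana died in tomb B
indy_C = np.array([0.0, 1.0])   # Indiana died in tomb C (orthogonal to B)

AB = np.kron(lara_A, indy_B)    # joint history |A> ⊗ |B>
AC = np.kron(lara_A, indy_C)    # joint history |A> ⊗ |C>

print(np.dot(AB, AC))           # 0.0: <A⊗B|A⊗C> = <A|A><B|C> = 0
# orthogonal joint histories leave no corpses in each other's games
```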
One might be able to use Tomb Raider to try to understand other unintuitive aspects of quantum mechanics, but I think I’ve already pushed the analogy far beyond the realm of reasonableness, and so I’ll stop here. :-) |
cc229c35df5e96b0 | Atomic Physics
Atomic physics
The study of the structure of the atom, its dynamical properties, including energy states, and its interactions with particles and fields. These are almost completely determined by the laws of quantum mechanics, with very refined corrections required by quantum electrodynamics. Despite the enormous complexity of most atomic systems, in which each electron interacts with both the nucleus and all the other orbiting electrons, the wavelike nature of particles, combined with the Pauli exclusion principle, results in an amazingly orderly array of atomic properties. These are systematized by the Mendeleev periodic table. In addition to their classification by chemical activity and atomic weight, the various elements of this table are characterized by a wide variety of observable properties. These include electron affinity, polarizability, angular momentum, multipole electric moments, and magnetism. See Quantum electrodynamics, Quantum mechanics
Each atomic element, normally found in its ground state (that is, with its electron configuration corresponding to the lowest state of total energy), can also exist in an infinite number of excited states. These are also ordered in accordance with relatively simple hierarchies determined by the laws of quantum mechanics. The most characteristic signature of these various excited states is the radiation emitted or absorbed when the atom undergoes a transition from one state to another. The systemization and classification of atomic energy levels (spectroscopy) has played a central role in developing an understanding of atomic structure.
Atomic Physics
the branch of physics in which the structure and states of atoms are studied. Atomic physics arose at the turn of the 20th century. In the second decade of the 20th century it was established that the atom consisted of a nucleus and electrons bound together by electrical forces. In the first phase of its development, atomic physics also included problems associated with the structure of the atomic nucleus. In the 1930’s it was shown that the interactions which occur in the atomic nucleus were of a nature different from those which occur in the outer shell of the atom, and in the 1940’s nuclear physics branched off into an independent scientific discipline. In the 1950’s the physics of elementary particles—high-energy physics—also developed as an independent branch.
Early history: Study of atoms in the 17th to 19th centuries. Hypotheses concerning the existence of atoms as indivisible particles arose as early as antiquity; the ideas of atomism were first stated by the ancient Greek thinkers Democritus and Epicurus. In the 17th century these ideas were revived by the French philosopher P. Gassendi and the English chemist R. Boyle.
The concepts of atoms that prevailed in the 17th and 18th centuries were poorly defined. Atoms were considered absolutely indivisible and immutable solid particles whose different types are distinguished by size and form. Combinations of atoms in one or another order produce various substances, and the motions of atoms determine all phenomena that take place in matter. I. Newton, M. V. Lomonosov, and certain other scientists supposed that atoms could combine into more complex particles—“corpuscles.” However, specific chemical and physical properties were not attributed to atoms. The study of atoms still had an abstract, natural-philosophical character.
In the late 18th and early 19th centuries, as a result of the rapid development of chemistry, a basis for the quantitative treatment of the study of atoms was created. The English scientist J. Dalton was the first (1803) to consider the atom as the smallest particle of a chemical element, distinguished from atoms of other elements by its mass. According to Dalton, the basic characteristic of the atom is its atomic mass. Chemical compounds are a collection of “combined atoms” which contain a specific (characteristic for a given complex substance) number of atoms of each element. All chemical reactions are mere regroupings of atoms into new compound particles. Starting from these assumptions Dalton formulated his law of multiple proportions. The investigations of the Italian scientists A. Avogadro (1811) and, in particular, S. Cannizzaro (1858), drew a sharp line between the atom and the molecule. In the 19th century the optical, as well as the chemical, properties of atoms were studied. It was established that each element had a characteristic optical spectrum; spectral analysis was discovered by the German physicists G. Kirchhoff and R. Bunsen in 1860.
In this manner the atom appeared as a qualitatively unique particle of matter, characterized by strictly defined physical and chemical properties. But the properties of the atom were considered eternal and inexplicable. It was assumed that the number of types of atoms (chemical elements) was random and that there was no connection between them. However, it was gradually ascertained that there were groups of elements which had the same chemical properties, the same maximum valence, and comparable laws of variation (in the transition from one group to another) of physical properties—that is, melting point, compressibility, and so on. In 1869, D. I. Mendeleev discovered the periodic system of elements. He showed that the chemical and physical properties of the elements were periodically repeated with an increase in atomic mass (see Figures 1 and 2).
Figure 1. Periodic dependence of atomic volume on atomic number
The periodic system demonstrated the existence of relationships between the different types of atoms. This suggested the conclusion that the atom has a complex structure that varies with atomic mass. The problem of the discovery of atomic structure became the most important problem in chemistry and physics.
Origin of atomic physics. The most important events in science, from which the beginning of atomic physics followed, were the discoveries of the electron and radioactivity. In the investigation of the flow of electric current through highly rarefied gases, rays were discovered which were emitted by the cathode of the discharge tube (cathode rays) and which had the property of being deflected in transverse electric and magnetic fields. It was ascertained that these rays consist of rapidly moving, negatively charged particles called electrons. In 1897 the English physicist J. J. Thomson measured the ratio of the charge e of these particles to their mass m. It was also discovered that metals, upon intense heating or illumination by light of short wavelength, emit electrons. From this it was concluded that electrons are part of all atoms. Hence, it followed that neutral atoms must also contain positively charged particles. Positively charged atoms (ions) were in fact discovered in the investigation of electrical discharges in rarefied gases. The representation of the atom as a system of charged particles explained, according to the theory of the Dutch physicist H. Lorentz, the very possibility of radiation of light by the atom: electromagnetic radiation arises with the oscillation of intra-atomic charges. This was verified in the study of the influence of a magnetic field on atomic spectra. It was found that the ratio of the charge of intra-atomic electrons to their mass, e/m, obtained by Lorentz in his theory of the Zeeman effect, is exactly equal to the value of e/m for free electrons found in Thomson’s experiments. The theory of electrons and its experimental verification yielded indisputable proof of the complexity of the atom.
The representation of the indivisibility and immutability of the atom was finally disproved by the work of the French scientists M. Skłodowska Curie and P. Curie. As a result of the investigation of radioactivity it was established by F. Soddy that atoms undergo transmutations of two types. Having emitted an alpha-particle (an ion of helium with positive charge 2e), the atom of a radioactive chemical element is transmuted into an atom of another element, located in the periodic system two positions to the left—that is, a polonium atom becomes a lead atom. Having emitted a beta-particle (electron) with negative charge −e, an atom of a radioactive chemical element is transmuted into an atom of the element located one position to the right—that is, a bismuth atom becomes polonium. The mass of an atom formed as a result of such transmutations is sometimes found to be different from the atomic weight of the element into whose position it transferred. This indicated the existence of varieties of atoms of the same chemical element with different masses; these varieties were given the name isotopes (that is, atoms that occupy the same place in Mendeleev’s table). Thus, the concept of the absolute identity of all atoms of a given chemical element proved to be incorrect.
Figure 2. Periodic dependence on atomic number of (1) the quantity (1/T)·10⁴, where T is the melting point; (2) the coefficient of linear expansion α·10⁵; (3) the compressibility factor K·10⁶
The results of the investigation of the properties of the electron and radioactivity permitted the construction of detailed models of the atom. In the model proposed by Thomson in 1903, the atom was represented in the form of a positively charged sphere in which were distributed small (in comparison with the atom) negative electrons (see Figure 3).
Figure 3. Thomson’s model of the atom. The points denote electrons embedded in a positively charged sphere.
They were held in the atom because the forces attracting them to the distributed positive charge were balanced by their forces of mutual repulsion. The Thomson model gave a generally recognized explanation of the possibility of emission, scattering, and absorption of light by the atom. In the displacement of electrons from positions of equilibrium an “elastic” force arose, striving to restore equilibrium; this force is proportional to the electron’s displacement from the equilibrium position and, consequently, to the dipole moment of the atom. Under the influence of the electric forces of the incident electromagnetic wave, the electrons in the atom oscillate at the same frequency as does the electrical field strength in the light wave; the oscillating electrons, in turn, emit light at the same frequency. The scattering of electromagnetic waves by the atoms of matter occurs in this manner. From the degree of weakening of the light beam in a bulk of matter, it is possible to determine the total number of scattering electrons, and knowing the number of atoms per unit volume, it is possible to find the number of electrons in each atom.
Formulation of the Rutherford planetary model of the atom. Thomson’s model of the atom proved to be unsatisfactory. On the basis of the model it was not possible to explain the completely unexpected result of the experiments of the English physicist E. Rutherford and his co-workers H. Geiger and E. Marsden on the scattering of alpha-particles by atoms. In these experiments, fast alpha-particles were used for the direct probing of atoms. Passing through matter, the alpha-particles collide with atoms. In each collision the alpha-particle, traveling through the electrical field of the atom, changes its direction of motion—that is, it undergoes scattering. In the overwhelming majority of scattering events, the deflections of alpha-particles (scattering angles) were very small. Therefore, upon passage of a beam of alpha-particles through a thin layer of matter, only a slight broadening of the beam took place. However, a very small fraction of the alpha-particles was deflected through angles greater than 90°. This result could not be explained on the basis of the Thomson model because the electrical field in a “solid” atom would not be sufficiently strong to deflect a fast and massive alpha-particle through a large angle. In order to explain the results of experiments on the scattering of alpha-particles, Rutherford proposed a model of the atom that was new in principle and resembled the structure of the solar system; it came to be called the planetary model. It had the following form. In the center of the atom is a positively charged nucleus whose dimensions (∼10⁻¹² cm) are very small in comparison with the dimensions of the atom (∼10⁻⁸ cm) but whose mass is almost equal to the mass of the atom. Around the nucleus move electrons, similar to the movement of the planets around the sun; the number of electrons in an uncharged (neutral) atom is such that their total negative charge compensates (neutralizes) the positive charge of the nucleus. The electrons must move around the nucleus, or they would fall into it under the influence of the attractive forces. The difference between the atom and the planetary system consisted in the fact that gravitational forces operate in the latter and electrical (Coulomb) forces operate in the atom. Near the nucleus, which can be considered as a point of positive charge, there exists a very strong electrical field. Therefore, in passing close to the nucleus, positively charged alpha-particles (helium nuclei) are subjected to a strong deflection. It was subsequently shown by G. Moseley that the charge of the nucleus increases from one chemical element to the next by the elementary unit of charge, equal to the charge of the electron but with a positive sign. Numerically, the charge of the atomic nucleus, expressed in units of the elementary charge e, is equal to the ordinal number of the corresponding element in the periodic system.
In order to check the planetary model, Rutherford and his co-worker C. G. Darwin calculated the angular distribution of alpha-particles scattered by a point nucleus—the center of the Coulomb forces. The result obtained was checked by experimental means—the measurement of the number of alpha-particles scattered through various angles. The results of the experiment agreed exactly with the theoretical calculations, brilliantly confirming the Rutherford planetary model of the atom.
However, the planetary model of the atom encountered fundamental difficulties. According to classical electrodynamics, a charged particle which is moving under acceleration continuously radiates electromagnetic energy. Therefore, electrons moving around the nucleus—that is, under acceleration—must be continuously losing energy by radiation. But in this case they would, in a negligibly small fraction of a second, lose all their kinetic energy and fall into the nucleus. Another difficulty, also connected with radiation, was that if it is assumed (in correspondence with classical electrodynamics) that the frequency of the light radiated by the electron is equal to the frequency of the electron’s oscillations in the atom (that is, the number of revolutions performed by it along its orbit in one second) or a multiple of it, then the radiated light, according to the degree of approach of the electron to the nucleus, must continuously change its frequency, and the spectrum of the light radiated by it must be continuous. This, however, is contradicted by experiment. The atom radiates light waves of completely fixed frequencies, typical of a given chemical element, and is characterized by a spectrum which consists of individual spectral lines—a line spectrum. In the line spectra of the elements a series of regularities were experimentally established, the first of which was discovered by the Swiss scientist J. Balmer (1885) in the hydrogen spectrum. The most general rule, the combination principle, was found by the Swiss scientist W. Ritz in 1908. This principle can be formulated in this manner: for the atoms of each element it is possible to find a sequence of numbers T1, T2, T3, … of so-called spectral terms such that the frequency v of each spectral line of a given element is expressed in the form of the difference of two terms: v = Tk − Ti. For the hydrogen atom, the term is Tn = R/n², where n is an integer that takes on the values n = 1, 2, 3, … and R is the so-called Rydberg constant.
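As a numerical illustration (a sketch of my own, with R expressed as a frequency), the combination principle reproduces the Balmer lines of hydrogen from differences of terms:

```python
R = 3.2898e15  # Rydberg constant as a frequency, Hz
c = 2.9979e8   # speed of light, m/s

def term(n: int) -> float:
    return R / n**2   # T_n = R / n^2

# Balmer series: transitions ending on the n = 2 term give visible lines.
for n_upper in (3, 4, 5):
    nu = term(2) - term(n_upper)        # v = T_k - T_i
    print(n_upper, f"{c / nu * 1e9:.0f} nm")
# prints ~656, 486, 434 nm: the red, blue-green, and violet hydrogen lines
```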
Thus, within the limits of the Rutherford model of the atom, the stability of the atom in relation to radiation and the line spectrum of its radiation could not be explained. On the basis of the model, neither the laws of thermal radiation nor the laws of photoelectric phenomena, which arise in the interactions of radiation with matter, could be explained. It became possible to explain these laws by proceeding from completely new—quantum—concepts, first introduced by the German physicist M. Planck in 1900. For the derivation of the law of energy distribution in the spectrum of thermal radiation (the radiation of heated bodies), Planck proposed that the atoms of matter emit electromagnetic energy (light) in the form of individual portions (quanta of light) whose energy is proportional to v (the frequency of the radiation): E = hv, where h is a constant—characteristic for quantum theory—called Planck’s constant. In 1905, A. Einstein gave a quantum explanation of photoelectric phenomena, according to which the energy of the quantum hv is expended in extracting the electron from the metal (the work function P) and in imparting to the electron a kinetic energy Tkin: hv = P + Tkin. Here Einstein introduced the concept of light quanta as a special kind of particle; these particles were subsequently called photons.
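A short worked example of Einstein’s relation (my own sketch; the sodium work function below is a typical textbook figure, not a value quoted in this article):

```python
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

P = 2.28 * eV           # assumed work function of sodium
wavelength = 400e-9     # violet light at 400 nm

E_photon = h * c / wavelength   # quantum energy hv
T_kin = E_photon - P            # hv = P + T_kin, solved for T_kin
print(f"photon {E_photon / eV:.2f} eV -> electron {T_kin / eV:.2f} eV")
# a 3.10 eV photon ejects an electron carrying about 0.82 eV of kinetic energy
```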
The inconsistencies of the Rutherford model could be resolved only by rejecting a number of customary concepts of classical physics. The most important step in the construction of atomic theory was made by the Danish physicist N. Bohr in 1913.
The Bohr postulates and the model of Bohr’s atom. On the basis of the quantum theory of the atom, Bohr proposed two postulates characterizing those properties of the atom which were not contained in classical physics. These Bohr postulates can be formulated as follows:
(1) Existence of stationary states: The atom does not radiate and is stable only in certain stationary (unchanging in time) states which correspond to a discrete (discontinuous) sequence of “permitted” values of the energy E1, E2, E3, E4, …. Any change in energy is associated with a quantum transition (jump) from one stationary state to another.
(2) Condition for radiation frequencies (quantum transitions with radiation): In the transition from one stationary state with energy Ei to another with energy Ek the atom emits or absorbs light of a specific frequency v in the form of a quantum of radiation (photon) hv according to the relation hv = Ei − Ek. In emission the atom passes from a state with higher energy Ei to a state with lower energy Ek; in absorption, on the other hand, it passes from a state with lower energy Ek to a state with higher energy Ei.
The Bohr postulates permit immediate understanding of the physical meaning of the Ritz combination principle (see above); comparison of the relations hv = Ei − Ek and v = Tk − Ti indicates that the spectral terms correspond to stationary states and that the energies of the latter must be equal (with accuracy up to a constant term) to Ei = −hTi, Ek = −hTk.
In emission or absorption of light, the atom’s energy changes; this change is equal to the energy of the emitted or absorbed photon—that is, the law of conservation of energy holds. The line spectrum of the atom is a result of the discreteness of its possible energy values.
For the determination of the permitted energy values of the atom—the quantization of its energy—and for the calculation of the characteristics of the corresponding stationary states, Bohr applied classical (Newtonian) mechanics. “If we wish in general to compose a visual representation of stationary states, we have no other means, at least now, than ordinary mechanics,” Bohr wrote in 1913 (Tri stat’i o spektrakh i stroenii atomov, p. 22, Moscow-Petrograd, 1923). For the simplest atom—the hydrogen atom, which consists of a nucleus with charge +e (a proton) and an electron with charge −e—Bohr considered the motion of the electron around the nucleus along circular orbits. Comparing the energy of the atom E with the spectral terms Tn = R/n² for the hydrogen atom, found with high accuracy from the frequencies of its spectral lines, Bohr obtained the possible values of the atom’s energy, En = −hTn = −hR/n² (where n = 1, 2, 3, …). These values correspond to circular orbits of radius an = a0n², where a0 = 0.53 × 10⁻⁸ cm—the Bohr radius—is the radius of the smallest circular orbit (for n = 1). Bohr calculated the frequencies of revolution vn of the electron around the nucleus along circular orbits in relation to the electron’s energy. It turned out that the frequencies of the light radiated by the atom did not coincide with the frequencies of revolution vn, as required by classical electrodynamics, but rather were proportional—according to the relation hv = Ei − Ek—to the energy difference of the electron in two of its possible orbits.
For the calculation of the connection between the frequency of the electron’s revolution along an orbit and the radiation frequency, Bohr made the assumption that the results of the quantum and classical theories must agree for small radiation frequencies (for large wavelengths; such agreement occurs for thermal radiation, the laws of which were derived by Planck). For large n, Bohr equated the frequency of transition v = (En+1 - En)/h to the frequency of revolution vn along an orbit with given n and calculated the value of the Rydberg constant R, which agreed to a high accuracy with the value of R found experimentally, thus confirming Bohr’s hypothesis. Bohr succeeded not only in explaining the hydrogen spectrum, but also in conclusively demonstrating that certain spectral lines which were attributed to hydrogen belonged to helium. Bohr’s hypothesis that the results of the quantum and classical theories must agree in the limiting case of small frequencies of radiation represented the original form of the so-called correspondence principle. Subsequently, Bohr successfully applied it for the calculation of spectral line intensity. As the development of modern physics indicated, the correspondence principle was found to be very general.
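The correspondence argument is easy to check numerically (a sketch of my own, again with R as a frequency): in the Bohr model the orbital frequency is vn = 2R/n³, and the frequency of the n+1 → n transition approaches it as n grows:

```python
R = 3.2898e15  # Rydberg constant as a frequency, Hz

for n in (2, 10, 100):
    nu_transition = R * (1 / n**2 - 1 / (n + 1)**2)   # quantum jump n+1 -> n
    nu_orbit = 2 * R / n**3                           # classical revolution frequency
    print(n, f"{nu_transition:.3e} Hz", f"{nu_orbit:.3e} Hz")
# at n = 2 the two differ markedly; by n = 100 they agree to about 1.5%
```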
In the Bohr theory of the atom, energy quantization—that is, the calculation of its possible values—was found to be a particular case of the general method of calculating “permitted” orbits. According to the quantum theory, such orbits are only those for which the angular momentum of the electron in the atom is an integral multiple of h/2π. To each permitted orbit there corresponds a specific possible value of the atom’s energy.
The basic assumptions of the quantum theory of the atom—the two Bohr postulates—were totally confirmed experimentally. Particularly graphic support was given by the experiments of the German physicists J. Franck and G. Hertz (1913–16). The essence of these experiments is as follows. A beam of electrons whose energy could be controlled enters a vessel containing mercury vapor. Gradually increasing energy is imparted to the electrons. As the energy of the electrons is increased, the current in a galvanometer connected to the electrical circuit increases. When the energy of the electrons becomes equal to specific values (4.9, 6.7, and 10.4 eV), the current decreases sharply (see Figure 4). At this moment the mercury vapor is observed to emit ultraviolet rays of a specific frequency.
The stated facts permit only one interpretation. As long as the energy of the electrons is less than 4.9 eV, the electrons do not lose energy upon collision with mercury atoms—the collisions have an elastic character. When the energy becomes equal to a specific value, namely 4.9 eV, the electrons transmit their energy to the mercury atoms, which then emit it in the form of quanta of ultraviolet light. Calculation demonstrates that the energy of these photons is exactly equal to the energy lost by the electrons. These experiments proved that the internal energy of the atom can have only specific discrete values, that the atom absorbs energy from without and emits it in whole quanta, and finally, that the frequency of the light radiated by the atom corresponds to the energy lost by the atom.
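That calculation takes one line (an illustrative sketch of my own): a 4.9 eV energy loss corresponds to the ultraviolet mercury line near 254 nm:

```python
h = 6.626e-34         # Planck's constant, J*s
c = 2.998e8           # speed of light, m/s
E = 4.9 * 1.602e-19   # first excitation energy of mercury, J

print(f"{h * c / E * 1e9:.0f} nm")   # ~253 nm: ultraviolet, as observed
```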
Figure 4. Dependence of current on voltage obtained in the experiments of J. Franck and G. Hertz
The subsequent development of atomic physics showed the correctness of the Bohr postulates not only for atoms, but also for other microscopic systems—for molecules and for atomic nuclei. These postulates must be considered as firmly established empirical quantum laws. They compose that part of the Bohr theory which was not only preserved in the further development of quantum theory, but which also received its justification. The situation is somewhat different for the Bohr model of the atom, which is based on consideration of the motion of electrons in the atom according to the laws of classical mechanics, with the imposition of the additional conditions of quantization. Such an approach permitted the attainment of an entire series of important results but was inconsistent: the quantum postulates were added artificially to the laws of classical mechanics. A systematic theory was created in the 1920’s; this was called quantum mechanics. Its formulation was prepared by the further development of the model representations of Bohr’s theory, in the course of which its strong and weak sides were investigated.
Development of the model theory of Bohr’s atom. A very important result of the Bohr theory was the explanation of the hydrogen atom spectrum. The next step in the development of the theory of atomic spectra was made by the German physicist A. Sommerfeld. Having worked out in more detail the rules of quantization, starting from a more complex picture of the motion of electrons in the atom (along elliptical orbits) and taking into account the screening of the outer (so-called valence) electron by the field of the nucleus and inner electrons, Sommerfeld was able to give an explanation of a number of regularities of the spectra of the alkali metals.
The theory of Bohr’s atom shed light on the structure of the so-called characteristic spectra of X-ray radiation. The X-ray spectra of atoms, in the same way as their optical spectra, have a discrete line structure characteristic of a given element (hence the designation). By investigating the characteristic X-ray spectra of various elements, the English physicist G. Moseley discovered this rule: the square roots of the frequencies of the radiated lines increase uniformly from element to element over the whole Mendeleev periodic system in proportion to the atomic number of the element. It is interesting that the Moseley law completely confirmed the correctness of Mendeleev, who in certain cases violated the principle of arrangement in the table according to increasing atomic weight and who placed certain heavier elements before lighter ones.
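Moseley’s rule can be sketched numerically using the standard textbook approximation for the K-alpha line, v = (3/4)R(Z − 1)² (my own illustration; the screening constant of 1 is an approximation, not a value from this article):

```python
import math

R = 3.2898e15  # Rydberg constant as a frequency, Hz

values = []
for Z in (20, 30, 40, 50):
    nu = 0.75 * R * (Z - 1)**2        # K-alpha frequency, v = (3/4) R (Z-1)^2
    values.append(math.sqrt(nu))
    print(Z, f"sqrt(v) = {values[-1]:.3e}")

diffs = [b - a for a, b in zip(values, values[1:])]
print(diffs)   # equal steps: sqrt(v) rises linearly with Z, the straight Moseley plot
```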
On the basis of Bohr’s theory, it also became possible to give an explanation of the periodicity of the properties of atoms. In a complex atom electron shells are formed which are filled sequentially, beginning from the innermost, by specific numbers of electrons. (The physical principle of the formation of the shells became clear only on the basis of the Pauli principle; see below.) The structure of the outer electron shells is repeated periodically, which determines the periodic recurrence of the chemical and many physical properties of the elements which are located in the same group of the periodic system. On the basis of the Bohr theory, the German chemist W. Kossel in 1916 explained the chemical interactions in the so-called heteropolar molecules.
However, far from all of the questions of atomic theory were successfully explained on the basis of the model representations of the Bohr theory. The theory was not able to deal with many problems of the theory of spectra; it made it possible to obtain correct values for the frequencies of the spectral lines of only the hydrogen and hydrogenlike atoms. The intensities of these lines remained unexplained; for the explanation of the intensities, Bohr was forced to use the correspondence principle.
In going over to the explanation of the motions of electrons in atoms more complex than the hydrogen atom, the Bohr model theory found itself in a blind alley—the helium atom, in which two electrons move around the nucleus, did not yield a theoretical interpretation based on it. The difficulties in this case were not confined to quantitative discrepancies with experiment. The theory was also useless in the solution of a problem such as the combining of atoms into a molecule. Why are two neutral hydrogen atoms combined into a hydrogen molecule? How can the nature of valence be explained in general? What links the atoms of a solid? These questions remained unanswered. Within the limits of the Bohr model it was impossible to find an approach to their solution.
The quantum mechanical theory of the atom. The limitations of the Bohr model of the atom stemmed from the limitations of the classical representations of the motion of microparticles. It became clear that for the subsequent development of atomic theory it was necessary to critically reconsider the basic concepts of the motion and interaction of microparticles. The unsatisfactory nature of the model based on classical mechanics with the addition of quantization conditions was clearly understood by Bohr himself, whose views exerted a great influence on the further development of atomic physics. The beginning of the new stage in the development of atomic physics was the idea stated by the French physicist L. de Broglie in 1924 concerning the dual nature of the motion of microobjects, in particular of the electron. This idea became the point of departure of quantum mechanics, formulated in 1925–26 in the papers of W. Heisenberg and M. Born (Germany), E. Schrödinger (Austria), and P. Dirac (England), and of the modern quantum mechanical theory of the atom developed on the basis of it.
The concepts of quantum mechanics concerning the motion of the electron (of a microparticle in general) differ radically from classical concepts. According to quantum mechanics, the electron does not move along a trajectory (orbit) as a solid ball does; the motion of the electron also exhibits certain properties which are characteristic of wave propagation. On the one hand, the electron always behaves (for example, in collisions) like a unified whole, like a particle which has indivisible charge and mass; at the same time electrons with a specific energy and momentum propagate like a plane wave that has a specific frequency (and wavelength). The energy E of the electron as a particle is associated with a frequency v of an electron wave by the relation E = hv, and its momentum p, with a wavelength λ by the relation p = h/λ.
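The de Broglie relations can be put to work in a quick sketch (mine, not the article’s; the non-relativistic momentum is an assumption that is valid at this energy): the wavelength of an electron accelerated through 100 volts is comparable to atomic dimensions, which is why electrons diffract off crystals:

```python
import math

h = 6.626e-34         # Planck's constant, J*s
m_e = 9.109e-31       # electron mass, kg
E = 100 * 1.602e-19   # kinetic energy after 100 V of acceleration, J

p = math.sqrt(2 * m_e * E)        # non-relativistic momentum
print(f"{h / p * 1e9:.3f} nm")    # lambda = h/p, about 0.123 nm
```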
Stable motions of the electron in an atom, as shown by Schrödinger (1926), are in certain respects analogous to standing waves, whose amplitudes differ at different points. In addition, in the atom, as in an oscillatory system, only certain “allowed” motions with specific values of energy, angular momenta, and projections of moments of the electron in the atom are possible. Each stationary state of the atom is described by means of a certain wave function which is a solution of a wave equation of a particular type—the Schrödinger equation; an “electron cloud,” which (on the average) characterizes the distribution of electron charge density in the atom, corresponds to the wave function. In the 1920’s and 1930’s approximate methods of calculating the distribution of electron charge in complex atoms were developed—in particular, the Thomas-Fermi method (1926, 1928). This quantity and the value of the so-called atomic factor connected with it are important in the investigation of electron collisions with atoms and their scattering of X-rays.
On the basis of quantum mechanics, the accurate calculation of the energies of electrons in complex atoms by means of the Schrödinger equation was successfully carried out. The approximate methods of such calculations were developed in 1928 by D. R. Hartree (England) and in 1930 by V. A. Fock (USSR). Investigations of atomic spectra completely confirmed the quantum mechanical theory of the atom. In addition, it was explained that the state of an electron in an atom depends essentially on its spin—the intrinsic mechanical angular momentum. An explanation was given of the effect of external electric and magnetic fields on the atom. An important general principle connected with electron spin was discovered by the Swiss physicist W. Pauli (1925): according to his principle, in each electron state in the atom it is possible to find only one electron; if the given state is already occupied by an electron, then the next electron entering into the composition of the atom is forced to occupy some other state. On the basis of the Pauli principle, the capacity of electron shells in complex atoms, which determines the periodicity of the properties of elements, was finally established. Starting from quantum mechanics, the German physicists W. Heitler and F. London in 1927 put forth a theory of the so-called homeopolar chemical bonds of two identical atoms (for example, the atoms of hydrogen in the H2 molecule), which cannot be explained within the framework of the Bohr model of the atom.
Important applications of quantum mechanics in the 1930’s and later were the investigations of bound atoms, which form molecules or crystals. The states of atoms which are part of a molecule are essentially different from the states of a free atom. Significant changes are also undergone by an atom in a crystal under the influence of intracrystalline forces, the theory of which was first worked out by H. Bethe in 1929. By studying these changes, it is possible to establish the character of the interactions of the atom with its environment. The greatest experimental achievement in this area of atomic physics was the discovery by E. K. Zavoiskii in 1944 of electron paramagnetic resonance, which afforded the possibility of studying the different types of bonding of atoms to their environment.
Modern atomic physics. The basic branches of modern atomic physics are the theory of the atom, atomic (optical) spectroscopy, X-ray spectroscopy, radio spectroscopy (which also investigates the rotational levels of molecules), and the physics of atomic and ion collisions. The various branches of spectroscopy encompass different frequency ranges of radiation and, correspondingly, different energy ranges of quanta. Whereas X-ray spectroscopy investigates the radiation of atoms with quantum energies up to hundreds of thousands of eV, radio spectroscopy deals with very small quanta—down to quanta of less than 10⁻⁶ eV.
The most important problem of atomic physics is the detailed determination of all the characteristics of atomic states. The question concerns the determination of the possible values of the atom’s energy (its energy levels), the values of the angular momenta, and other quantities that characterize the states of the atom. The fine and hyperfine structures of the energy levels and changes of the energy levels under the influence of electrical and magnetic fields—both external (macroscopic) and internal (microscopic)—are investigated. Such a characteristic of the states of the atom as the lifetime of an electron at an energy level has great significance. Finally, great attention is paid to the mechanism of excitation of atomic spectra.
The domains of phenomena studied by the different branches of atomic physics overlap. X-ray spectroscopy, through measurement of the emission and absorption of X rays, permits the determination chiefly of the binding energies of the inner electrons to the atom (ionization energies) and the distribution of the electric field within the atom. Optical spectroscopy studies the sets of spectral lines which are emitted by atoms and determines the characteristics of the atomic energy levels, the intensities of spectral lines and the lifetimes of the atom in excited states associated with them, the fine structure of energy levels, and their displacement and splitting in electric and magnetic fields. Radio spectroscopy investigates in detail the width and shape of spectral lines, their hyperfine structure, shifting and splitting in a magnetic field, and intra-atomic processes in general which are caused by very weak interactions and influences of media.
The analysis of the results of the collisions of fast electrons and ions with atoms affords the possibility of obtaining information about the electron charge density distribution (“electron cloud”) within the atom, the excitation energies of atoms, and ionization energies.
The results of the detailed study of the structure of atoms find their broadest application not only in many branches of physics, but also in chemistry, astrophysics, and other fields of science. On the basis of the investigation of the broadening and displacement of spectral lines, it is possible to determine local fields in the medium (liquid, crystal) which cause these changes and the state of this medium (temperature, density, and others). Knowledge of the distribution of electron charge density in an atom and its variations during external interactions permits the prediction of the type of chemical bonds which the atom can form and the behavior of an ion in a crystalline lattice. Information concerning the structure and characteristics of atomic and ion energy levels is extremely important for quantum electronic devices. The behavior of atoms and ions during collisions—their ionization, excitation, and charge exchange—is important for plasma physics. Knowledge of the detailed structure of atomic energy levels, particularly of multiply ionized atoms, is important for astrophysics.
Thus, atomic physics is closely connected with other branches of physics and other natural sciences. The concepts of the atom which have been developed in atomic physics also have great significance for man’s Weltanschauung. The “stability” of the atom explains the stability of various types of matter and the immutability of the chemical elements under natural conditions—for example, under ordinary atmospheric temperature and pressure found on the earth. The “plasticity” of the atom—the variation of its properties and states during the variation of the external conditions under which it exists—explains the possibility of forming more complex systems which are qualitatively unique and their ability to take on various forms of internal organization. Thus a solution is found for the conflict between the idea of immutable atoms and the qualitative diversity of substances—a conflict which has existed both in ancient and in modern times and which has served as the basis for the criticism of atomism.
Bohr, N. Tri stat’i o spektrakh i stroenii atomov. Moscow-Petrograd, 1923. (Translated from German.)
Born, M. Sovremennaia fizika. Moscow, 1965. (Translated from German.)
Broglie, L. de. Revoliutsiia v fizike. Moscow, 1963. (Translated from French.)
Shpol’skii, E. V. Atomnaia fizika, 5th ed., vol. 1. Moscow, 1963.
atomic physics
[ə′täm·ik ′fiz·iks]
The science concerned with the structure of the atom, the characteristics of the elementary particles of which the atom is composed, and the processes involved in the interactions of radiant energy with matter. |
6d4a4e74d5d0da58 | General relativity
From Wikipedia, the free encyclopedia
General relativity (GR, also known as the general theory of relativity or GTR) is the geometric theory of gravitation published by Albert Einstein in 1915[2] and the current description of gravitation in modern physics. General relativity has been called the most beautiful of all existing physical theories.[3] General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations.
Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape—as an end-state for massive stars. There is ample evidence that the intense radiation emitted by certain kinds of astronomical objects is due to black holes; for example, microquasars and active galactic nuclei result from the presence of stellar black holes and supermassive black holes, respectively. The bending of light by gravity can lead to the phenomenon of gravitational lensing, in which multiple images of the same distant astronomical object are visible in the sky. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the LIGO collaboration. In addition, general relativity is the basis of current cosmological models of a consistently expanding universe.
The Einstein field equations are nonlinear and very difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, which eventually resulted in the Reissner–Nordström solution, now associated with electrically charged black holes.[5] In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption.[6] By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state.[7] Einstein later declared the cosmological constant the biggest blunder of his life.[8]
During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein himself had shown in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors").[9] Similarly, a 1919 expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of May 29, 1919,[10] making Einstein instantly famous.[11] Yet the theory entered the mainstream of theoretical physics and astrophysics only with the developments between approximately 1960 and 1975, now known as the golden age of general relativity.[12] Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations.[13] Ever more precise solar system tests confirmed the theory's predictive power,[14] and relativistic cosmology, too, became amenable to direct observational tests.[15]
From classical mechanics to general relativity
Geometry of Newtonian gravity
Relativistic generalization
As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics.[23] In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations and boosts.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena.[24]
With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent.[25] In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure[26] or conformal geometry.
Einstein's equations
Einstein's field equations:

G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \kappa T_{\mu\nu}

On the left-hand side is the Einstein tensor G_{\mu\nu}, a specific divergence-free combination of the Ricci tensor R_{\mu\nu} and the metric g_{\mu\nu}; like the energy–momentum tensor, it is symmetric. On the right-hand side, T_{\mu\nu} is the energy–momentum tensor. All tensors are written in abstract index notation.[32] Matching the theory's prediction to observational results for planetary orbits (or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics), the proportionality constant can be fixed as κ = 8πG/c⁴, with G the gravitational constant and c the speed of light.[33] When there is no matter present, so that the energy–momentum tensor vanishes, the result is the vacuum Einstein equations,

R_{\mu\nu} = 0.
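As a quick numerical aside (my own sketch, not part of the article), the constant κ is extraordinarily small, which is why only planetary or stellar amounts of mass-energy curve spacetime noticeably:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

kappa = 8 * math.pi * G / c**4
print(f"{kappa:.3e} s^2 m^-1 kg^-1")   # ~2.08e-43: spacetime is very stiff
```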
Alternatives to general relativity
There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory.[34]
Definition and basic applications
Definition and basic properties
As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems.[39] Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers.[40] Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.[41]
Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly.[43] Nevertheless, a number of exact solutions are known, although only a few have direct physical applications.[44] The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe,[45] and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos.[46] Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub-NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).[47]
Consequences of Einstein's theory
Gravitational time dilation and frequency shift
Gravitational redshift has been measured in the laboratory[54] and using astronomical observations.[55] Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks,[56] while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS).[57] Tests in stronger gravitational fields are provided by the observation of binary pulsars.[58] All results are in agreement with general relativity.[59] However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.[60]
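The size of the laboratory effect is easy to estimate (a hedged sketch of my own; the 22.5 m height is the figure from the classic Pound–Rebka experiment): in the weak-field limit the fractional frequency shift over a height h is approximately gh/c²:

```python
g = 9.81      # gravitational acceleration, m/s^2
c = 2.998e8   # speed of light, m/s
h = 22.5      # height difference, m (Pound-Rebka tower, assumed)

print(f"{g * h / c**2:.2e}")   # ~2.5e-15, a shift atomic clocks can resolve
```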
Light deflection and gravitational time delay
This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity.[62] As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion),[63] several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light,[64] the angle of deflection resulting from such calculations is only half the value given by general relativity.[65]
Gravitational waves
Ring of test particles deformed by a passing (linearized, amplified for better visibility) gravitational wave
Predicted in 1916[68][69] by Albert Einstein, gravitational waves are ripples in the metric of spacetime that propagate at the speed of light. They are one of several analogies between weak-field gravity and electromagnetism, being the gravitational counterpart of electromagnetic waves. On February 11, 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of merging black holes.[70][71][72]
Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space[75] or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves.[76] But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.[77]
Orbital effects and the relativity of direction
Precession of apsides
The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass)[79] or the much more general post-Newtonian formalism.[80] It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations).[81] Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth),[82] as well as in binary pulsar systems, where it is larger by five orders of magnitude.[83]
In general relativity the perihelion shift σ, expressed in radians per revolution, is approximately given by:[84]

\sigma = \frac{24\pi^3 a^2}{T^2 c^2 (1-e^2)}

where a is the semi-major axis of the orbit, T its orbital period, c the speed of light, and e the orbital eccentricity.
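Plugging in Mercury's orbital elements (standard approximate values; a worked example of my own) recovers the famous 43 arcseconds per century:

```python
import math

a = 5.791e10   # semi-major axis, m
T = 7.6005e6   # orbital period, s
e = 0.2056     # eccentricity
c = 2.998e8    # speed of light, m/s

sigma = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))  # radians per revolution
revs_per_century = 100 * 365.25 * 86400 / T
arcsec = sigma * revs_per_century * 206265                   # radians -> arcseconds
print(f"{arcsec:.0f} arcseconds per century")                # ~43, as observed
```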
Orbital decay
Orbital decay for PSR1913+16: time shift in seconds, tracked over three decades.[85]
Geodetic precession and frame-dragging
Several relativistic effects are directly related to the relativity of direction.[89] One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport").[90] For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging.[91] More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.[92][93]
Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable.[94] Such effects can again be tested through their influence on the orientation of gyroscopes in free fall.[95] Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction.[96] The Mars Global Surveyor probe has also been used.[97][98]
Astrophysical applications
Gravitational lensing
The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing.[99] Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs.[100] The earliest example was discovered in 1979;[101] since then, more than a hundred gravitational lenses have been observed.[102] Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.[103]
Gravitational wave astronomy
Artist's impression of the space-borne gravitational wave detector LISA
Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research.[105] Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO.[106] Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10⁻⁹ to 10⁻⁶ Hz frequency range, which originate from binary supermassive black holes.[107] A European space-based detector, eLISA / NGO, is currently under development,[108] with a precursor mission (LISA Pathfinder) having launched in December 2015.[109]
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum.[110] They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string.[111] In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger.[70][71][112]
Black holes and other compact objects
Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars.[113] Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center,[114] and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.[115]
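As a rough quantitative illustration (added here, not from the article itself), the mass-to-radius threshold can be expressed through the Schwarzschild radius, r_s = 2GM/c², the size below which a mass M forms a black hole; a short Python sketch:

# Schwarzschild radius r_s = 2GM/c^2: a body compressed within this radius
# forms a black hole. Constants are standard SI values.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(M):
    """Critical radius (m) for a mass M (kg)."""
    return 2 * G * M / c**2

print(f"1 solar mass: {schwarzschild_radius(M_sun) / 1e3:.1f} km")  # ~3.0 km
# A few-million-solar-mass object, typical of galactic-center black holes:
print(f"4e6 solar masses: {schwarzschild_radius(4e6 * M_sun) / 1e9:.1f} Gm")  # ~11.8 Gm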
Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation.[116] Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars.[117] In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed.[118] General relativity plays a central role in modelling all these phenomena,[119] and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.[120]
where $g_{\mu\nu}$ is the spacetime metric.[123] Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions,[124] allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase.[125] Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation,[126] further observational data can be used to put the models to the test.[127] Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis,[128] the large-scale structure of the universe,[129] and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.[130]
Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly.[131] There is no generally accepted description of this new kind of matter, within the framework of known particle physics[132] or otherwise.[133] Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.[134]
An inflationary phase,[135] an additional phase of strongly accelerated expansion at cosmic times of around 10⁻³³ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation.[136] Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario.[137] However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations.[138] An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the Big Bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed[139] (cf. the section on quantum gravity, below).
Time travel
Kurt Gödel showed[140] that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes.
Advanced concepts
Causal structure and global geometry
Penrose–Carter diagram of an infinite Minkowski universe
There are other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon).[149] Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with semi-classical radiation known as Unruh radiation.[150]
Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values.[151] Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole,[152] or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole.[153] The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.[154]
Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization.[155] The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage[156] and also at the beginning of a wide class of expanding universes.[157] However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture).[158] The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.[159]
Evolution equations
To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism.[161] These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified.[162] Such formulations of Einstein's field equations are the basis of numerical relativity.[163]
Global and quasi-local quantities
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass)[165] or suitable symmetries (Komar mass).[166] If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity.[167] Just as in classical physics, it can be shown that these masses are positive.[168] Corresponding global definitions exist for momentum and angular momentum.[169] There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.[170]
Relationship with quantum theory
If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid state physics, would be the other.[171] However, how to reconcile quantum theory with general relativity is still an open question.
Quantum field theory in curved spacetime
Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth.[172] In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime.[173] Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation, leading to the possibility that they evaporate over time.[174] As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.[175]
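For a sense of scale, the Hawking temperature of a Schwarzschild black hole is T = ħc³/(8πGMk_B), which is minuscule for astrophysical masses; the following short Python check is an illustration added here, using the standard formula rather than anything stated in the article:

import math

# Hawking temperature T = hbar*c^3 / (8*pi*G*M*kB) of a Schwarzschild black hole
hbar = 1.0546e-34   # reduced Planck constant, J*s
c = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
kB = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

def hawking_temperature(M):
    """Blackbody temperature (K) of a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

print(f"{hawking_temperature(M_sun):.1e} K")  # ~6.2e-8 K for one solar mass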
Quantum gravity
The demand for consistency between a quantum description of matter and a geometric description of spacetime,[176] as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics.[177] Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.[178][179]
Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems.[180] Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity.[181] At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability").[182]
Simple spin network of the type used in loop quantum gravity
One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects.[183] The theory promises to be a unified description of all particles and interactions, including gravity;[184] the price to pay is unusual features such as six extra dimensions of space in addition to the usual three.[185] In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity[186] form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.[187]
Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff.[188] However, with the introduction of what are now known as Ashtekar variables,[189] this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.[190]
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced,[191] there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman path-integral approach and Regge calculus,[178] dynamical triangulations,[192] causal sets,[193] twistor models[194] or the path-integral based models of quantum cosmology.[195]
Current status
Observation of gravitational waves from binary black hole merger GW150914.
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete.[197] The problem of quantum gravity and the question of the reality of spacetime singularities remain open.[198] Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics.[199] Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations,[200] while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes).[201] In February 2016, the Advanced LIGO team announced that it had directly detected gravitational waves on September 14, 2015.[72][202][203] A century after its introduction, general relativity remains a highly active area of research.[204]
Notes
1. ^ "GW150914: LIGO Detects Gravitational Waves". Retrieved 18 April 2016.
2. ^ O'Connor, J.J. and Robertson, E.F. (1996), General relativity. Mathematical Physics index, School of Mathematics and Statistics, University of St. Andrews, Scotland. Retrieved 2015-02-04.
3. ^ Landau, Lev Davidovich, ed. The classical theory of fields. Vol. 2. Elsevier, 2013.
4. ^ Pais 1982, ch. 9 to 15, Janssen 2005; an up-to-date collection of current research, including reprints of many of the original articles, is Renn 2007; an accessible overview can be found in Renn 2005, pp. 110ff. Einstein's original papers are found in Digital Einstein, volumes 4 and 6. An early key article is Einstein 1907, cf. Pais 1982, ch. 9. The publication featuring the field equations is Einstein 1915, cf. Pais 1982, ch. 11–15
6. ^ Einstein 1917, cf. Pais 1982, ch. 15e
9. ^ Pais 1982, pp. 253–254
10. ^ Kennefick 2005, Kennefick 2007
11. ^ Pais 1982, ch. 16
12. ^ Thorne, Kip (2003). The future of theoretical physics and cosmology: celebrating Stephen Hawking's 60th birthday. Cambridge University Press. p. 74. ISBN 0-521-82081-2.
15. ^ Section Cosmology and references therein; the historical development is in Overbye 1999
16. ^ The following exposition re-traces that of Ehlers 1973, sec. 1
17. ^ Arnold 1989, ch. 1
18. ^ Ehlers 1973, pp. 5f
19. ^ Will 1993, sec. 2.4, Will 2006, sec. 2
20. ^ Wheeler 1990, ch. 2
22. ^ Ehlers 1973, pp. 10f
26. ^ Ehlers 1973, sec. 2.3
27. ^ Ehlers 1973, sec. 1.4, Schutz 1985, sec. 5.1
33. ^ Kenyon 1990, sec. 7.4
36. ^ At least approximately, cf. Poisson 2004
37. ^ Wheeler 1990, p. xi
38. ^ Wald 1984, sec. 4.4
39. ^ Wald 1984, sec. 4.1
41. ^ section 5 in ch. 12 of Weinberg 1972
42. ^ Introductory chapters of Stephani et al. 2003
45. ^ Chandrasekhar 1983, ch. 3,5,6
46. ^ Narlikar 1993, ch. 4, sec. 3.3
48. ^ Lehner 2002
49. ^ For instance Wald 1984, sec. 4.4
50. ^ Will 1993, sec. 4.1 and 4.2
51. ^ Will 2006, sec. 3.2, Will 1993, ch. 4
58. ^ Stairs 2003 and Kramer 2004
60. ^ Ohanian & Ruffini 1994, pp. 164–172
63. ^ Blanchet 2006, sec. 1.3
67. ^ Will 1993, sec. 7.1 and 7.2
70. ^ a b Castelvecchi, Davide; Witze, Alexandra (February 11, 2016). "Einstein's gravitational waves found at last". Nature News. doi:10.1038/nature.2016.19361. Retrieved 2016-02-11.
71. ^ a b B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration) (2016). "Observation of Gravitational Waves from a Binary Black Hole Merger". Physical Review Letters. 116 (6): 061102. doi:10.1103/PhysRevLett.116.061102. PMID 26918975.
72. ^ a b "Gravitational waves detected 100 years after Einstein's prediction | NSF - National Science Foundation". Retrieved 2016-02-11.
74. ^ For example Jaranowski & Królak 2005
75. ^ Rindler 2001, ch. 13
76. ^ Gowdy 1971, Gowdy 1974
79. ^ Rindler 2001, sec. 11.9
80. ^ Will 1993, pp. 177–181
83. ^ Kramer et al. 2006
84. ^ Dediu, Adrian-Horia; Magdalena, Luis; Martín-Vide, Carlos (2015). Theory and Practice of Natural Computing: Fourth International Conference, TPNC 2015, Mieres, Spain, December 15–16, 2015. Proceedings (illustrated ed.). Springer. p. 141. ISBN 978-3-319-26841-5.
88. ^ Kramer 2004
91. ^ Bertotti, Ciufolini & Bender 1987, Nordtvedt 2003
92. ^ Kahn 2007
96. ^ Ciufolini & Pavlis 2004, Ciufolini, Pavlis & Peron 2006, Iorio 2009
97. ^ Iorio L. (August 2006), "COMMENTS, REPLIES AND NOTES: A note on the evidence of the gravitomagnetic field of Mars", Classical and Quantum Gravity, 23 (17): 5451–5454, arXiv:gr-qc/0606092, Bibcode:2006CQGra..23.5451I, doi:10.1088/0264-9381/23/17/N01
98. ^ Iorio L. (June 2010), "On the Lense–Thirring test with the Mars Global Surveyor in the gravitational field of Mars", Central European Journal of Physics, 8 (3): 509–513, arXiv:gr-qc/0701146, Bibcode:2010CEJPh...8..509I, doi:10.2478/s11534-009-0117-6
101. ^ Walsh, Carswell & Weymann 1979
103. ^ Roulet & Mollerach 1997
104. ^ Narayan & Bartelmann 1997, sec. 3.7
105. ^ Barish 2005, Bartusiak 2000, Blair & McNamara 1997
106. ^ Hough & Rowan 2000
107. ^ Hobbs, George; Archibald, A.; Arzoumanian, Z.; Backer, D.; Bailes, M.; Bhat, N. D. R.; Burgay, M.; Burke-Spolaor, S.; et al. (2010), "The international pulsar timing array project: using pulsars as a gravitational wave detector", Classical and Quantum Gravity, 27 (8): 084013, arXiv:0911.5206, Bibcode:2010CQGra..27h4013H, doi:10.1088/0264-9381/27/8/084013
108. ^ Danzmann & Rüdiger 2003
110. ^ Thorne 1995
111. ^ Cutler & Thorne 2002
112. ^ "Gravitational waves detected 100 years after Einstein's prediction | NSF – National Science Foundation". Retrieved 2016-02-11.
113. ^ Miller 2002, lectures 19 and 21
114. ^ Celotti, Miller & Sciama 1999, sec. 3
115. ^ Springel et al. 2005 and the accompanying summary Gnedin 2005
116. ^ Blandford 1987, sec. 8.2.4
121. ^ Dalal et al. 2006
122. ^ Barack & Cutler 2004
123. ^ Originally Einstein 1917; cf. Pais 1982, pp. 285–288
124. ^ Carroll 2001, ch. 2
126. ^ E.g. with WMAP data, see Spergel et al. 2003
129. ^ Lahav & Suto 2004, Bertschinger 1998, Springel et al. 2005
137. ^ Spergel et al. 2007, sec. 5,6
139. ^ Brandenberger 2007, sec. 2
140. ^ Gödel 1949
147. ^ Bekenstein 1973, Bekenstein 1974
149. ^ Narlikar 1993, sec. 4.4.4, 4.4.5
156. ^ Namely when there are trapped null surfaces, cf. Penrose 1965
157. ^ Hawking 1966
160. ^ Hawking & Ellis 1973, sec. 7.1
164. ^ Misner, Thorne & Wheeler 1973, §20.4
165. ^ Arnowitt, Deser & Misner 1962
167. ^ For a pedagogical introduction, see Wald 1984, sec. 11.2
169. ^ Townsend 1997, ch. 5
173. ^ Wald 1994, Birrell & Davies 1984
175. ^ Wald 2001, ch. 3
177. ^ Schutz 2003, p. 407
178. ^ a b Hamber 2009
179. ^ A timeline and overview can be found in Rovelli 2000
180. ^ 't Hooft & Veltman 1974
181. ^ Donoghue 1995
182. ^ In particular, a perturbative technique known as renormalization, an integral part of deriving predictions which take into account higher-energy contributions, cf. Weinberg 1996, ch. 17, 18, fails in this case; cf. Veltman 1975, Goroff & Sagnotti 1985; for a recent comprehensive review of the failure of perturbative renormalizability for quantum gravity see Hamber 2009
185. ^ Green, Schwarz & Witten 1987, sec. 4.2
186. ^ Weinberg 2000, ch. 31
187. ^ Townsend 1996, Duff 1996
188. ^ Kuchař 1973, sec. 3
191. ^ Isham 1994, Sorkin 1997
192. ^ Loll 1998
193. ^ Sorkin 2005
194. ^ Penrose 2004, ch. 33 and refs therein
195. ^ Hawking 1987
196. ^ Ashtekar 2007, Schwarz 2007
198. ^ section Quantum gravity, above
199. ^ section Cosmology, above
200. ^ Friedrich 2005
204. ^ See, e.g., the electronic review journal Living Reviews in Relativity
Further reading
Popular books
Beginning undergraduate textbooks
Advanced undergraduate textbooks
• Ludyk, Günter (2013). Einstein in Matrix Form (1st ed.). Berlin: Springer. ISBN 978-3-642-35797-8.
Graduate-level textbooks
External links
• Courses
• Lectures
• Tutorials
Optical Fiber Sensing Using Quantum Dots

Pedro Jorge 1,*, Manuel António Martins 2, Tito Trindade 2, José Luís Santos 1,3 and Faramarz Farahi 4

1 Unidade de Optoelectrónica, INESC Porto, Rua do Campo Alegre, 687, 4169-007 Porto, Portugal
2 Department of Chemistry - CICECO, University of Aveiro, 3810-193 Aveiro, Portugal
3 Dept. Física, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre, 687, 4169-007 Porto, Portugal
4 Department of Physics & Optical Science, UNC Charlotte, 306-A Grigg, Charlotte, NC 28223-0001
* Author to whom correspondence should be addressed.

Sensors 2007, 7(12), 3489-3534; doi:10.3390/s7123489 (Review). Received 19 November 2007; accepted 20 December 2007; published 21 December 2007. © 2007 by MDPI.
Reproduction is permitted for noncommercial purposes.
Recent advances in the application of semiconductor nanocrystals, or quantum dots, as biochemical sensors are reviewed. Quantum dots have unique optical properties that make them promising alternatives to traditional dyes in many luminescence based bioanalytical techniques. An overview of the most relevant progress in the application of quantum dots as biochemical probes is presented. Special focus is given to configurations where the sensing dots are incorporated in solid membranes and immobilized on optical fiber or planar waveguide platforms.
Keywords: quantum dots; biochemical sensors; optical fibers
The advent of Nanotechnology, introducing control over matter at the nanometer scale, has produced a new class of materials with novel properties, creating new possibilities in a diversity of domains. In particular, biotechnology is already taking advantage of the versatility of nanoparticles with a variety of sizes, shapes and compositions (metal, polymer, semiconductor,…) [1-3]. The reduced dimensions of the particles are a key factor leading to the possibility of enhancement and tailoring of many of the material properties (electrical, optical, chemical…) which, at such scale, become size and/or shape dependent. Enhanced mechanical properties and tunable light scattering or luminescence due to quantum size effects are examples of features that can be controlled at the nanoscale. In addition, the small size of the particles provides a large interfacial area which enables bioconjugation (i.e., combination of the nanoparticle with biomolecules like antibodies), allowing such properties to be integrated in biological systems. In this way, the performance of conventional techniques can be largely improved and an entirely new set of exciting applications becomes available [4-6].
In this context, luminescent semiconductor nanocrystals, or quantum dots (QDs), are particularly attractive for biochemical sensing and imaging applications. Fluorescence is the basis of a large number of bioassays and chemical sensing techniques. In this regard, the unique optical properties of QDs are highly favorable when compared to those of traditional molecular fluorophores. The ability to tune their luminescence characteristics by particle size control, combined with relatively high quantum yields, narrow fluorescence emission, a very broad absorption spectrum and outstanding photo-stability, provides new solutions to many of the problems associated with traditional luminescence sensors and holds the promise of a completely new set of applications [7-9].
QDs' unique features have attracted considerable interest and the variety of new applications is expanding quickly. Although this is a relatively recent field of research, the synthesis of QDs, together with their applications in a diversity of domains, has been the subject of several reviews. Early reviews focused on synthesis, functionalization and material properties, and provided a prospective view of the field [10, 11]. More recently, the use of QDs as labels in biomedical imaging or in immunoassays has seen several breakthroughs which were analyzed by different authors [8, 12, 13]. Very good reviews have been published on the use of QDs in new sensing strategies, giving either a general overview of the field [14], or a more focused discussion on biosensing applications and future trends [15-17]. Other fields of interest such as laser applications or fundamental physical experiments have also been reviewed [18-20].
Nevertheless, due to the large research efforts dedicated to this topic, a number of sensing applications have recently appeared and this is currently an expanding field of research. In particular, the immobilization of nanocrystals in solid membranes and their use in combination with optical fiber or planar platforms has seen some progress. Such arrangements are a key step towards the development of advanced analytical instrumentation, aiming at small scale and multiparameter capability. For this, QDs are very well suited because of their high photostability and multiplexing ability.
Quantum dots: a brief overview
Quantum Dots are small particles of a semiconductor material, consisting of a few hundred to a few thousand atoms. Their small size, ranging for most systems from 1 nm to 10 nm, is largely responsible for their unique optical, electrical and chemical properties.
The main procedures by which QDs can be fabricated, attaining 3D confinement of the charge carriers, include diffusion controlled growth, lithography, epitaxy and colloidal chemistry. Combined lithographic patterning and etching is a possible pathway [21], and epitaxial growth over pre-patterned substrates has also been investigated [22]. More recently, techniques based on self-assembly mechanisms have been successfully explored [20]. Nevertheless, these rely on expensive MBE or MOCVD systems. In all physical deposition approaches the resulting QDs are embedded in a solid matrix and are, therefore, better adapted to integrated optoelectronic devices (QD lasers and detectors). Other techniques have been explored for applications requiring further chemical manipulation and processing.
Alternative approaches include the synthesis of QDs using colloidal chemistry techniques, which are very often associated with molecular precursor chemistry. In these methods the semiconductor nanoparticles are homogeneously generated in a coordinating solvent or in the presence of a chemical stabilizer. The synthesis of QDs in high boiling point solvents has been particularly successful in yielding nearly monodispersed QDs with very narrow emission bands [10, 18]. Because QDs produced in this way have their surfaces capped with organic ligands, they are compatible with further (bio)chemical surface modification. They are thus particularly suited for sensing applications involving luminescence. This review will focus on the sensing applications of QD-based devices.
Quantum confinement and optical properties
The main differences between a macrocrystalline semiconductor and the corresponding nanocrystalline material arise from two fundamental factors that are size related. The first is associated with the large surface area to volume ratio of nanoparticles, and the second is related to the three-dimensional quantum confinement of their charge carriers. In the particular case of semiconductors, quantum confinement takes place whenever the nanoparticles' size is smaller than the exciton Bohr radius of the bulk semiconductor, aB (typically in the 1 nm to 10 nm range, which is still much larger than the semiconductor lattice constant, <1 nm).
A direct consequence of the 3D confinement is that the energy levels of the excited carriers (exciton) become discrete and approach molecular behavior as the particle size decreases. An ideal quantum dot can be treated like a spherical quantum box, and will display an atomic-like absorption spectrum. The energies of the quantized states in the conduction and valence "band" can be calculated using the Schrödinger equation and the effective mass approximation. However, considering that both electron and hole are confined into a space smaller than the Bohr radius of the exciton, they cannot be considered as mutually independent, making the solutions of the equation harder to obtain. Many authors have suggested different approaches to this problem, from which resulted approximate analytical expressions or numerical solutions that are in good agreement with experimental data [23]. Brus et al. [24] showed that, for cadmium sulfide (CdS) and cadmium selenide (CdSe) nanocrystals, the size dependence of the fundamental electron-hole state, E1S1s, can be described by

$$E_{1S1s} = E_g + \frac{\pi^2 \hbar^2}{2 a^2 \mu} - 1.786\,\frac{e^2}{\varepsilon a}, \qquad (1)$$

where a is the particle radius, μ the electron-hole reduced mass, e the electronic charge and ε the dielectric constant of the bulk semiconductor. The first term on the right, Eg, corresponds to the bulk bandgap energy, the second term accounts for the confinement energy and the third term for the electron-hole Coulomb interaction. Equation (1) shows that, besides inducing energy quantization, decreasing the dot size makes the Coulomb term shift the total energy to lower values with an a⁻¹ dependence. Conversely, the confinement term adds to the total energy with an a⁻² dependence. Thus, for smaller dot sizes the confinement term becomes dominant, and the optical spectrum shows a blue shift of the band edge energy as the QD's size is decreased below aB. The smaller the dot, the greater the blue shift observed relative to the typical Eg of the bulk semiconductor.
Like in bulk semiconductors, the band edge energy determines the absorption onset and the luminescence peak emission (near band edge). Therefore, QDs will absorb any photon with energy $h\nu > E_{1S1s}$ and display an emission peak around the same energy value. In this context, Equation (1) shows that QD technology allows band gap tuning by control of the nanoparticle size (a) or material type (ε).
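As a rough illustration of this size tuning, the following Python sketch (added for illustration, not part of the review) evaluates Equation (1) for CdSe. The bulk parameters are assumed literature values, the Coulomb term is rewritten in SI units, and the effective-mass approximation is known to overestimate confinement for the smallest dots, so the numbers are only semi-quantitative:

import math

# Physical constants (SI)
hbar = 1.0546e-34   # reduced Planck constant, J*s
m_e = 9.109e-31     # electron mass, kg
q = 1.602e-19       # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m

# Assumed bulk CdSe parameters (approximate literature values)
Eg = 1.74 * q       # bulk bandgap, J
mu = 0.1 * m_e      # electron-hole reduced mass
eps_r = 10.6        # relative dielectric constant

def brus_energy(a):
    """Band-edge exciton energy (J) from Equation (1) for a dot of radius a (m);
    the Coulomb term is written in SI form, e^2/(4*pi*eps0*eps_r*a)."""
    confinement = math.pi**2 * hbar**2 / (2 * a**2 * mu)
    coulomb = 1.786 * q**2 / (4 * math.pi * eps0 * eps_r * a)
    return Eg + confinement - coulomb

for a_nm in (1.5, 2.0, 3.0):
    E = brus_energy(a_nm * 1e-9)
    wavelength_nm = 1e9 * 6.626e-34 * 2.998e8 / E  # lambda = h*c/E
    print(f"radius {a_nm} nm -> band edge near {wavelength_nm:.0f} nm")

The computed trend, shorter emission wavelengths for smaller dots, matches the size series discussed below.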
As the nanoparticle size decreases, an increasing overlap of the confined charge carriers' wave functions translates into strongly enhanced absorption coefficients. Because the number of possible transitions grows with photon energy, the absorption coefficient increases steadily as the excitation wavelength is shifted towards the blue. On the other hand, emission processes in QDs result from electron-hole recombination and are strongly dependent on the competition between such radiative processes and non-radiative recombination mechanisms. Non-radiative processes occur mainly at defects located at the nanocrystal surface. In this context, the large surface/volume ratio of QDs makes it possible to obtain enhanced quantum yields by controlling their surface chemistry and passivating surface defects. In a particularly successful strategy, overcoating the nanocrystal core (CdSe) with an outer shell of a higher bandgap semiconductor (ZnS) resulted in quantum yields in the 50% range. Besides the increased brightness, core-shell systems provide increased photostability and chemical resistance [25, 26]. More recently, highly luminescent CdSe core nanocrystals capped with a multi-shell layer (CdS and ZnS) have been reported, displaying quantum yields in the 70-85% range [27].
While an ideal quantum dot displays an atomic-like discrete spectrum, in synthetic nanocrystals the linewidths are limited by thermal broadening. As a result, the emission spectrum of a single QD can have, at room temperature, a full width at half maximum (FWHM) of a few nanometers. However, the major source of emission broadening in QDs comes from their size distribution. In a colloidal dispersion the solid particles have approximately, but not exactly, the same size. Because the emission of a QD is related to its size, the slight differences in size result in slight variations in the emission wavelength. As a consequence, the emission spectrum of a nanocrystal ensemble will be much broader than the individual QDs' spectra. Currently, it is possible to achieve size distributions with variation lower than 5%. This translates into a FWHM of approximately 25-30 nm, which is quite narrow in comparison to the spectral response of many luminescent dyes.
A typical absorption spectrum of a CdSe QD ensemble is shown in Figure 1 (left). In contrast with traditional dyes, which have broad emission spectra with a characteristic long red tail, nanocrystals present a symmetrical (approximately Gaussian) and relatively narrow emission profile and a very broad absorption spectrum. These two features combined make it possible to obtain a large equivalent Stokes shift, which can be tuned by setting the excitation at any wavelength shorter than the emission peak, facilitating the discrimination of the QDs' luminescent emission. In addition, it also introduces the possibility of exciting different QDs with the same optical source. This can be observed in Figure 1 (right), where the emission of different CdSe and CdTe QDs, coated with a ZnS shell, was obtained by excitation with a single blue LED.
Depending on the particle size, the emission of CdSe quantum dots can be continuously tuned from 465 nm to 640 nm, corresponding to diameters ranging from 1.9 nm to 6.7 nm, respectively. This range can be extended with cadmium telluride (CdTe) quantum dots, which emit further into the red (600 nm to 725 nm). Core-shell systems of cadmium-based semiconductor materials overcoated with ZnS provide bright luminescence over the whole visible range and are the preferred systems in most sensing applications. Figure 2 shows typical samples of organically capped CdSe/ZnS QDs, dispersed in toluene, under UV irradiation at room temperature. Higher-energy emission is possible with CdS (350 nm to 470 nm). Infrared emission is also available (780 nm to 2000 nm) using indium arsenide (InAs) or lead selenide/sulfide (PbSe/PbS) nanocrystals.
In comparison with conventional molecular dyes, QDs have bright, narrow and stable photoluminescence, provide larger equivalent Stokes shifts, and allow their properties to be tuned. This set of characteristics gives nanocrystals a great advantage in many traditional luminescence applications. All these features are highly attractive for optical fiber and planar platform applications.
In particular, QDs' wavelength multiplexing ability is very well suited for many existing optical fiber or microarray configurations. In this regard, having identical nanoparticles from the material point of view, but with different optical properties, introduces the possibility of batch processing multiparameter/multipoint sensors with much simpler procedures. Each set of nanoparticles can be prepared with similar quantum yield, surface chemistry and set of environmental sensitivities. Conversely, when using traditional dyes, different emission wavelengths are provided by completely different chemical species, sometimes presenting dramatically different quantum yields and chemical properties.
In fiber and planar waveguide platforms, the increased photostability of core-shell QDs is a key feature. In fact, in such devices sensing can take place at very small scales, and small sample volumes can easily be exposed to very high excitation energy densities. While most dyes present severe photodegradation when illuminated by energetic radiation, quantum dots have been shown to be photostable in most situations. Although photobleaching has been reported in bare dots, nanocrystals with an adequate protective shell are known to remain extremely bright even after several hours of exposure to moderate to high levels of UV radiation. On the other hand, the luminescence emission of common dyes can vanish completely after a few minutes.
In practice, the photostability of nanocrystals depends on their surface coatings (bare dots, core shell, or other) and on the surrounding environment (solution or solid matrix). Depending on these circumstances, different behaviors have been observed. Some authors reported photo-enhancement, i.e., a gradual increase of the luminescence intensity under UV irradiation, both in solution and in solid hosts [28, 29]. Both reversible and non-reversible photo-enhancements were observed and, usually, both components can coexist. The increase in luminescence is explained by a light-induced rise of the dots' potential barrier, preventing excited carriers from escaping the QDs and favoring radiative recombination. Photo-induced passivation of surface defects is another possible mechanism involved. After enhancement, slow photo-degradation usually follows. The time scales of these behaviors depend strongly on the power densities to which the samples are submitted.
For CdSe nanocrystals encapsulated in a glass matrix, Verma and collaborators observed that the photoluminescence intensity showed a linear dependence on the excitation optical power up to moderately high power densities (approx. 400 W/cm²). For higher excitation powers, excited carrier densities increased dramatically and the observed dependence was no longer linear. In addition, shifts in the band-edge luminescence could be observed, due either to 'band-filling' processes (blue shift) or to photo-induced heating of the sample (red shift) [30].
Much research has been carried out regarding QDs' photostability, and apparently contradictory results are often reported. However, these apparent contradictions are often caused by differences in host environments and/or distinct nanocrystal surface chemistries. While QDs' surface chemistry is the key to obtaining highly photostable nanocrystals, it is also the way by which the nanoparticles can become sensitive to a given analyte, can be rendered water soluble, or can become biocompatible. This control is typically achieved by functionalizing the dot surfaces with different chemical or biological ligands and will ultimately determine the usefulness of QDs as a sensing tool.
Chemical synthesis
Long before the popularization of QDs, a theoretical framework concerning QDs' properties had already been established [31]. However, pioneering synthesis methods failed to provide high quality QDs. Arrested precipitation in solution generally yielded QDs with a low degree of crystallinity and broad size distributions, while synthesis in confined structured media, such as porous materials, posed problems with the recovery and functionalization of the QDs. In the literature, there are several reviews on the synthesis of QDs [10, 32]; in this section the focus will be on aspects related to the chemistry of QDs of II/VI materials.
It was after the landmark work of Murray, Norris and Bawendi that high quality QDs became readily available in reasonable amounts [33]. These authors showed that CdE (E = S, Se, Te) semiconductor nanocrystallites, with close control of their size, could be obtained by the injection of dimethylcadmium and the respective chalcogenide source into a hot solvent such as tri-n-octylphosphine oxide (TOPO). The injection of these precursors into the hot solvent (typically between 200-300°C) results in a short burst of nuclei generated in homogeneous solution. This rapid nucleation largely depletes the reactants and limits the occurrence of further nucleation, so that subsequent growth occurs almost exclusively through Ostwald ripening. The molecules of the high boiling point solvents normally used also have the capability to coordinate to the nanocrystal surfaces, hence acting as barriers against the coalescence of the nanoparticles into bulk powders. The QDs produced by this method and similar strategies are of high quality, i.e. they consist of nanocrystals with narrow size distributions and organically capped surfaces. The size of the nanocrystals can be controlled by adjusting experimental parameters such as the type of solvent, reaction temperature and time, with the size of the particles increasing with increasing reaction temperature. Control of the injection rate and temperature during further additions of molecular precursors to the reacting mixture also allows control of the shape and of the polymorph observed in the final QDs [34].
The original TOPO method has been successfully implemented to coat the QDs core with a higher bandgap semiconductor (typically, ZnS). This can be achieved by injection of solutions of dimethyl or diethylzinc and hexamethyldisilathiane ((TMS)2S) into the hot solvent containing the core CdE nanocrystallites [26].
The major drawback of the above method is the high toxicity and difficult manipulation associated with some of the starting chemicals. For instance, Cd(CH3)2 is extremely toxic and pyrophoric, and these limitations become more relevant for high temperature synthesis. Although maintaining some of the conceptual characteristics of the original synthetic method [31], alternatives to this approach have emerged in the following years. One of these approaches was introduced by Trindade and O'Brien and involves the thermal decomposition of single-molecule precursors, i.e. source compounds that contain in the same molecule the elements required for the formation of the semiconductor, such as alkyldiseleno- or alkyldithiocarbamato metal complexes [35, 36]. This method is particularly attractive when readily available air-stable precursors can be employed in a one-step process to obtain nanocrystalline materials; this has clearly been the case for the synthesis of several metal sulphides from dithiocarbamato or xanthate complexes [36, 37]. This approach has also been investigated to coat CdSe QDs with ZnS and CdS, either by the thermal decomposition of the respective metal dithiocarbamate complexes [38] or via sonochemical decomposition of metal xanthates [39].
An important advance in the synthesis of QDs using TOPO related methods was carried out by Peng and Peng during their mechanistic studies of this type of reaction to produce semiconductor nanocrystals [40]. They observed that by introducing a strong ligand in the reacting mixture, such as hexylphosphonic acid (HPA) or tetradecylphosphonic acid (TDPA), Cd(CH3)2 was immediately converted into Cd(II) HPA/TDPA complexes which in turn could act as the cadmium source. This observation was of great relevance because these cadmium complexes can be easily obtained by dissolving other Cd(II) compounds, such as CdO or Cd(CH3CO2)2, in a range of organic solvents (such as phosphonic acids, fatty acids, or amines). These cadmium compounds are easier to handle and less expensive when compared to Cd(CH3)2, thus facilitating the scale-up synthesis of QDs of II/VI materials using diverse types of solvents (e.g. long chain carboxylic and phosphonic acids as well as amines) [41].
Surface modification and functionalization
At present, the lyothermal syntheses using high boiling point solvents are the most commonly used routes to QDs of II/VI materials. Although organically capped QDs produced by these approaches are of high quality, there has been interest in developing water based syntheses of QDs with comparable quality. In fact, for a number of analytical applications, in particular biologically related ones, stabilization of the QDs in aqueous medium is crucial. Quantum dots of mercury telluride (HgTe) and cadmium telluride (CdTe) with superior optical properties have been reported as interesting examples [42]. Also, aqueous based preparations of thiol-capped CdS [43], CdSe [44] and CdTe [45-47] have been reported. These generally involve the reaction between a water soluble Cd(II) salt and a chalcogenide source in the presence of thiolates, which act as stabilizers similarly to the TOPO molecules in the above methods. However, in this case the capping molecules are amphiphilic molecules containing a thiol group strongly coordinated to the QD surfaces and a polar group (−OH, −COOH and −NH2) that confers compatibility of the QDs with aqueous solutions. In these methods, the availability of sources for the chalcogenide (E = S, Se and Te) is limited, and the corresponding acids (H2E) have been used, either directly or via salts formed during their preparation (NaHE). Although the lack of source alternatives has hampered fast development of these aqueous-based methodologies, they are promising synthetic strategies to expand the range of water soluble QDs. For instance, precursors like (NH4)2Te (obtained from the reaction between Te metal and ammonium hydroxide in the presence of aluminium) have been introduced recently for the synthesis of CdTe [48].
Another possibility to obtain water compatible QDs is to promote appropriate chemical reactions at the surface of organically capped QDs. Phase transfer of such QDs to aqueous solution requires surface functionalization with hydrophilic ligands. To this end, two main general routes have been developed: i) exchange of the native hydrophobic ligands by hydrophilic molecules, commonly designated as ‘cap exchange’, or ii) encapsulation of QDs in a heterofunctional coating, through hydrophobic interaction with the capping molecules.
In the first approach, 'cap exchange' involves the replacement of native capping molecules (e.g. TOPO) with ambidentate organic ligands. These are bifunctional molecules that on one hand can coordinate to the QD surface through a soft acidic group (usually a thiol), and on the other hand have hydrophilic groups (for example carboxylic or amine groups) which point outwards from the QD surfaces to bulk water molecules. In this regard, several monothiolated ligands have been used, but a decrease of luminescence quantum yields over time has been reported [49-53]. A possible explanation relates to the coordination of water molecules to the QD surface. In fact, substitution of monothiols by polythiols or phosphines usually improves stability [54, 55]. Improved stability can also be achieved by 'cap exchange' and encapsulation in dendron boxes [56, 57]. These supramolecular structures offer superior protection to the QDs due to the cross-linking of the dendron ligands.
The second strategy that can be used to promote hydrophilicity of capped QDs involves the growth of amorphous silica shells, which can be further functionalized with other molecules or polymers [58]. Silica coating starts with a 'cap exchange' reaction where the native ligands are substituted by silanes, e.g. mercaptopropyltris(methyloxy)silane (MPS). Condensation of the methoxysilane groups onto the dot surfaces occurs through hydrolysis reactions in alkaline media. Further growth of the SiO2 shells can be achieved through hydrolysis of a silicon alkoxide (e.g. TEOS) using the Stöber method. The reactivity of amorphous SiO2 is widely used in different branches of chemistry, and this siloxane chemistry can be employed to anchor a variety of organic molecules to the SiO2 surfaces [59].
The exchange of the native capping ligands is not a necessary condition to tailor hydrophilic surfaces in QDs. Encapsulation of the QDs is also possible by 'wrapping' the dots with amphiphilic macromolecules. The hydrophobic moieties of the cap interact with the native TOPO molecules (or analogous functional ligands) at the QD surface, whereas the hydrophilic outer block points to the aqueous phase. This type of QD encapsulation has been reported for amphiphilic copolymers [60-62], phospholipids [63-65] and 'bulky' molecules such as tetrahexyl ether derivatives of p-sulfonatocalix[4]arene [66, 67]. However, this method is not restricted to amphiphilic molecules. For instance, QDs can be encapsulated with polymers via phase separation in oil-in-water microemulsions in the presence of a surfactant [68]. Solvent evaporation and crosslinking of the polymer structure results in robust nanostructures of polymer encapsulated QDs [68].
The combined use of polymers and QDs can produce an endless number of nanocomposites with diverse properties. In addition to the QDs' unique properties, there is interest in exploiting synergistic effects which might arise from their intimate combination with the polymer matrix. For example, the colloidal stability of organically capped QDs in organic media, provided by the native capping molecules, can be exploited to prepare polymer nanocomposites by in-situ miniemulsion polymerization in the presence of the dots. This technique was prototyped with a series of inorganic nanofillers, such as TiO2 [69-71], SiO2 [72] and carbon black powders [73], in which the nanofillers were dispersed in the monomer without any previous surface treatment. More recently, the encapsulation of organically capped nanoparticles (QDs and nanometals) by in situ miniemulsion polymerization was demonstrated for PS [74, 75], PBA [74, 76] and PS/PMMA [77]. The optical behavior of the nanocomposites seems to depend on a number of parameters, which include the type of polymer used. Studies on the luminescence behavior of homogeneous CdSe/PBA show highly luminescent nanocomposites, unlike their polystyrene analogues [78].
Note that in this strategy there is no formation of strong covalent bonds between the capping molecules and the polymer molecules. In fact, the experimental conditions need to be optimized in order to limit migration of the dots to the surface of the polymer beads. Nevertheless, these nanocomposites contain functional groups that allow surface modification by relatively simple methods. For example, PBA based nanocomposites contain polyester groups that can be hydrolyzed to provide carboxylic functionalities at the surface, hence allowing biofunctionalization with antibodies via formation of peptide linkages [76].
The functionalization of QDs with polymerizable groups can induce graft copolymerization with vinyl monomers, thus leading to a homogeneous distribution of QDs inside polymer latexes. The introduction of vinyl-functionalized CdSe/ZnS in styrene droplets, by exchange of the native capping with 4-mercaptovinylbenzene, has been investigated in miniemulsion polymerization, but migration of the dots to the surface was still observed, along with deterioration of the photoluminescence of the final material [75]. A hybrid method that comprises, in a first step, the silica encapsulation of the CdSe/ZnS dots, followed by functionalization of the silica surface with methacryloxypropyltrimethoxysilane (MAS) and then graft copolymerization, gave core-shell QD-polymer nanocomposites, but again there was a decrease in the luminescence of the nanomaterials [79].
Figure 3 summarizes some of the innovative chemical strategies that have been used to produce organically capped and/or polymer encapsulated QDs, as discussed above. However, there is still a number of practical issues which need improvement. The broad particle size distribution of the nanocomposites and difficulties in obtaining a homogeneous distribution of QDs inside the polymer particles, sometimes associated with a detrimental effect on the optical behavior of the nanocomposites, led researchers to introduce new polymerization methods, such as surface initiated controlled/living polymerization. This type of approach has been successfully demonstrated for the controlled growth of dense polymer brushes from several inorganic surfaces, such as gold [80, 81], SiO2 [82, 83] and magnetic nanoparticles (magnetite/PMMA) [84]. More recently, these surface initiated methods have been investigated to attain the controlled growth of polymeric phases from the surface of functionalized QDs. As such, there is intense research in applying synthetic techniques such as reversible addition–fragmentation chain-transfer polymerization (RAFT) [85], nitroxide-mediated polymerization (NMP) [86] and atom-transfer radical polymerization (ATRP) [87] to the functionalization of QD surfaces.
Sensing mechanisms and applications
The combination of appealing optical properties with the ease of bioconjugation makes semiconductor nanocrystals an attractive tool for a wide variety of applications. In the field of optical sensors, the ability to tune the QDs' optical properties and to tailor the chemical and biological characteristics of their surface contributes to an increasing number of sensing strategies. An overview of the use of QDs in different fields of optical sensing will be addressed in the following sections. Focus will be given to key developments and the latest reported advances, particularly those with potential for future optical fiber sensing applications.
Physical sensors
While at present, some of the most attractive features of QDs are being explored in biosensing applications, their temperature behavior renders them excellent temperature probes with many potential uses [88-90].
Walker et al. first characterized the temperature response of colloidal quantum dots immobilized in polymer hosts [91]. In this work, core-shell CdSe/ZnS nanocrystals entrapped in poly(lauryl methacrylate) (PLMA) were subjected to temperature changes while excited with a 488 nm laser line. Monitoring the photoluminescence of the doped polymer membrane revealed that the peak wavelength (λpeak), the linewidth, and the intensity of this emission were all strongly temperature dependent. A blue shift of 20 nm in the peak emission was observed as the temperature decreased from 42 °C to −173 °C. In the same temperature range, the FWHM decreased from 26 nm at higher temperatures to 22 nm at lower temperatures. However, the strongest effect was observed in the photoluminescence intensity, which increased by a factor of five as temperature decreased. In particular, for the near ambient temperature range (5 °C to 40 °C), the photoluminescence intensity change was linear and on the order of −1.3% per °C. It was shown that, in this temperature range, the wavelength shift was small (∼2 nm) and the FWHM variation was negligible (<1 nm). All the changes were reversible with temperature. Good reproducibility was observed even after 3 hours of continuous irradiation, demonstrating the high photostability of immobilized CdSe/ZnS QDs. This temperature behavior was essentially identical for QDs excited at different wavelengths (460, 530, and 580 nm) and in different host materials (polymer and sol-gel).
This work established the suitability of QDs as temperature references in luminescence based sensing applications. The fact that sensing was achieved within a solid host opened a pathway for future works with optical fiber and waveguide platforms.
Chemical sensors
In contrast to the rapid, widespread use of QDs in bio-imaging and sensing applications, the use of such semiconductor nanoparticles as chemical sensing probes had a somewhat slower start [11]. Until very recently, few reports had been published concerning the application of QDs as luminescent indicators for the detection of chemical species. Nevertheless, the luminescence of core QDs can be very sensitive to the surrounding chemical environment. This can be the path for using nanocrystals as chemical sensors, provided that some selectivity can be achieved. The desired selectivity can be controlled by chemically tailoring the outer surface of the semiconductor nanoparticles. The methods referred to above to control the nanocrystals' solubility and stability are the starting point from which further functionalization will allow QD-based sensing indicators. Presently, a significant number of chemical sensing strategies using QDs are being explored [14].
Coating the QDs' surfaces with suitable ligands can have a strong effect on their luminescent response to specific chemical species. In fact, the presence of the analyte can quench or enhance the nanocrystal luminescence, depending on the functionalization strategy. In a representative work, Chen and Rosenzweig [92] were able to alter the selectivity of CdS nanoparticles to respond either to Zn2+ or Cu2+ ions, simply by changing their capping layer. They showed that, while polyphosphate-capped CdS quantum dots responded to almost all mono- and divalent metal cations (showing no ion selectivity), thioglycerol-capped CdS nanocrystals were sensitive only to Cu2+ and Fe3+: their luminescence was quenched by iron(III) and copper(II) but was not affected by other ions occurring at similar concentrations. On the other hand, the luminescence emission of L-cysteine-capped CdS quantum dots was enhanced in the presence of zinc ions but was not affected by other cations such as Cu2+, Ca2+ and Mg2+. Different mechanisms were responsible for the luminescence quenching or enhancement, ranging from electron transfer to filter effects; the processes depend on the QD-probe/ion combination and are discussed in detail in [92]. The quenching by iron(III) interfered with the detection of copper and zinc; however, it was attributed to an inner filter effect and could be eliminated by adding fluoride ions to the solutions in order to form a colorless complex. Using this set of QD probes, the authors established the selective detection of zinc and copper in physiological buffer samples, where several other metal ions were present. Quantitative measurements were performed, and detection limits of 0.8 μM and 0.1 μM were achieved for zinc(II) and copper(II), respectively. This was claimed to be the first use of semiconductor nanoparticles as selective ion probes in aqueous samples. More recently, the detection of copper in physiological buffer solutions, with a detection limit of 0.1 nM, was achieved by using CdSe/ZnS nanocrystals modified with bovine serum albumin [93].
The same principle was applied to the detection of inorganic anions. The use of surface-modified CdSe quantum dots functionalized with tert-butyl-N-(2-mercaptoethyl)-carbamate (BMC) groups for the determination of cyanide was demonstrated by W.J. Jin et al. [94]. Surface modification can also promote sensitivity to other ionic species. The detection of a diversity of ions like Fe(III), Ag(I), Mn(II), Ni(II) or I−, by means of a variety of processes such as inner filter effects, non-radiative recombination pathways, electron-transfer processes and ion-binding interactions, has been reported by several authors and adequately reviewed [14].
More complex chemical species have also been determined using different cappings on the QD surfaces. In a recent work, G. H. Shi et al. demonstrated that QDs coated with oleic acid were readily quenched by a diversity of nitroaromatic explosive molecules, such as 2,4,6-trinitrotoluene (TNT) or nitrobenzene (NB) [95]. Different quenching behaviors were observed for different molecules. Nevertheless, in most cases, modified Stern-Volmer equations could be used to provide linear calibration curves. Time-domain measurements showed that static quenching was the dominant process, since no change in luminescence lifetime was observed in the presence of the analyte. Detection limits of 10−6 to 10−7 M seem poor when compared to previously reported methods [96]. Nevertheless, the detection mechanism is comparatively simple and has room for improvement.
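For context, the basic Stern-Volmer relation behind such calibrations is reproduced below as a standard textbook expression; the cited work uses modified variants whose exact form is not reproduced here.

```latex
% Basic Stern-Volmer quenching relation (textbook form):
%   I_0, I : luminescence intensity without and with the quencher
%   [Q]    : quencher (analyte) concentration
%   K_SV   : Stern-Volmer constant
\frac{I_0}{I} = 1 + K_{SV}\,[Q]
% For purely static quenching (no lifetime change, as observed here) the
% same linear form holds, with K_SV the association constant of the
% ground-state QD-analyte complex.
```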
The sensing of gases using polymer-embedded nanocrystals has also been demonstrated [97]. The authors found that photoexcitation of the electrons in QDs could make the ligand monolayer readily permeable to gas molecules. Under photoirradiation, the PL properties of the nanocrystals responded to the environment in a reversible, rapid, and species-specific fashion. The photo-stimulated response was thought to be related to photon-phonon coupling of the optical absorption and emission processes occurring in the nanocrystals.
Based on the same principle, Potyrailo and co-workers more recently showed that when differently sized TOPO-capped CdSe QDs were incorporated into a polymer film and photoactivated, the nanocrystals of different sizes unexpectedly demonstrated distinct photoluminescence response patterns when exposed to polar and non-polar vapors in air [98]. The authors suggested that, by using principal component analysis (PCA) of the spectral response of a multi-QD doped film, a selective gas sensor could be obtained, thus introducing the possibility of multiparameter gas sensing. The principle was demonstrated by submitting a PMMA film doped with CdSe QDs of distinct average sizes (2.8 nm and 5.6 nm diameter) to different concentrations of methanol and toluene. In order to obtain a selective chemical response from the individual response patterns of the two distinct QD populations, the PCA technique was applied to different wavelength ranges. The resulting plots demonstrated that the responses of the sensor film to the different vapors were well separated in the PCA space, allowing for easy discrimination. Such work has the potential to complement existing solvatochromic organic dye sensors with more photostable and reliable sensor materials.
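The sketch below illustrates the PCA discrimination idea with synthetic two-band spectra; the band centers, response coefficients and noise level are invented for illustration and do not reproduce the published dataset.

```python
# Illustrative sketch of the PCA discrimination idea (not the authors' code):
# spectra of a film doped with two QD sizes, recorded under different vapors,
# are projected onto two principal components for visual separation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavelengths = np.linspace(500, 650, 300)  # nm, hypothetical detection window

def spectrum(a, b):
    """Two hypothetical QD emission bands with amplitudes a and b plus noise."""
    return (a * np.exp(-((wavelengths - 555) / 12) ** 2)
            + b * np.exp(-((wavelengths - 620) / 14) ** 2)
            + 0.01 * rng.standard_normal(wavelengths.size))

# Each vapor perturbs the two bands with a different (invented) intensity ratio.
methanol = np.array([spectrum(1.0 - 0.05 * c, 1.0 - 0.20 * c) for c in range(5)])
toluene  = np.array([spectrum(1.0 - 0.18 * c, 1.0 - 0.04 * c) for c in range(5)])

pca = PCA(n_components=2)
scores = pca.fit_transform(np.vstack([methanol, toluene]))
print(scores[:5])   # methanol trajectory in PCA space
print(scores[5:])   # toluene trajectory separates along a different direction
```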
Uncoated QDs are sensitive to the pH of their environment; however, they are also prone to oxidation and photodegradation. While a protective overcoating renders the nanocrystals highly photostable, it also greatly reduces their pH sensitivity. Functionalization of the dot surface with pH-sensitive ligands, however, can render the resulting nanoparticles responsive to acidity levels. The functionalization of CdSe/ZnS QDs with a chromophore whose absorption spectrum shifts according to the surrounding pH has been reported [99]. The absorption shift changed the relative overlap with the QDs' emission, modulating the fluorescence resonance energy transfer (FRET) efficiency according to pH. Using this method, the authors demonstrated an approximately linear variation of the QDs' luminescence intensity within the pH range 3-11 (a 30% intensity change was observed over the full range). It was suggested that similar organic ligands could be designed to alter their absorption and redox properties in response to target analytes other than H+ (or OH−), changing the luminescence of their conjugated QDs. A potential problem of this strategy lies in the fact that purely intensity based measurements are prone to be affected by several sources of optical power drift. Although the QDs' luminescence lifetime was shown to be pH dependent, average values on the order of 15 ns were reported. Therefore, the implementation of intensity-independent lifetime or frequency-domain interrogation techniques would require very fast optoelectronics.
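To make the last remark concrete, the standard frequency-domain lifetime relation (a textbook result, not taken from [99]) links the roughly 15 ns lifetime to the required modulation frequency:

```latex
% Frequency-domain lifetime readout: sinusoidally modulated excitation at
% angular frequency \omega produces an emission phase shift \phi obeying
\tan\phi = \omega\,\tau
% For \tau \approx 15\,\mathrm{ns}, a conveniently large phase
% (\phi \approx 43^\circ) already requires a modulation frequency
% f = \omega/2\pi \approx 10\,\mathrm{MHz}, hence the need for fast
% optoelectronics.
```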
In a more sophisticated approach, a QD-based ratiometric probe was designed by Bawendi's team [100]. This was achieved by conjugating TOPO-overcoated CdSe/ZnS QDs with a NIR-luminescent squaraine dye. pH-modulated FRET resulted in a luminescent emission displaying an isosbestic point at 640 nm, enabling, therefore, the implementation of ratiometric detection schemes. The generalization of this approach to sensing different biochemical species is possible, as the narrow, size-tunable emission spectrum of QDs, acting as donors, can be matched with the acceptor absorption features of an analyte-sensitive dye, maximizing FRET efficiency.
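The donor-acceptor matching invoked here is governed by the standard Förster relation, quoted below for context (a textbook expression, not specific to [100]):

```latex
% Foerster (FRET) transfer efficiency for a donor-acceptor pair:
%   r   : QD(donor)-dye(acceptor) separation
%   R_0 : Foerster radius, set by the spectral overlap of the donor
%         emission with the acceptor absorption
E = \frac{R_0^{6}}{R_0^{6} + r^{6}}
% Tuning the QD size tunes the emission spectrum, hence the overlap
% integral and R_0, which is what makes the approach generic.
```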
Overall, the different reported approaches indicate the feasibility of using surface-modified QDs as analytical probes for the determination of biochemical species. Nevertheless, most experiments took place in aqueous media, while fiber sensing instrumentation most often requires immobilization of the sensing dye.
Very recently, however, Ruedas-Rama et al. reported the development of multi-ion sensing using different combinations of QDs with selected ionophores or organic fluorophores embedded in a polymeric composite material [101]. In this work, the authors explored the differences in the efficiency of fluorescence resonance energy transfer between quantum dots and proximal organic dyes in order to discriminate ion-sensitive emission signals while using a single excitation wavelength. In particular, by co-immobilizing green-emitting QDs in acrylic nanospheres with lucigenin, or with valinomycin and a selected chromoionophore, Cl−-sensitive and K+-selective sensors could be created, respectively. By embedding the resulting nanospheres in a common polymeric matrix, dual-analyte ion sensors with no cross talk could be created. In the presence of K+ and Cl−, the fluorescence of lucigenin is quenched, and the QDs act as donors interacting with the deprotonated chromoionophore (acceptor) by FRET. Because each ion was sensed by a different, independent mechanism, the resulting luminescent response of the ensemble allowed the authors to independently monitor the presence of each one of the analytes in different spectral regions (see Figure 4).
A similar strategy was used to develop a selective Na+ sensor. This was achieved by co-doping polymer microspheres with TOPO-capped CdTe QDs, a sodium ionophore X, a Nile Blue derivative ETH 2439 and a lipophilic tetraphenylborate cation exchanger. These probes were shown to respond to Na+ in the 10−4 to 0.1 M range, at pH 4.8, with excellent selectivity towards common interferences [102]. The microspheres were prepared by sonic particle casting, a method that was shown to enable the implementation of dual-doped particles having precisely controlled amounts of two types of nanocrystals.
The fabrication of composite materials such as these is a very versatile technology that can be explored for the simultaneous sensing of a diversity of chemical species, both in solution-based assays and in solid platforms.
In a different approach, molecular imprinting technology was used to render the photoluminescence of semiconductor nanocrystals sensitive to specific molecules [103]. If the synthesis of a polymer is carried out in the presence of an imprint or template molecule, cavities are produced in the polymer that are highly selective for the imprint. C.I. Lin et al. prepared molecularly imprinted photoluminescent polymers (MIPs) using CdSe nanoparticles functionalized with 4-vinylpyridine. Several polymers containing the QDs were imprinted with different template molecules (caffeine, uric acid, L-cysteine). The resulting solid polymers were ground to a fine powder and sieved. The template molecules were then extracted from the obtained powder.
The detection of the analytes was performed by incubating the MIPs with the corresponding template molecules in aqueous solutions. It was observed that the photoluminescence emission of the MIPs was strongly quenched in the presence of the corresponding templates, while no quenching occurred in the presence of other molecules. Strong quenching was observed for the caffeine-imprinted polymer in the presence of caffeine, while similar molecular structures like theophylline and theobromine had no effect on the photoluminescence. Also, a control polymer, with no imprint, showed no change in the QDs' photoluminescence. These results demonstrated that it is possible to couple nanocrystals with the selective recognition capability of MIPs, opening several possibilities for the application of semiconductor nanoparticles in optical chemical sensing.
The fact that these sensing principles were demonstrated in solid hosts, where the sensing indicators were hydrophobically or covalently retained, introduces the possibility of transposing these technologies to optical fiber or planar waveguide platforms.
The unique optical properties of QDs have established them as appealing alternatives to traditional organic dyes in the context of biotechnology, offering potentially greater performance in fluorescence-based immunoassays and bio-imaging applications [12, 13, 104]. The introduction of QD technology into the biological domain involves chemical stabilization, control of the hydrophobic/hydrophilic properties and, finally, conjugation with a biomolecule of interest, which defines the bio-functionality. All these processes influence the luminescence properties of the original QDs. Nevertheless, several successful strategies have been developed for binding biomolecules, covalently or non-covalently, to the surfaces of QDs, and some of these bioconjugated QDs are already commercially available [105, 106]. The size of each QD, much larger than that of a single dye molecule, is compatible with simultaneous conjugation with more than one biomolecule. This provides QDs with the potential for increased sensitivity, multi-analyte detection with a single QD species, and other new functionalities. On the other hand, the increased size also raises some concern about interference with the biomolecules' mobility and functionality [12].
In one of the first reported bio-assay applications of CdSe/ZnS QDs, these were covalently coupled to a protein and exhibited a luminescence intensity 20 times higher than that of rhodamine [107]. Additionally, the QDs were reported to be 100 times more resistant to photobleaching. This allowed the authors to perform ultrasensitive detection at the single-dot level. However, an on/off behavior of single-dot emission was observed. This fluorescence 'blinking', also observable in some dye molecules, can have off-periods of several seconds in QDs, which can compromise single-dot measurements. Nevertheless, when QD ensembles are observed, this effect goes unnoticed due to the averaging of their luminescence emission [108]. While preserving the optical properties of the QDs, the authors also demonstrated that the attached biomolecules were still active and able to recognize specific analytes. The first example of a nanocrystal in-vitro immunoassay was the case of quantum dots labeled with IgG antibodies, which were recognized and agglutinated by polyclonal anti-IgG. The authors also demonstrated cell labeling by transporting QD-transferrin bioconjugates into cultured HeLa cells via receptor-mediated endocytosis.
The potential of QDs for multicolor assays was first demonstrated by Bruchez et al. when two differently sized CdSe/CdS QDs, emitting green and red luminescence, were specifically bound to different parts of 3T3 mouse fibroblast cells and excited by a single optical source [7]. In this pioneering experiment, some nonspecific binding was observed. Higher degrees of specificity are required in in-vivo applications, where background biomolecules can generate false positives. Stabilization and conjugation techniques evolved rapidly, and higher levels of specificity have since been achieved by different authors [60, 63, 109].
In pioneering work, Mattoussi and coworkers [109] prepared CdSe/ZnS QDs with mixed protein adaptors to provide nanocrystals with the specific recognition capability of antibodies. A representative example of such a labeling/sensing platform is illustrated in Figure 5. Positively charged avidin and maltose-binding protein with a positively charged tail (MBPzb) were self-assembled on the negatively charged surface of QDs capped with dihydrolipoic acid (DHLA); the avidin was then bound to biotinylated antibodies specific for Pgp. These QD probes were selective for cells that expressed detectable levels of labeled PgP-GFP (green fluorescent protein). The approach is a generic strategy, since different QDs can be conjugated to a wide range of antibodies, thus providing a whole new set of probes capable of very specific labeling of target molecules in imaging and sensing applications.
Since this seminal work, many similar strategies have been reported for highly specific in vivo applications. Xiaohu Gao et al. developed multifunctional QD probes, rendering them efficient cancer markers in living organisms [62]. CdSe/ZnS QDs protected by TOPO ligands were first encapsulated with an amphiphilic triblock copolymer for further protection against enzymatic degradation, thus also avoiding particle aggregation and detrimental luminescence effects. Different labeling solutions were then explored by coating the resulting nanoparticles with polyethylene glycol (PEG) and a variety of tumor-targeting ligands, such as peptides, antibodies or small-molecule inhibitors. The developed luminescent tags were shown to be extremely photostable, preserving their luminescence characteristics even over a pH range from 1 to 14 or in different salt conditions (0.01 M to 1 M). In vivo tumor labeling was then demonstrated in live mice, where the QD probes were delivered to tumors by both passive and active targeting mechanisms, exploiting the enhanced permeability and retention effect of cancer cells or the specificity of antibodies (in particular, QD tags conjugated with antibodies against prostate-specific membrane antigen, PSMA). Because many copies of the same labeling ligand can be linked to a single dot, binding affinities could be increased by ten orders of magnitude as compared to single-ligand markers. The authors also presented detailed studies on the in vivo behavior of the developed QD probes, including their biodistribution, nonspecific uptake, cellular toxicity and pharmacokinetics. The simultaneous imaging of multicolor QD-encoded microbeads in living animals was also demonstrated and compared with green fluorescent protein (GFP). QDs showed a much higher performance, offering high contrast against tissue autofluorescence and long-term imaging capabilities.
Many other possibilities have been reported by other authors, from QD-stained polymer beads for multiple color-coded cell labeling, to high-contrast non-specific vascular imaging, to the clear imaging and delineation of sentinel nodes using type II QDs (emission 700-800 nm), the latter providing in situ visual aid for the surgical removal of small, otherwise unnoticeable lesions [110-112]. Presently, QD-based probes are already established tools in many medical imaging applications. However, concerns have been raised over the nanocrystals' toxicity, which may hinder their clinical use. Although cadmium ions alone can be highly toxic to cells, coating the core-shell dots with polymer and lipid layers minimizes the risk of contamination. However, while adequately coated QDs showed little or no toxicity in a variety of cells and living animals, they can become toxic to cells when exposed to UV excitation for long periods of time [113]. UV light seems to induce semiconductor degradation by photolysis, leading to the release of highly toxic cadmium ions. Because the unique features of QDs are decidedly attractive for clinical use, a growing number of studies have lately been reported regarding the toxic effects of semiconductor nanocrystals and ways to circumvent them, such as the use of QDs without heavy metals or the application of lower-energy excitation [114, 115]. Nevertheless, in the near future, research on the clinical use of QDs will certainly be directed toward in vitro applications, although toxicity concerns should also be taken into account.
The strategies developed for specific labeling in imaging applications were quickly adapted to the improvement of biosensors. The CdSe/ZnS QD-antibody conjugates developed by Mattoussi et al. [8] were used in several direct, sandwich and competition fluorescence-based immunoassays to detect different toxins (staphylococcal enterotoxin B, cholera toxin) and small-molecule explosives like TNT. In the several assays performed, detection limits were achieved that were at least as low as those obtained with dye-based assays. The same authors performed the first study of fluorescence resonance energy transfer in QD-protein conjugates. Nanocrystals were used as energy donors to acceptor dye molecules attached to the conjugated proteins. This configuration enabled the exploration of the influence of parameters such as donor-acceptor spectral overlap and donor-acceptor ratio on the FRET efficiency [117]. The resultant FRET enhancement contributes to increased assay sensitivity. These studies were intended to evaluate the possibility of quantifying analyte concentrations by fluorescence quenching. Following these results, a prototype of a QD FRET sensor for sugar detection was presented. Each QD was conjugated, via His-Zn coordination, with 15 to 20 maltose-binding proteins (MBP), and with further processing a QSY-9 quencher was bound to each MBP. Concentration-dependent quenching of the QD emission was obtained for two reasons: (1) the QSY-9 absorption overlapped perfectly with the emission of the 555 nm emitting QDs, and (2) its separation distance from the QD center was within the range of the FRET critical radius. When maltose was added to the solution, the quencher was displaced and FRET was interrupted. An apparent binding constant of 7.0 μM was found from the titration curve with maltose. A response was obtained only when certain sugars were used, showing that the QD-MBP conjugate kept its specificity. These results confirmed QDs as excellent FRET donors, thus establishing a new tool for sensitive and specific biosensing.
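As an illustration of how such an apparent binding constant can be extracted, the sketch below fits a single-site displacement isotherm to hypothetical titration data; the data points, noise level and the isotherm form are our own assumptions, not the published analysis.

```python
# Minimal sketch (illustrative, not the published analysis): fitting an
# apparent binding constant to a maltose-displacement titration in which
# QD emission recovers as the QSY-9-labeled quencher is displaced.
import numpy as np
from scipy.optimize import curve_fit

def recovery(c, Kd, F_max):
    """Single-site displacement isotherm: fraction of emission recovered."""
    return F_max * c / (Kd + c)

maltose_uM = np.array([0.5, 1, 2, 5, 10, 20, 50, 100])   # hypothetical points
signal = (recovery(maltose_uM, 7.0, 1.0)
          + 0.01 * np.random.default_rng(1).standard_normal(8))

(Kd_fit, Fmax_fit), _ = curve_fit(recovery, maltose_uM, signal, p0=(5.0, 1.0))
print(f"apparent Kd ~ {Kd_fit:.1f} uM")   # ~7 uM, matching the quoted value
```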
These seminal studies on QD-based FRET mechanisms were widely explored in a diversity of configurations where the nanocrystals acted either as donors or as acceptors. The latter approach is not very effective, because the wide absorption spectrum of QDs makes it very difficult to avoid direct excitation of the QDs by the excitation source. The different schemes for biosensing using QD FRET were recently reviewed by Rebecca C. Somers et al. [17] and include nucleic acid recognition by hybridization with DNA-labeled acceptors, nanocrystal-to-nanocrystal FRET, and conjugation with analyte-sensitive chromophores or luminophores, among others. Such a variety of solutions makes FRET-based schemes the main mechanism for QD-based biosensing.
While early works were very generic proof-of-principle assays, more recently, systematic approaches to sensing important chemical or biological species have been addressed. In particular, a diversity of strategies has been reported for the detection of different proteins and virus species (e.g., the H9 avian influenza virus) [9, 118, 119]. While in most of the works described sensing is performed with the QDs dispersed in a solvent, recent works have reported the use of QD-doped polymer beads for immunodetection [120, 121]. In a particular example, the surface of polymer beads was functionalized with human IgG and used to detect goat anti-human IgG labeled with a luminophore [121]. In this particular case, the dual QD emission was used to unequivocally identify a particular group of beads, opening the way to multiparameter immunosensing.
A particularly important biosensing application of QDs is glucose sensing via luminescence. In the first reported QD glucose sensor, nanocrystals functionalized with carboxylic groups at their surfaces showed a decrease in fluorescence upon the introduction of a viologen quencher [122]. A strong recovery of the luminescence intensity was then observed as increasing amounts of glucose were added to the solution: glucose binding to the boronic acid-substituted viologen quencher/receptor introduced electronic and steric changes that reduced its quenching ability. Non-linear Stern-Volmer plots were obtained, allowing for sensor calibration. In a different approach, MSA (D,L-mercaptosuccinic acid)-capped QDs were used to probe the change in acidity resulting from the oxidation of glucose by the enzyme glucose oxidase, thus allowing for glucose quantification [123].
The unique features of QDs have established them as reference tools in imaging and sensing applications. Although most of the reported works refer to 'in solution' assays, important progress has been described in which biosensing took place using polymer-encapsulated QDs. Such strategies have a strong application potential in waveguide-based instrumentation.
Optical fiber sensing
Optical fiber technology can introduce some interesting features into optical sensing applications. Real-time remote detection, miniaturization, immunity to electromagnetic interference and multiplexing ability are some of the most important ones. This has long been recognized by the scientific community and, presently, optical fiber sensing is an established technology with many applications in industry, environmental monitoring and the medical field [124, 125]. The technology already commands a considerable commercial market, with a growing number of companies making their appearance. This is especially true for the measurement of physical parameters like strain, temperature or pressure. Moreover, as a result of a strong research effort and many technological advances in areas like materials science, immobilization chemistry and optoelectronics, the use of optical fibers in a wide range of chemical and biological sensing applications has been enabled [126, 127]. In this context, luminescence-based sensors are by far the most representative. In fact, in recent years, some optical fiber analytical instruments for the determination of biochemical parameters by luminescent methods became commercially available (e.g., oxygen- and pH-sensitive devices). However, in spite of great advances, some sensors suffer from limitations: among others, leaching and photobleaching of the sensing dyes are usually important problems that limit long-term stability and, therefore, the reliability and commercial viability of the sensing device [128, 129].
In this context, the unique features of QDs are highly attractive for guided-wave sensing platforms such as integrated optics and optical fiber. In fact, their unsurpassed photostability and multiplexing capability can provide interesting new solutions for fiber sensing. In spite of this, to date, the application of QDs in optical fiber technology remains largely unexplored. Nevertheless, very recent progress has been reported, and it will be addressed below.
QDs' immobilization at solid surfaces
A key step in luminescence-based sensing applications using a solid platform, such as an optical fiber, is to immobilize the sensing indicators at the surface. In an ideal membrane, the sensing dyes should be encapsulated while retaining their sensing properties, avoiding leaching into the solution while simultaneously allowing a chemical equilibrium to be established with the probed medium. This is a challenging task, and very often some compromises need to be considered. In this context, the immobilization of colloidal QDs in solid hosts is a necessary step towards the development of guided-wave QD sensing devices.
Sol-gel methods are extremely versatile because they enable the encapsulation of sensing dyes in a porous matrix by addition of the dyes prior to the polymerization stage. Choosing the right precursors makes it possible to obtain materials with different properties. In particular, by using diverse Si alkoxides, porous SiO2 glass membranes can be obtained that are highly compatible with guided-wave devices.
Different sol-gel procedures have been proposed to dope materials with QDs ([14] and references therein). However, achieving immobilization while maintaining the QDs' unique optical properties is not an easy task. In early attempts, the sensitivity of QDs to the surrounding environment often led to broad size distributions and poor stability [130]. Litran et al. reported the preparation of CdS/SiO2 xerogel composites using a sonocatalytic method. Cd(II)-doped monoliths were first prepared by ultrasound-promoted hydrolysis and were subsequently exposed to an H2S atmosphere at high temperature in order to induce the formation of CdS nanocrystals within the solid host. The authors showed that this technique led to matrices with finer porosity, yielding narrower QD size distributions. In addition, thermal annealing at high temperature provided long-term stability of the optical properties. Nevertheless, at high concentrations the aggregation of semiconductor particles leads to a loss of quantum confinement effects and consequent spectral broadening [131, 132]. In addition, the high temperature necessary for simultaneous synthesis and immobilization led to the formation of dense glasses. While these glasses can find application in optoelectronic devices or solar cell implementations, their use in most sensor applications is precluded. Several approaches have been investigated to overcome these limitations. Recent progress includes the use of supercritical CO2 drying, which yields highly porous aerogel structures amenable to further functionalization and liquid-phase interactions [133].
An efficient process to obtain concentrated and monodispersed quantum dots in a solid host is to prepare the QDs in a previous step and then add them to the sol before the aging or deposition steps. This enables proper passivation and functionalization of the dots to be performed in advance, thus resulting in highly photostable doped glasses [134]. Bullen and co-workers, for instance, reported the successful immobilization of QDs previously prepared by colloidal methods [135]. In a second stage, the nanoparticles were capped with aminoethylaminopropyltrimethoxysilane (AEAPTMS), enabling their dissolution in polar solvents (such as propanol or ethanol). The capped nanoparticles were then added to the reacting mixture, yielding a sol that could be used for the deposition of thin films on different substrates (SiO2 glass, soda-lime glass, silicon, polymers) by spin- or dip-coating techniques. The reported procedure allowed the homogeneous dispersion of QDs in thin films without affecting their intrinsic emission properties. The authors also showed that the final films presented some waveguiding properties that depended on the type of immobilized nanocrystals. As the refractive index of the host matrix can be adjusted, this method is compatible with the realization of QD-doped planar waveguide structures.
Although highly luminescent materials can be obtained by doping a sol-gel host with QDs, most of the reported applications are related to QD laser devices. To date, the implementation of a chemical or biochemical sensor using sol-gel QD-doped materials has been limited. Nevertheless, considering the variety of chemical and biological sensing applications using sol-gel immobilized organic dyes, such an implementation would be realistic.
Conversely, a considerable number of polymer-encapsulated QD sensing applications have been reported. A variety of materials and strategies have been addressed for chemical and biological sensing with polymer-encapsulated QDs. Some of the more representative approaches were addressed in previous sections and include PMMA CdSe films for temperature sensing [91], acrylic nanospheres for QD ion sensing [101], a variety of polymer beads for imaging and labeling applications [121], and QD MIP-based polymers for selective sensing [103], among others. A summary of the most representative immobilized-QD sensing applications is given in Table 1. While a significant number of these applications used QD-doped polymer micro- or nano-particles, and not bulk thin films, the former can certainly be explored in composite materials amenable to coating techniques. This introduces the possibility of use in solid platforms such as fibers or planar waveguides.
Alternative immobilization approaches include the deposition of nanocrystals onto hydrophilic substrates using the Langmuir-Blodgett technique [136] or layer-by-layer (LbL) electrostatic self-assembly. The latter technique is particularly attractive, as it allows very fine control of film thickness and deposition on surfaces of complex shape. In addition, it is compatible with nanocrystal functionalization and combination with further sensing dyes. Crisp and coworkers [137] reported the coating of optical fibers and the inner surface of glass capillaries with CdTe/PDDA - poly(diallyldimethylammonium chloride) - using LbL. Confocal microscopy data showed that the prepared coatings were uniform, continuous and highly luminescent.
Optical fiber sensing applications
There are several techniques able to immobilize QDs, thus allowing the preparation of thin films displaying the QDs' luminescent properties. While it is straightforward to obtain such luminescent materials, the range of solutions in which QDs are simultaneously immobilized and allowed to interact with the environment for sensing purposes is still very limited.
In this context, it is no wonder that most optical fiber sensing schemes involving QDs are thermometry applications, where the only interaction requirement is the establishment of thermal equilibrium between the sensing membrane and the environment.
Barmenkov et al. reported the first thermometry application using optical fibers and semiconductor nanocrystals [138]. In this work, the temperature dependence of the absorption band edge of CdSe QD-doped phosphate glass was used as the sensing mechanism. The CdSe-doped glass plate (6 mm thickness) was subjected to temperature changes while it was illuminated through a standard multimode optical fiber using a white light source. The temperature-dependent absorption spectrum was then monitored using a second fiber and an optical spectrum analyzer. A spectral shift of the band edge of 0.12 nm/K was reported. Using a HeNe laser to illuminate the sensor, the authors could translate the spectral shift into a temperature-dependent transmitted intensity, which was linear in the observed range. The operating range was 0-150 °C, due to the limited thermal resistance of the polymeric fiber coating, but could be extended up to 350 °C using special fibers. Because intensity measurements were performed, the application of this sensor in practice would be severely compromised by any source of optical power drift.
Later on, a similar principle was used to provide an oxygen sensor with a temperature reference [139]. In this case, a colored glass filter from Schott (GG455, cut-off at 455 nm) was placed at the distal end of one of the fiber tips of a 50/50 fiber coupler (multimode fiber with 550/600 μm core/cladding diameters). The filter cut-off wavelength was determined by the optical band edge of CdSe QDs and shifted to longer wavelengths with increasing temperature by 0.08 nm/°C. The filter was chosen so as to maximize sensitivity, by making the cut-off wavelength coincide with the spectral emission peak of the excitation source, a blue LED. When the blue radiation crossed the filter at the distal end of the fiber, a temperature dependence of the back-reflected intensity could be observed. In this case, a ratiometric scheme was implemented to avoid errors induced by optical power drift. Because a fiber taper doped with an oxygen-sensitive sol-gel glass was connected to the other fiber output of the coupler, this system enabled simultaneous oxygen and temperature measurements.
In the works mentioned above, the semiconductor nanocrystals had not been passivated and, therefore, the resulting composite materials showed low luminescence at room temperature. In a recent work, the use of luminescent colloidal nanocrystals for optical fiber temperature sensing was reported [140]; differently sized core-shell CdSe/ZnS and CdTe/ZnS QDs, with peak emission wavelengths at 520 nm, 600 nm and 680 nm (QD520, QD600, QD680), were immobilized using a non-hydrolytic sol-gel matrix [141]. The temperature behavior of the different doped sol-gel samples was tested using an optical fiber bundle to excite and collect the nanocrystals' luminescent emission. A blue LED was used to excite all samples, which were interrogated using a CCD spectrometer. The observed response was in accordance with the observations of Walker et al. [91]: as the temperature of the sensing samples was increased, the luminescence intensity decreased, the peak wavelength shifted towards the red and the spectral width increased. In the temperature range investigated, the changes were linear and reversible.
In Figure 6a, the spectral response of a sample doped with QD520 can be observed as its temperature was increased from 14°C to 43°C. From these data, it was estimated that the luminescence intensity decreased with increasing temperature at a rate of -1.6% per °C. Similar behavior was obtained with nanocrystals of other sizes, with intensity decrease rates ranging between -0.7% and -1.6% per °C.
The peak wavelength of all samples, λpeak, increased linearly as the temperature was raised. This is shown in Figure 6b for a sample doped with QD600. The experimental data were well fitted by the Varshni relation, which describes the temperature dependence of the band gap energy in the bulk semiconductor [142]. This indicates that the dominant process behind the temperature dependence of the luminescence peak in QDs is the same as in the bulk semiconductor and, therefore, should be independent of the dot size. In fact, as temperature increases, the band gap energy decreases because the crystal lattice expands and the inter-atomic bonds are weakened. As a consequence, less energy is necessary to excite a charge carrier. On the other hand, it can be calculated that the change in dot size due to thermal expansion has a negligible effect on the confinement energy. In addition, because the energy gaps of both the core and shell materials change, although at different rates, the change in confinement energy is still small when compared to the observed shift [143].
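For reference, the Varshni relation used for such fits has the standard form:

```latex
% Varshni relation for the temperature dependence of the band gap:
%   E_g(0)       : band gap at 0 K
%   \alpha,\beta : empirical constants of the bulk semiconductor
E_g(T) = E_g(0) - \frac{\alpha\,T^{2}}{T + \beta}
% Converting E_g(T) to wavelength, \lambda_{peak} = hc/E_g(T), reproduces
% the observed red shift of the emission peak with increasing temperature.
```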
Comparing the wavelength shift in the different samples, it was concluded that, in all cases, λpeak shifted towards longer wavelengths at a rate of approximately 0.2 nm/°C as the temperature increased.
Both the luminescence intensity variation and the wavelength shift could be used to obtain temperature information. However, simple intensity measurements are prone to error due to optical power source fluctuations, detector drift, changing coupling conditions, etc. Therefore, the system's susceptibility to optical power changes was evaluated. For this purpose, the luminescent intensity response of QD600 to temperature was recorded for three different levels of LED output power (100%, 90% and 80%). Figure 7a clearly shows that the luminescence intensity response depended strongly on the LED output. Additionally, in each individual curve, a non-linearity was observed at lower temperatures, which was ascribed to water condensation at the surface of the sample, which consequently changed the coupling conditions.
A simple detection scheme was implemented in order to take self-referenced temperature measurements. Two signals, S1 and S2, corresponding to two narrow spectral windows on opposite sides of the spectrum, were combined into the normalized quantity SQD = (S1 − S2)/(S1 + S2). Due to the temperature-dependent wavelength shift, the resulting output was proportional to temperature while being independent of the optical power level in the system (shown in Figure 7b). An accuracy of approximately 0.3 °C was estimated by linear regression analysis.
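A minimal numerical sketch of this two-window scheme is given below (the variable names and window positions are our own choices, not taken from [140]); because S1 and S2 scale with the same optical power, the normalized signal cancels common-mode power changes while remaining sensitive to the wavelength shift.

```python
# Sketch of the self-referencing scheme described above: two narrow windows
# on opposite sides of the emission peak give a normalized signal that tracks
# the temperature-induced wavelength shift but cancels common-mode power
# changes. Window positions and the synthetic spectrum are illustrative.
import numpy as np

def self_referenced_signal(wavelengths, spectrum, w1=(575, 585), w2=(615, 625)):
    """Return SQD = (S1 - S2) / (S1 + S2) for two integration windows (nm)."""
    s1 = spectrum[(wavelengths >= w1[0]) & (wavelengths <= w1[1])].sum()
    s2 = spectrum[(wavelengths >= w2[0]) & (wavelengths <= w2[1])].sum()
    return (s1 - s2) / (s1 + s2)

wl = np.linspace(550, 650, 1000)
for power in (1.0, 0.9, 0.8):          # LED output levels as in the experiment
    spec = power * np.exp(-((wl - 600.0) / 15.0) ** 2)   # QD600-like emission
    print(self_referenced_signal(wl, spec))   # identical for all power levels
```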
These results demonstrated that QDs can be used as self-referenced temperature probes. The ratiometric processing is made possible by the presence of a wavelength shift and can be a valuable tool in many applications, since temperature is almost always an important parameter in biological systems.
The same detection scheme was used to simultaneously monitor two samples doped with distinct QDs using the same optical fiber system. Both transmission and reflection topologies were successfully tested using optical fiber bundles (shown in Figure 8), demonstrating the QDs' potential for multiplexed optical fiber sensing. Figure 9a shows the spectral response of two different samples (QD600 and QD680) displaying well-discriminated emission spectra with independent temperature responses. This is further confirmed by the simultaneous response of both samples as their temperatures changed independently (Figure 9b). To the best of our knowledge, this was the first optical fiber multiplexing application using QDs [140].
In a different approach, De Bastida et al. used the LbL technique to coat a tapered optical fiber tip with CdTe QDs of different sizes [144]. The produced fiber probes were then used as luminescent thermometers. The behavior of the dots' luminescence intensity and wavelength was very similar to that reported by other authors, demonstrating the preservation of the QDs' luminescent features. Nevertheless, because no core-shell QDs were used, thermal annealing under a nitrogen atmosphere was carried out in order to increase the resistance of the QDs to oxidation. In another work, the same authors demonstrated the feasibility of coating the inner surface of hollow-core fibers with a film composed of QDs [145]. The fiber inner diameter was 50 μm, demonstrating the ability of the LbL technique to coat morphologically complex surfaces of reduced dimensions. The produced tips were used as a temperature sensor.
While this was the first reported LbL QD sensor, the versatility of the technique should contribute to the feasibility of further biochemical optical fiber QD-based sensing applications. A representative example of the versatility of the QD-LbL combination was reported by Ruan et al., who took advantage of the semiconductor properties of CdSe/ZnS QDs to assemble a photodetector on the surface of an optical fiber tip using the LbL technique [146]. This demonstration allows one to envision future applications where QDs simultaneously fulfill the roles of optical source, detector and sensing layer.
Temperature is of paramount importance in many biochemical sensing applications because the luminescent response of organic dyes always depends on this working parameter. Therefore, by combining nanocrystals with a sensing dye, it is possible to obtain simultaneous information about a chemical parameter and temperature. Due to the ability to tune their optical properties, QDs with no spectral overlap with a particular sensing dye can easily be chosen, allowing this technique to be implemented in a variety of applications.
Recently, the application of QDs in a dual sensing configuration was demonstrated in the context of an oxygen sensor [147]. Temperature has a double effect on the sensor calibration function: it increases the probability of non-radiative transitions, thus decreasing the luminescence yield and the lifetime of the excited state, and it changes the oxygen diffusion coefficient into the sensing membranes. In order to obtain an unequivocal measurement of oxygen concentration by luminescent methods, the simultaneous determination of temperature is required. CdSe/ZnS QDs were used to provide such an independent measurement. The nanocrystals were immobilized in a dense non-hydrolytic SiO2 sol-gel material with low oxygen permeability, presenting an oxygen-independent luminescence output. Also, QDs emitting at 520 nm were used to minimize any spectral overlap with the excitation source (a blue LED, 473 nm) or the oxygen-sensitive luminescence signal (600 nm emission from tris(4,7-diphenyl-1,10-phenanthroline)ruthenium(II) chloride, Ru(dpp)). An optical fiber coupler (600/550 μm silica fiber) was used to excite and interrogate the QD sample together with an oxygen-sensing sample (sol-gel doped with Ru(dpp)).
Each of the sensing elements was individually calibrated, and it was shown that oxygen measurements could be made with an uncertainty of ±2% in oxygen concentration, while for temperature this range was ±1°C. This comparatively poor performance, relative to previous reports using single-parameter sensing, was mainly due to a reduced signal-to-noise ratio (SNR); LED excitation together with the low efficiency of the extrinsic sensing elements contributed to this problem. Nevertheless, dual-parameter sensing could be demonstrated using this scheme. In Figure 10, the response of the two sensors to applied temperature and oxygen changes can be observed. The sensors were subjected to alternating atmospheres of air and pure nitrogen while a temperature step was applied.
Figure 10a shows the oxygen measurements obtained directly from the Ru(dpp) luminescent output. The luminescence intensity followed the changes in oxygen content. However, while at ambient temperature it accurately yielded concentrations of nearly 0% (in nitrogen) and 21% (in air), at the higher temperature the retrieved values were 10% (in nitrogen) and 30% (in air). The increased concentration values resulted from a temperature-induced error.
The QD output, on the other hand, allowed the applied temperature step to be accurately retrieved, showing no sensitivity to the changes in oxygen concentration (Figure 10b). Since the temperature response of Ru(dpp) was known, the temperature information from the QDs could be used to correct the oxygen measurements. Figure 10c shows that the corrected response was accurate throughout the whole temperature range. Therefore, a 10% error in oxygen concentration, induced by a temperature variation of a few degrees, could be fully compensated using the dual sensing scheme.
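The correction logic can be summarized in a short sketch (all coefficients are illustrative placeholders, not the published calibration): the QD channel supplies the temperature, the Ru(dpp) intensity is referred back to the calibration temperature, and a Stern-Volmer calibration then yields oxygen.

```python
# Conceptual sketch of the dual-sensing correction; I0, Ksv, T_cal and c are
# hypothetical calibration constants, not values from [147].

def corrected_oxygen(I, T, I0=1.0, Ksv=0.05, T_cal=20.0, c=-0.02):
    """Oxygen concentration (%) from Ru(dpp) intensity I at QD-derived
    temperature T.

    Assumes: unquenched intensity I0 and Stern-Volmer constant Ksv measured
    at T_cal, plus a linear thermal coefficient c (per deg C) of the dye output.
    """
    I_at_cal = I / (1.0 + c * (T - T_cal))   # undo thermal quenching of the dye
    return (I0 / I_at_cal - 1.0) / Ksv       # invert I0/I = 1 + Ksv * [O2]

print(corrected_oxygen(I=0.5, T=20.0))   # 20% O2 at the calibration temperature
```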
While the results obtained were not impressive from the accuracy point of view, there is much room for improvement. Using intrinsic fiber probes together with laser excitation can increase the SNR and, therefore, the precision of the measurements. The QDs' tunability, on the other hand, should enable the application of this scheme to virtually any luminescence-based sensor.
Other possibilities for QDs to improve the performance of oxygen sensors were recently addressed [148]. By combining a sol-gel prepared SiO2 sample doped with CdSe/ZnS QDs, with low oxygen permeability, with an oxygen-sensing film, it was possible to implement a ratiometric reference scheme. In this particular case, temperature was kept constant and the luminescent output of QD520 was used as an intensity reference. The oxygen sensor was obtained by doping sol-gel glasses with the complex [Ru(bpy)3]2+ (bpy: 2,2′-bipyridine). Only wavelengths shorter than 520 nm are absorbed by these nanocrystals; therefore, their luminescent intensity was affected by the excitation radiation but not by the emission of the ruthenium dye. As a consequence, oxygen had no effect on the QDs' emission, which was proportional only to the exciting optical power. Under these conditions, the ratio of the luminescence emissions of the QDs and the [Ru(bpy)3]2+-doped sensor was proportional to oxygen and independent of the optical source intensity. This is clearly shown in Figure 11a, where the ratiometric output is compared with the raw [Ru(bpy)3]2+ emission while the output of the excitation source is slowly modulated.
Using this scheme, it was possible to compensate intensity changes of the excitation source of up to 80%. In principle, it would be possible to use this scheme even with changing temperatures: temperature would be given by the wavelength shift and used to correct the intensities of both the QDs and the oxygen sensor, and the corrected QD intensity could then be used as an intensity reference. Even so, the scheme does not eliminate the calibration drift due to photobleaching.
Using QDs with an emission wavelength shorter than that of an associated sensing dye can thus provide a variety of luminescence-based biochemical sensors with very stable intensity or temperature references. Conversely, if the QD emission is shifted towards the red, to a wavelength longer than that of the sensing dye, the QDs themselves can be turned into a sensing indicator.
The possibility of obtaining oxygen sensors with different spectral signatures was investigated. CdTe/ZnS QDs, with an emission peak at 680 nm, were immobilized in sol-gel glass, sandwiched together with a Ru(dpp)-doped sol-gel sample, as shown in the inset of Figure 12b, and placed inside a closed chamber. To avoid direct excitation of the QDs by the LED, a long-pass (600 nm cut-off) filter was placed between the two samples. Excitation and detection were performed using fiber bundles.
Figure 12b shows the spectral response of the sandwich arrangement to atmospheres of pure oxygen and pure nitrogen; the response of a film of [Ru(bpy)3]2+ alone is shown in Figure 12a for comparison. The emission peak corresponding to the nanocrystals is clearly visible in Figure 12b. In addition, some features appear in the 650-675 nm spectral region, resembling a secondary peak or a depression in the dye emission, which could be due to wavelength-dependent absorption of the dye radiation by the QDs. The emission of the nanocrystals is clearly oxygen dependent.
A careful comparison between the spectral responses with and without the QDs confirmed an increase in the sensitivity to oxygen in the 700-800 nm spectral region. Maximum enhancement took place at approximately 760 nm, where the quenching efficiency was improved by a factor of 2.4.
These results demonstrate that it is possible to obtain oxygen sensitivity in different spectral regions using QDs. The ideal situation would be to have nanocrystals emitting in the near infrared with no spectral overlap with the ruthenium dyes. Applying this principle with nanocrystals of different emission peaks, combined with an adequate sensing dye, would yield a set of oxygen sensors with different spectral signatures, suitable for wavelength multiplexing. Nanocrystals with longer emission wavelengths would greatly enhance the performance of this sandwich configuration, since their emission spectra would not overlap with that of the sensing dye. In principle, this could be achieved using InAs or PbSe QDs, providing interesting solutions for near-infrared wavelength-multiplexed chemical sensing. In a more sophisticated approach, FRET could be used if the QDs and the indicator were adequately conjugated. However, the major problem of avoiding direct excitation of the QD acceptors by the source would still limit the sensor performance.
In what was probably the first QD-based optical fiber biosensor, a reagentless, regenerable and portable immunosensor was developed by Aoyagi and Kudo [149]. Qdot655™-labeled protein A, purchased from Quantum Dot Corporation, was immobilized on the surface of a 1 mm diameter glass slide. The functionalized glass membrane was then placed on top of a Y fiber bundle, through which excitation and detection of the luminescent signals could be performed using a fluorometer (Figure 13a).
The sensing probe was then used to selectively detect immunoglobulin G (IgG) in standard solutions containing other proteins. It was shown that the binding of IgG to protein A caused quenching of the QDs' luminescence intensity due to FRET between the nanocrystals and the bound sample protein. The quenching rate was proportional to the immunoglobulin concentration in the 0.0 to 6.0 mg/mL range, as can be seen in Figure 13b. The measuring range could be modified by changing parameters such as the amount of protein A immobilized on the glass plate and the diameter of the detecting optical fiber. Although 20 min were necessary for the luminescence to reach a steady state, the authors observed the dynamics of the quenching process and concluded that most of the quenching events took place within the first minute of reaction, allowing for nearly real-time detection. The sensing probes could be regenerated, as the IgG bonds to protein A can be broken in a low-pH solution (pH 2-4). This pioneering work demonstrated that QDs can be used to develop biosensing probes with very attractive characteristics, such as selectivity, reversibility and in-situ operation with no need to add extra reagents to the probed solutions.
Apart from the LbL configurations, all the sensing schemes reported so far were extrinsic probe configurations. Microstructured optical fibers (MOF), such as photonic crystal fibers (PCF) or holey fibers, promise to be excellent platforms for the development of intrinsic fiber probes. In particular, some of these fibers rely on an array of air-filled tubular structures surrounding the core to provide guiding with a high index contrast. Besides giving the fiber superior guiding properties, these holes can be filled with doping materials, which can then be evanescently excited. These holey structures have a great potential for sensing applications that is just beginning to be unveiled.
Concerning the combination of MOFs with QDs, some reports have appeared in which the nanocrystals were used to dope the fiber's voids. While the aim of these particular works was to use QDs as a gain medium for optical fiber lasers, the techniques developed can be applied to sensing devices. Meissner et al. recently studied the behavior of CdSe-ZnS nanocrystals entrained in an array of 14.6 μm holes surrounding the 12.7 μm core of a MOF [150]. Pieces of approximately 10 cm length were immersed in a colloidal suspension of 573 nm emitting nanocrystals in heptane. A 5 min immersion was enough to fill the holey structure with the QD solution by capillarity. The doped fibers were then pumped with a 488 nm argon line and probed with a 594 nm HeNe laser propagating through the fiber core.
Using this method, the authors were able to excite the QDs' luminescence, which was then coupled into the fiber core and guided. In addition, it was observed that the amount of probe light emerging from the end of the fiber increased when pump and probe were both present. The authors suggested that the extra light appearing in the fiber core, as well as in the outer solid cladding region, could be caused by optical gain. This claim, however, was a subject of debate because, typically, high-rate pulsed pumping is necessary for QD gain to take place [151, 152].
In short-term observations, the QDs' emission was shown to be preserved; however, slow degradation of the entrapped solution in the long term is a concern. In a more fundamental approach, Yu and coworkers developed a versatile method for doping polymer MOFs [153]. The authors were able to dope the core of a micro-structured plastic optical fiber (POF) with CdSe-ZnS nanocrystals. This was achieved by acting prior to the drawing process: a polymeric rod doped with QDs was inserted into the central hole of an intermediate preform with an 11 mm external diameter. After the drawing process, a suspended-core fiber with outer and core diameters of 400 μm and 130 μm, respectively, was obtained. Because the QDs were in the fiber core, more efficient excitation and guiding could be obtained. The authors suggested that, due to the relatively low processing temperatures employed (∼ 200 °C), this method will enable the incorporation of both organic and inorganic materials. In this context, although the primary goal of the authors was the development of QD-based optical sources and switches, this method was an important step towards the feasibility of MOF-based biochemical sensors.
The possibility of implementing intrinsic fiber sensors using PCF was investigated using organic luminescent dyes and QDs as doping agents [154]. The fiber used was an endlessly single mode PCF type ESM-12-01 with 54 microchannels of 6.2 μm diameter separated by 8.0 μm spacing. The holey structure of small pieces of PCF (typically 20 mm) were filled, by capillarity, with different quantum dots, dissolved in toluene or mesitylene at micromolar concentrations.
The fluorescence emission of the QDs inside the PCF was then observed using a microscope with adequate filtering to reject the excitation light at 365 nm. The cross section of the doped fiber is shown in Figure 14A where the QDs green luminescence is clearly visible.
In order to evaluate the possibility of detecting free-radical species, the doped fiber tips were exposed to a 0.5 M toluene solution of TEMPO (2,2,6,6-tetramethylpiperidine-N-oxide free radical). This oxidative species was shown to strongly quench the luminescence of the QDs in solution (Figure 14C). The addition of the TEMPO solution to the fiber tip led to an almost immediate quenching of the observed luminescence in the immersed tip, while the opposite end (20 mm away) remained fluorescent with little immediate change of emission intensity. As shown in Figure 14B, the fluorescence intensity of the sensing tip was then almost totally recovered after a few minutes, remaining practically unchanged for at least a week. The recovery of the luminescence was attributed to the highly non-linear quenching of the QDs' emission by paramagnetic species, combined with the diffusion/dilution of the analyte into the fiber. This was confirmed by observing that larger QDs, emitting in the red, displayed a more linear quenching behavior because of their lower diffusion coefficient; as a result, the recovery of luminescence was very poor and much slower when larger nanoparticles were used.
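Quenching of this kind is commonly described by a Stern-Volmer relation, I0/I = 1 + Ksv[Q]; the strongly non-linear response mentioned above corresponds to deviations from this simple form. A minimal sketch, with an arbitrary illustrative Ksv rather than a value from [154]:

```python
import numpy as np

KSV = 12.0  # illustrative Stern-Volmer constant, L/mol (not a value from [154])

def relative_intensity(q_mol_per_l):
    """I/I0 for a given quencher concentration, ideal Stern-Volmer behaviour."""
    return 1.0 / (1.0 + KSV * q_mol_per_l)

for q in (0.0, 0.1, 0.25, 0.5):
    print(f"[TEMPO] = {q:.2f} M -> I/I0 = {relative_intensity(q):.3f}")
```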
The authors suggested that QDs could alternatively be incorporated in a polymer matrix coating the inner surface of the microchannels. This would enable potentially reversible sensing tips that could probe different sample solutions without losing the indicator. Although the potential of MOFs for biosensors is being explored by different authors, a fundamental difficulty arises from the need for the analyte solution to flow through the fiber microstructure, which precludes fusing the MOF tips into standard fiber interrogation systems. Nevertheless, recent works have demonstrated the possibility of incorporating MOFs into microfluidic circuits for use in biochip applications [155, 156].
A summary of the applications of QDs in optical fiber sensors is given in Table 2.
Although, to date, very few applications of nanocrystals in fiber sensors have been reported, some of the works presented were key steps towards the development of new sensing tools. QDs immobilized in different matrices were used in a diversity of applications, from thermometry to biosensing, demonstrating that the unique features of QDs, together with their physicochemical versatility, will soon introduce a new class of advanced analytical tools.
Applications of QDs in planar structures
Planar structures using waveguide or microarray configurations are very appealing platforms that, when combined with microfluidics and electronics, can form Lab-on-a-Chip (LOC) devices. LOC systems offer the possibility of carrying out different functions, such as sample preparation, concentration, and detection, in a single miniaturized platform with strong potential for batch analysis. While LOC technologies have seen some impressive progress [157-159], the application of QDs in this field is still limited. Nevertheless, QD properties are very appealing for such applications, particularly their high photostability, multicolor ability, and increased equivalent Stokes shift, as these characteristics allow high-density detection with increased noise rejection.
Besides the immobilization methods previously described, some patterning and manipulation techniques still need to be developed for QD-based LOC devices to become feasible.
In a very interesting approach, QD-encoded polymer microbeads were manipulated through the use of a microfluidic lab-on-a-disk structure [160]. The combination of centrifugal forces with microfluidic channels and a geometrical barrier made it possible to aggregate the color-encoded beads in a monolayer within a disk-based detection chamber. A parallel read-out scheme could then be implemented using a color CCD camera. The automated localization, color identification, and fluorescence detection aimed at color-multiplexed fluorescence immunoassays was made possible by dedicated image-processing software. The authors could successfully identify three distinct encoded microbead types, using two types of QDs and an organic luminophore, thus demonstrating multiplexing ability. In a practical situation, each color code can identify polymer beads functionalized with different antibodies, allowing for multiplexed immunodetection using standard fluorescence immunoassay techniques. The viability of the detection system was successfully demonstrated by performing hepatitis A and tetanus assays on the microfluidic lab-on-a-disk platform.
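The internals of the cited image-processing software are not described here; the sketch below shows one plausible ingredient, nearest-centroid color classification of detected beads. The RGB signatures and labels are invented for illustration.

```python
import numpy as np

# Hypothetical mean RGB signatures of the three encoding colors
# (two QD types plus an organic luminophore); values are made up.
CENTROIDS = {
    "QD_green": np.array([40, 200, 60]),
    "QD_red":   np.array([210, 40, 50]),
    "dye_blue": np.array([50, 60, 220]),
}

def classify_bead(mean_rgb):
    """Assign a detected bead to the nearest color centroid (Euclidean distance)."""
    mean_rgb = np.asarray(mean_rgb, dtype=float)
    return min(CENTROIDS, key=lambda name: np.linalg.norm(mean_rgb - CENTROIDS[name]))

print(classify_bead([45, 190, 70]))   # -> QD_green
```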
In order to avoid systems with moving parts and readout schemes requiring complex signal processing, the nanocrystals should be immobilized on a solid substrate. In such cases, patterning strategies are necessary for the implementation of multiparameter devices.
Lithographic techniques have been adapted to implement spatially separated patterns of different colloidal nanocrystals on an amino-functionalized substrate [161]. By using masks and exposure to UV light, different sites could be bound to the substrate surface in a selective mode, allowing well-defined pixelated microarrays to be obtained. Similar patterning could be obtained using contact masking and ion implantation techniques [162]. Also, by functionalizing the nanocrystal surfaces with either hydrophobic or hydrophilic ligands, as discussed previously, they can bind to selected sites on the substrate, forming dual-color arrays [163]. More recently, using the sol-gel technique, QD patterns were formed by UV-assisted photosynthesis [164]. In this method a precursor solution was spin-coated on a substrate and illuminated by a patterned UV beam. Because the photochemical reactions only took place at the illuminated sites, an ordered array of QD pixels with diameters of a few microns could be formed. By reshaping the curing UV beam, patterns could be made in a range of sizes up to hundreds of microns.
In most of these techniques very bright luminescent patterns could be obtained, indicating the preservation of the nanocrystals' properties after immobilization. In a recent approach, moreover, the nanocrystals' luminescent emission could be strongly enhanced by coating them over substrates on which highly ordered triangular gold nanopatterns (typical dimensions 200 nm) had been fabricated by electron beam lithography [165]. The enhancement is caused by the interaction between the surface plasmon resonances of the metallic structure and the luminescent radiation of the nanocrystals. Surface plasmons are a promising tool for LOC technologies because they enable control of the quantum yield and the radiation emission pattern of luminescent indicators.
All the described techniques introduce interesting possibilities for using QDs in microarray-based configurations. However, sensing applications using such devices are yet to be demonstrated. Nevertheless, an important step towards the realization of QD-based microarray biosensors was reported by Sapsford et al. [166]. The authors used glass slides coated with a monolayer of neutravidin as the template. QDs functionalized with maltose binding protein (MBP) and avidin coordinated to their surface were then attached to the glass slides in discrete patterns using an intermediary bridge of biotinylated MBP or antibody linkers. This method made it possible to control the surface location and concentration of the QD-protein-based structures. A six-channel patterning PDMS flow cell was used to define waveguide patterns with the biotin-labeled proteins. Exposure of these biotin-protein-patterned waveguides to MBP-QD-avidin then allowed the waveguides to be functionalized with the QD probes. Surface FRET events were demonstrated using the fabricated arrays and CCD imaging, indicating the feasibility of implementing surface-immobilized QD biosensors.
The use of QDs as a gain material in the implementation of laser sources is a highly studied subject. Recently, the possibility of using QD-doped cavities as highly sensitive sensors was discussed by Somers et al. [17]. Distributed feedback (DFB) and 'whispering gallery mode' spherical lasing platforms doped with environmentally responsive nanocrystals can form extremely sensitive biochemical sensors due to the non-linear behavior of the lasing process. In particular, the FRET mechanisms discussed before may be used to introduce analyte-induced modulation of the laser gain coefficient.
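For orientation, analyte-induced modulation of this kind would act through the standard Förster transfer efficiency, E = 1/(1 + (r/R0)^6), which diverts donor excitations away from the lasing transition. A small sketch, assuming a Förster radius typical of QD-dye pairs (the value is not taken from [17]):

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """Forster transfer efficiency for donor-acceptor distance r_nm (nm);
    r0_nm is an assumed Forster radius typical of QD-dye pairs."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Excitations lost to FRET are unavailable to the lasing transition,
# so the available gain scales roughly with (1 - E).
for r in (2.0, 5.0, 8.0):
    e = fret_efficiency(r)
    print(f"r = {r:.0f} nm: E = {e:.3f}, residual gain fraction ~ {1 - e:.3f}")
```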
A striking example of the variety of bio-applications where QDs can introduce significant improvements is the case where CdSe/CdS nanocrystals were used to label a cantilever post array for studying cellular microforces [167]. Passive bed-of-nails (BoN) arrays are used to study the forces exerted on the cell surface. The cells are cultured on top of an array of micropillars (Figure 15B) and, since each micropillar behaves like a miniature cantilever, the deformation induced by the forces exerted on the cell walls can be optically monitored. This can be achieved by standard optical microscopy; however, because the refractive index of a cell is close to that of the surrounding aqueous medium, it is difficult to identify the borders of the cell under bright-field microscopy. Techniques to increase contrast include differential interference contrast microscopy and confocal microscopy. Alternatively, the posts can be labeled with luminescent proteins; proteins, however, tend to be dissolved in buffers or ingested by cells. In this context, the use of QDs made it possible to obtain much better contrast and longer experimental observation times without photobleaching, thus improving the tracking of the posts.
The tops of the posts were coated with a very thin layer of PDMS in which a suspension of quantum dots had previously been mixed. The luminescent BoN was then used to obtain a map of the traction forces acting on a human airway smooth muscle (HASM) cell (Figure 15A).
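Converting tracked post deflections into traction forces follows from elementary beam theory: each circular post of diameter d and height L acts as a cantilever of stiffness k = 3EI/L^3 with I = πd^4/64. The sketch below uses the post geometry quoted in the Figure 15 caption; the PDMS Young's modulus (~2 MPa) is an assumed typical value, not one taken from [167].

```python
import math

def post_stiffness(E_pa, d_m, L_m):
    """Cantilever bending stiffness k = 3*E*I/L^3 for a circular post,
    with second moment of area I = pi*d^4/64 (small deflections)."""
    I = math.pi * d_m ** 4 / 64.0
    return 3.0 * E_pa * I / L_m ** 3

# Geometry from the Figure 15 caption (2 um diameter, 7 um tall posts);
# the PDMS modulus (~2 MPa) is an assumed typical value.
k = post_stiffness(2.0e6, 2.0e-6, 7.0e-6)   # N/m
delta = 0.5e-6                               # an example 0.5 um tracked deflection
print(f"k = {k:.3g} N/m  ->  F = k*delta = {k * delta * 1e9:.1f} nN")
```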
Even though virtually no sensing configuration using QDs in planar platforms has been reported so far, the fundamental techniques for the implementation of such devices are presently well developed. In this context, a burst in the use of QD-based devices in biosensing applications is to be expected soon.
Concluding remarks
Quantum dots are the basis of well-established tools in biomedical imaging applications. In addition, their use in sensing configurations has seen considerable progress, with a wide range of chemical and biosensing configurations already demonstrated. However, the application of these sensing principles in practical devices requires the immobilization of functionalized QDs in standard sensing platforms, such as optical fibers and integrated optical chips. Although there have been few reports on the use of QDs in optical fiber or LOC configurations, some important steps were taken towards the feasibility of these technologies.
In particular, several versatile immobilization techniques are currently available, ranging from LbL to sol-gel and polymer encapsulation, that preserve the unique optical properties of QDs. In addition, the use of QD-based sensors immobilized in such membranes has been reported in many different applications, ranging from multi-ion detection to FRET-based immunodetection. A few but representative examples of such sensors were demonstrated at the tip of an optical fiber or on a planar platform. A strong research effort is presently dedicated to QD technology, which will contribute to the increasing quality of the available semiconductor nanoparticles, in particular increasing their photostability and limiting their potential toxicity. The same is true for the sensitivity and selectivity of many functionalization strategies.
In this context, a considerable increase in the number of solid-platform-based QD sensors is to be expected soon. In particular, the convergence of many of the reported techniques will make it possible to implement high-performance instruments: using LbL, the inner surface of microstructured optical fibers can be functionalized with highly selective QD-antibody conjugates, allowing sensitive FRET detection of different species in simultaneous assays. Coupling such devices with microfluidic capabilities will enable the design of high-throughput sensing tools. QDs will also play different roles, either as optical sources, as detectors, or as sensors. Ultimately, QD sources themselves can be turned into extremely sensitive sensing devices. Considering current developments, it is expected that in the near future QD-based devices will play an important role in a new generation of nanotechnological instruments.
References and Notes
[1] Liang, S.; Pierce, D. T.; Amiot, C.; Zhao, X. Photoactive nanomaterials for sensing trace analytes in biological samples. Synthesis and Reactivity in Inorganic Metal-Organic and Nano-Metal Chemistry 2005, 35(9), 661-668.
[2] Ozkan, M. Quantum dots and other nanoparticles: what can they offer to drug discovery? Drug Discovery Today 2004, 9(24), 1065-1071.
[3] Willner, I.; Basnar, B.; Willner, B. Nanoparticle-enzyme hybrid systems for nanobiotechnology. FEBS Journal 2007, 274(2), 302-309.
[4] Penn, S. G.; He, L.; Natan, M. J. Nanoparticles for bioanalysis. Current Opinion in Chemical Biology 2003, 7(5), 609-615.
[5] Kurner, J. M.; Klimant, I.; Krause, C.; Pringsheim, E.; Wolfbeis, O. S. A new type of phosphorescent nanospheres for use in advanced time-resolved multiplexed bioassays. Analytical Biochemistry 2001, 297, 32-41.
[6] Teolato, P. Silica nanoparticles for fluorescence sensing of Zn(II): exploring the covalent strategy. Chemistry - A European Journal 2007, 13(8), 2238-2245.
[7] Bruchez, M., Jr. Semiconductor nanocrystals as fluorescent biological labels. Science 1998, 281, 2013-2016.
[8] Mattoussi, H. Luminescent quantum dot-bioconjugates in immunoassays, FRET, biosensing, and imaging applications. JALA 2004, 9, 28-32.
[9] Zhang, Y. Using cadmium telluride quantum dots as a proton flux sensor and applying to detect H9 avian influenza virus. Analytical Biochemistry 2007, 364(2), 122-127.
[10] Trindade, T.; O'Brien, P.; Pickett, N. L. Nanocrystalline semiconductors: synthesis, properties, and perspectives. Chemistry of Materials 2001, 13(11), 3843-3858.
[11] Murphy, C. J. Optical sensing with quantum dots. Analytical Chemistry 2002, 74, 520A-526A.
[12] Jaiswal, J. K.; Simon, S. M. Potentials and pitfalls of fluorescent quantum dots for biological imaging. Trends in Cell Biology 2004, 14(9), 497-504.
[13] Riegler, J.; Nann, T. Application of luminescent nanocrystals as labels for biological molecules. Anal. Bioanal. Chem. 2004, 379, 913-919.
[14] Costa-Fernandez, J. M.; Pereiro, R.; Sanz-Medel, A. The use of luminescent quantum dots for optical sensing. TrAC - Trends in Analytical Chemistry 2006, 25(3), 207-218.
[15] Clapp, A. R.; Medintz, I. L.; Mattoussi, H. Forster resonance energy transfer investigations using quantum-dot fluorophores. ChemPhysChem 2006, 7(1), 47-57.
[16] Wang, X.; Ruedas-Rama, M. J.; Hall, E. A. H. The emerging use of quantum dots in analysis. Analytical Letters 2007, 40, 1497-1520.
[17] Somers, R. C.; Bawendi, M. G.; Nocera, D. G. CdSe nanocrystal based chem-/bio-sensors. Chem. Soc. Rev. 2007, 36, 579-591.
[18] Klimov, V. I. Nanocrystal quantum dots. Los Alamos Science 2003, 28, 214-220.
[19] Grundmann, M. The present status of quantum dot lasers. Physica E 2000, 5, 167-184.
[20] Skolnick, M. S.; Mowbray, D. J. Self-assembled semiconductor quantum dots: fundamental physics and device applications. Annual Review of Materials Research 2004, 34, 181-218.
[21] Scherer, A.; Craighead, H. G. Fabrication of small laterally patterned multiple quantum wells. Applied Physics Letters 1986, 49(19), 1284-1286.
[22] Fukui, T. GaAs tetrahedral quantum dot structures fabricated using selective area metalorganic chemical vapor deposition. Applied Physics Letters 1991, 58(18), 2018-2020.
[23] Gaponenko, S. V. Optical Properties of Semiconductor Nanocrystals; Cambridge Studies in Modern Optics, Vol. 23 (Knight, P. L.; Miller, A., Eds.); Cambridge University Press: Cambridge, 1998.
[24] Bawendi, M. G.; Steigerwald, M. L.; Brus, L. E. The quantum mechanics of larger semiconductor clusters ("quantum dots"). Annual Review of Physical Chemistry 1990, 41, 477-496.
[25] Hines, M. A.; Guyot-Sionnest, P. Synthesis and characterization of strongly luminescing ZnS-capped CdSe nanocrystals. J. Phys. Chem. 1996, 100, 468-471.
[26] Dabbousi, B. O. (CdSe)ZnS core-shell quantum dots: synthesis and characterization of a size series of highly luminescent nanocrystallites. Journal of Physical Chemistry B 1997, 101(46), 9463-9475.
[27] Xie, R. Synthesis and characterization of highly luminescent CdSe-core CdS/Zn0.5Cd0.5S/ZnS multishell nanocrystals. J. Am. Chem. Soc. 2005, 127, 7480-7488.
[28] Korsunska, N. E. Reversible and non-reversible photo-enhanced luminescence in CdSe/ZnS quantum dots. Semiconductor Science and Technology 2005, 20(8), 876-881.
[29] Zhelev, Z. Enhancement of the photoluminescence of CdSe quantum dots during long-term UV-irradiation: privilege or fault in life science research? J. Photochem. Photobiol. B 2004, 75(1-2), 99-105.
[30] Verma, P.; Irmer, G.; Monecke, J. Laser power dependence of the photoluminescence from CdSxSe1-x nanoparticles in glass. Journal of Physics: Condensed Matter 2000, 12(6), 1097-1110.
[31] Brus, L. E. A simple model for the ionization potential, electron affinity, and aqueous redox potentials of small semiconductor crystallites. Journal of Chemical Physics 1983, 79(11), 5566-5571.
[32] Murray, C. B.; Kagan, C. R.; Bawendi, M. G. Synthesis and characterization of monodisperse nanocrystals and close-packed nanocrystal assemblies. Annual Review of Materials Science 2000, 30, 545-610.
[33] Murray, C. B.; Norris, D. J.; Bawendi, M. G. Synthesis and characterization of nearly monodisperse CdE (E = S, Se, Te) semiconductor nanocrystallites. Journal of the American Chemical Society 1993, 115(19), 8706-8715.
[34] Cozzoli, P. D. Shape and phase control of colloidal ZnSe nanocrystals. Chemistry of Materials 2005, 17(6), 1296-1306.
[35] Trindade, T.; O'Brien, P. A single source approach to the synthesis of CdSe nanocrystallites. Advanced Materials 1996, 8(2), 161-163.
[36] Trindade, T.; O'Brien, P.; Zhang, X. M. Synthesis of CdS and CdSe nanocrystallites using a novel single-molecule precursors approach. Chemistry of Materials 1997, 9(2), 523-530.
[37] Nair, P. S. Cadmium ethylxanthate: a novel single-source precursor for the preparation of CdS nanoparticles. Journal of Materials Chemistry 2002, 12(9), 2722-2725.
[38] Malik, M. A.; O'Brien, P.; Revaprasadu, N. A simple route to the synthesis of core/shell nanoparticles of chalcogenides. Chemistry of Materials 2002, 14(5), 2004-2010.
[39] Murcia, M. J. Facile sonochemical synthesis of highly luminescent ZnS-shelled CdSe quantum dots. Chemistry of Materials 2006, 18(9), 2219-2225.
[40] Peng, Z. A.; Peng, X. G. Formation of high-quality CdTe, CdSe, and CdS nanocrystals using CdO as precursor. Journal of the American Chemical Society 2001, 123(1), 183-184.
[41] Peng, X. G. Green chemical approaches toward high-quality semiconductor nanocrystals. Chemistry - A European Journal 2002, 8(2), 335-339.
[42] Rogach, A. Colloidally prepared HgTe nanocrystals with strong room-temperature infrared luminescence. Advanced Materials 1999, 11(7), 552-555.
[43] Vossmeyer, T. CdS nanoclusters: synthesis, characterization, size-dependent oscillator strength, temperature shift of the excitonic transition energy, and reversible absorbency shift. Journal of Physical Chemistry 1994, 98(31), 7665-7673.
[44] Rogach, A. L. Synthesis and characterization of a size series of extremely small thiol-stabilized CdSe nanocrystals. Journal of Physical Chemistry B 1999, 103(16), 3065-3069.
[45] Rogach, A. L. Synthesis and characterization of thiol-stabilized CdTe nanocrystals. Berichte der Bunsen-Gesellschaft - Physical Chemistry Chemical Physics 1996, 100(11), 1772-1778.
[46] Rogach, A. L. Synthesis, morphology and optical properties of thiol-stabilized CdTe nanoclusters in aqueous solution. Berichte der Bunsen-Gesellschaft - Physical Chemistry Chemical Physics 1997, 101(11), 1668-1670.
[47] Gao, M. Y. Strongly photoluminescent CdTe nanocrystals by proper surface modification. Journal of Physical Chemistry B 1998, 102(43), 8360-8363.
[48] Green, M. A facile route to CdTe nanoparticles and their use in bio-labelling. Journal of Materials Chemistry 2007, 17(19), 1989-1994.
[49] Aldana, J.; Wang, Y. A.; Peng, X. G. Photochemical instability of CdSe nanocrystals coated by hydrophilic thiols. Journal of the American Chemical Society 2001, 123(36), 8844-8850.
[50] Liu, T. C. Temperature-dependent photoluminescence of water-soluble quantum dots for a bioprobe. Analytica Chimica Acta 2006, 559(1), 120-123.
[51] Liu, Y. S. pH-sensitive photoluminescence of CdSe/ZnSe/ZnS quantum dots in human ovarian cancer cells. Journal of Physical Chemistry C 2007, 111(7), 2872-2878.
[52] Huang, C. P. Plate-based biochemical assay using quantum dots as a fluorescent labeling agent. Sensors and Actuators B: Chemical 2005, 108(1-2), 713-720.
[53] Mattoussi, H. Self-assembly of CdSe-ZnS quantum dot bioconjugates using an engineered recombinant protein. Journal of the American Chemical Society 2000, 122(49), 12142-12150.
[54] Uyeda, H. T. Synthesis of compact multidentate ligands to prepare stable hydrophilic quantum dot fluorophores. Journal of the American Chemical Society 2005, 127(11), 3870-3878.
[55] Kim, S.; Bawendi, M. G. Oligomeric ligands for luminescent and stable nanocrystal quantum dots. Journal of the American Chemical Society 2003, 125(48), 14652-14653.
[56] Guo, W. H. Luminescent CdSe/CdS core/shell nanocrystals in dendron boxes: superior chemical, photochemical and thermal stability. Journal of the American Chemical Society 2003, 125(13), 3901-3909.
[57] Guo, W. Z. Conjugation chemistry and bioapplications of semiconductor box nanocrystals prepared via dendrimer bridging. Chemistry of Materials 2003, 15(16), 3125-3133.
[58] Gerion, D. Synthesis and properties of biocompatible water-soluble silica-coated CdSe/ZnS semiconductor quantum dots. Journal of Physical Chemistry B 2001, 105(37), 8861-8871.
[59] Iler, R. K. The Chemistry of Silica: Solubility, Polymerization, Colloid and Surface Properties and Biochemistry of Silica; John Wiley & Sons: New York, 1979.
[60] Wu, X. Y. Immunofluorescent labeling of cancer marker Her2 and other cellular targets with semiconductor quantum dots. Nature Biotechnology 2003, 21(1), 41-46.
[61] Pellegrino, T. Hydrophobic nanocrystals coated with an amphiphilic polymer shell: a general route to water soluble nanocrystals. Nano Letters 2004, 4(4), 703-707.
[62] Gao, X. H. In vivo cancer targeting and imaging with semiconductor quantum dots. Nature Biotechnology 2004, 22(8), 969-976.
[63] Dubertret, B. In vivo imaging of quantum dots encapsulated in phospholipid micelles. Science 2002, 298(5599), 1759-1762.
[64] Mattheakis, L. C. Optical coding of mammalian cells using semiconductor quantum dots. Analytical Biochemistry 2004, 327(2), 200-208.
[65] Ballou, B. Noninvasive imaging of quantum dots in mice. Bioconjugate Chemistry 2004, 15(1), 79-86.
[66] Jin, T. Calixarene-coated water-soluble CdSe-ZnS semiconductor quantum dots that are highly fluorescent and stable in aqueous solution. Chemical Communications 2005, (22), 2829-2831.
[67] Jin, T. Amphiphilic p-sulfonatocalix[4]arene-coated CdSe/ZnS quantum dots for the optical detection of the neurotransmitter acetylcholine. Chemical Communications 2005, (34), 4300-4302.
[68] Liu, J. A. Use of ester-terminated polyamidoamine dendrimers for stabilizing quantum dots in aqueous solutions. Small 2006, 2(8-9), 999-1002.
[69] Erdem, B. Encapsulation of inorganic particles via miniemulsion polymerization. III. Characterization of encapsulation. Journal of Polymer Science Part A: Polymer Chemistry 2000, 38(24), 4441-4450.
[70] Erdem, B. Encapsulation of inorganic particles via miniemulsion polymerization. II. Preparation and characterization of styrene miniemulsion droplets containing TiO2 particles. Journal of Polymer Science Part A: Polymer Chemistry 2000, 38(24), 4431-4440.
[71] Erdem, B. Encapsulation of inorganic particles via miniemulsion polymerization. I. Dispersion of titanium dioxide particles in organic media using OLOA 370 as stabilizer. Journal of Polymer Science Part A: Polymer Chemistry 2000, 38(24), 4419-4430.
[72] Tiarks, F.; Landfester, K.; Antonietti, M. Silica nanoparticles as surfactants and fillers for latexes made by miniemulsion polymerization. Langmuir 2001, 17(19), 5775-5780.
[73] Tiarks, F.; Landfester, K.; Antonietti, M. Encapsulation of carbon black by miniemulsion polymerization. Macromolecular Chemistry and Physics 2001, 202(1), 51-60.
[74] Esteves, A. C. C. Polymer encapsulation of CdE (E = S, Se) quantum dot ensembles via in-situ radical polymerization in miniemulsion. Journal of Nanoscience and Nanotechnology 2005, 5(5), 766-771.
[75] Joumaa, N. Synthesis of quantum dot-tagged submicrometer polystyrene particles by miniemulsion polymerization. Langmuir 2006, 22(4), 1810-1816.
[76] Martins, M. A. Biofunctionalized ferromagnetic CoPt3/polymer nanocomposites. Nanotechnology 2007, 18(21), 5609-5615.
[77] Fleischhaker, F.; Zentel, R. Photonic crystals from core-shell colloids with incorporated highly fluorescent quantum dots. Chemistry of Materials 2005, 17(6), 1346-1351.
[78] Peres, M. A green-emitting CdSe/poly(butyl acrylate) nanocomposite. Nanotechnology 2005, 16(9), 1969-1973.
[79] Zhu, M. Q. Surface modification and functionalization of semiconductor quantum dots through reactive coating of silanes in toluene. Journal of Materials Chemistry 2007, 17(8), 800-805.
[80] Ohno, K. Fabrication of ordered arrays of gold nanoparticles coated with high-density polymer brushes. Angewandte Chemie International Edition 2003, 42(24), 2751-2754.
[81] Bao, Z. Y.; Bruening, M. L.; Baker, G. L. Rapid growth of polymer brushes from immobilized initiators. Journal of the American Chemical Society 2006, 128(28), 9056-9060.
[82] Ohno, K. Synthesis of monodisperse silica particles coated with well-defined, high-density polymer brushes by surface-initiated atom transfer radical polymerization. Macromolecules 2005, 38(6), 2137-2142.
[83] Zhao, H. Y.; Kang, X. L.; Liu, L. Comb-coil polymer brushes on the surface of silica nanoparticles. Macromolecules 2005, 38(26), 10619-10622.
[84] Marutani, E. Surface-initiated atom transfer radical polymerization of methyl methacrylate on magnetite nanoparticles. Polymer 2004, 45(7), 2231-2235.
[85] Skaff, H.; Emrick, T. Reversible addition fragmentation chain transfer (RAFT) polymerization from unprotected cadmium selenide nanoparticles. Angewandte Chemie International Edition 2004, 43(40), 5383-5386.
[86] Sill, K.; Emrick, T. Nitroxide-mediated radical polymerization from CdSe nanoparticles. Chemistry of Materials 2004, 16(7), 1240-1243.
[87] Esteves, A. C. C. Polymer grafting from CdS quantum dots via AGET ATRP in miniemulsion. Small 2007, 3(7), 1230-1236.
[88] Wu, Y.-H.; Arai, K.; Yao, T. Temperature dependence of the photoluminescence of ZnSe/ZnS quantum-dot structures. Physical Review B 1996, 53(16), R10485-R10488.
[89] Labeau, O.; Tamarat, P.; Lounis, B. Temperature dependence of the luminescence lifetime of single CdSe/ZnS quantum dots. Physical Review Letters 2003, 90(25), 257404.
[90] Biju, V. Temperature-sensitive photoluminescence of CdSe quantum dot clusters. Journal of Physical Chemistry B 2005, 109(29), 13899-13905.
[91] Walker, G. W. Quantum-dot optical temperature probes. Applied Physics Letters 2003, 83(17), 3555-3557.
[92] Chen, Y.; Rosenzweig, Z. Luminescent CdS quantum dots as selective ion probes. Analytical Chemistry 2002, 74(19), 5132-5138.
[93] Xie, H.-Y. Luminescent CdSe-ZnS quantum dots as selective Cu2+ probe. Spectrochimica Acta Part A 2004, 60, 2527-2530.
[94] Jin, W. J. Surface-modified CdSe quantum dots as luminescent probes for cyanide determination. Analytica Chimica Acta 2004, 522, 1-8.
[95] Shi, G. H. Fluorescence quenching of CdSe quantum dots by nitroaromatic explosives and their relative compounds. Spectrochimica Acta Part A 2007.
[96] Albert, K. J. Field-deployable sniffer for 2,4-dinitrotoluene detection. Environ. Sci. Technol. 2001, 35, 3193-3200.
[97] Nazzal, A. Y. Photoactivated CdSe nanocrystals as nanosensors for gases. Nano Lett. 2003, 3(6), 819-822.
[98] Potyrailo, R. A.; Leach, A. M. Selective gas nanosensors with multisize CdSe nanocrystal/polymer composite films and dynamic pattern recognition. Applied Physics Letters 2006, 88, 134110.
[99] Tomasulo, M. pH-sensitive ligand for luminescent quantum dots. Langmuir 2006, 22, 10284-10290.
[100] Snee, P. T. A ratiometric CdSe/ZnS nanocrystal pH sensor. J. Am. Chem. Soc. 2006, 128, 13320-13321.
[101] Ruedas-Rama, M. J.; Wang, X.; Hall, E. A. H. A multi-ion particle sensor. Chem. Commun. 2007, 1544-1546.
[102] Xu, C.; Bakker, E. Multicolor quantum dot encoding for polymeric particle-based optical ion sensors. Anal. Chem. 2007, 79, 3716-3723.
[103] Lin, C. I. Molecularly imprinted polymeric film on semiconductor nanoparticles: analyte detection by quantum dot photoluminescence. Journal of Chromatography A 2004, 1027, 259-262.
[104] Alivisatos, P. The use of nanocrystals in biological detection. Nature Biotechnology 2004, 22(1), 47-52.
[105] Evident Technologies. Cited 2007. Available from:
[106] Quantum Dot Corporation. Cited 2007. Available from:
[107] Chan, W. C. W.; Nie, S. Quantum dot bioconjugates for ultrasensitive nonisotopic detection. Science 1998, 281, 2016-2018.
[108] van Sark, W. G. J. H. M. Blueing, bleaching, and blinking of single CdSe/ZnS quantum dots. ChemPhysChem 2002, 3(10), 871-879.
[109] Jaiswal, J. K. Long-term multiple color imaging of live cells using quantum dot bioconjugates. Nature Biotechnology 2003, 21(1), 47-51.
[110] Gao, X. In vivo molecular and cellular imaging with quantum dots. Current Opinion in Biotechnology 2005, 16, 63-72.
[111] Smith, A. M. Multicolor quantum dots for molecular diagnostics of cancer. Expert Rev. Mol. Diagn. 2006.
[112] Smith, J. D. The use of quantum dots for analysis of chick CAM vasculature. Microvascular Research 2007, 73, 75-83.
[113] Murphy, C.; Coffer, J. Quantum dots: a primer. Appl. Spectrosc. 2002, 56(1), 16A-27A.
[114] Liang, J. Study on DNA damage induced by CdSe quantum dots using nucleic acid molecular "light switches" as probe. Talanta 2007, 71, 1675-1678.
[115] Guo, G. Probing the cytotoxicity of CdSe quantum dots with surface modification. Materials Letters 2007, 61, 1641-1644.
[116] Mattoussi, H. Self-assembly of CdSe-ZnS quantum dot bioconjugates using an engineered recombinant protein. Journal of the American Chemical Society 2000, 122(49), 12142-12150.
[117] Clapp, A. R. Fluorescence resonance energy transfer between quantum dot donors and dye-labeled protein acceptors. Journal of the American Chemical Society 2004, 126(1), 301-310.
[118] Liu, Z. D. Light scattering sensing detection of pathogens based on the molecular recognition of immunoglobulin with cell wall-associated protein A. Analytica Chimica Acta 2007, 599, 279-286.
[119] Yu, Y. Synthesis of functionalized CdTe/CdS QDs for spectrofluorimetric detection of BSA. Spectrochim. Acta Part A: Mol. Biomol. Spectrosc. 2007, doi:10.1016/j.saa.2007.02.016.
[120] Stsiapura, V. Functionalized nanocrystal-tagged fluorescent polymer beads: synthesis, physicochemical characterization and immunolabeling application. Analytical Biochemistry 2004, 334, 257-265.
[121] Wang, H.-Q. Multicolor encoding of polystyrene microbeads with CdSe-ZnS quantum dots and its application in immunoassay. Journal of Colloid and Interface Science 2007, doi:10.1016/j.jcis.2007.08.065.
[122] Cordes, D. B.; Gamsey, S.; Singaram, B. Fluorescent quantum dots with boronic acid substituted viologens to sense glucose in aqueous solution. Angew. Chem. Int. Ed. 2006, 45, 3829-3832.
[123] Huang, C.-P. A new approach for quantitative determination of glucose by using CdSe/ZnS quantum dots. Sens. Actuators B: Chem. 2007, doi:10.1016/j.snb.2007.08.021.
[124] Holst, G.; Mizaikoff, B. Fiber optic sensors for environmental applications. In Handbook of Optical Fibre Sensing Technology; John Wiley & Sons, Ltd., 2002; pp 729-755.
[125] Grattan, K. T. V.; Sun, T. Fiber optic sensor technology: an overview. Sensors and Actuators A: Physical 2000, 82(1-3), 40-61.
[126] Wolfbeis, O. S. Fiber optic chemical sensors and biosensors. Anal. Chem. 2002, 72, 81R-89R.
[127] Wolfbeis, O. S. Fiber optic chemical sensors and biosensors. Anal. Chem. 2004, 76, 3269-3284.
[128] Butler, T. M.; MacCraith, B. D.; McDonagh, C. Leaching in sol-gel-derived silica films for optical pH sensing. Journal of Non-Crystalline Solids 1998, 224, 249-258.
[129] Hartmann, P.; Leiner, M. J. P.; Kohlbacher, P. Photobleaching of ruthenium complex in polymers used for oxygen optodes and its inhibition by singlet oxygen quenchers. Sensors and Actuators B 1998, 51, 196-202.
[130] Costa, V. C.; Shen, Y.; Bray, K. L. Luminescence properties of nanocrystalline CdS and CdS:Mn2+ doped silica-type glasses. Journal of Non-Crystalline Solids 2002, 304, 217-223.
[131] Litran, R. Confinement of CdS nanocrystals in a sonogel matrix. Journal of Sol-Gel Science and Technology 1997, 8, 275-283.
[132] Wang, Y. Optical responses of ZnSe quantum dots in silica gel glasses. Journal of Crystal Growth 2004, 268, 580-584.
[133] Arachchige, I. U.; Brock, S. L. Sol-gel methods for the assembly of metal chalcogenide quantum dots. Acc. Chem. Res. 2007, 40, 801-809.
[134] Reisfeld, R.; Saraidarov, T. Innovative materials based on sol-gel technology. Optical Materials 2006, 28, 64-70.
[135] Bullen, C. Incorporation of a highly luminescent semiconductor quantum dot in ZrO2-SiO2 hybrid sol-gel glass film. J. Mater. Chem. 2004, 14, 1112-1116.
[136] Ferreira, P. M. S. Langmuir-Blodgett manipulation of capped cadmium sulfide quantum dots. Thin Solid Films 2001, 389, 272-277.
[137] Crisp, M. T.; Kotov, N. A. Preparation of nanoparticle coatings on surfaces of complex geometry. Nano Lett. 2003, 3(2), 174-177.
[138] Barmenkov, Y. O.; Starodumov, A. N.; Lipovskii, A. A. Temperature fiber sensor based on semiconductor nanocrystallite-doped phosphate glasses. Applied Physics Letters 1998, 73(4).
[139] Jorge, P. A. S. Luminescence-based optical fiber chemical sensors. Fiber and Integrated Optics 2005, 24(3-4), 201-225.
[140] Jorge, P. A. S. Quantum dots as self-referenced optical fibre temperature probes for luminescent chemical sensors. Measurement Science & Technology 2006, 17(5), 1032-1038.
[141] Benrashid, R.; Velasco, P. High performance sol-gel spin-on glass materials; Waveguide Solutions: Charlotte, NC, 2005; p. 27.
[142] O'Donnell, K. P.; Chen, X. Temperature dependence of semiconductor band gaps. Applied Physics Letters 1991, 58(25), 2924-2926.
[143] Valerini, D. Temperature dependence of the photoluminescence properties of colloidal CdSe/ZnS core/shell quantum dots embedded in a polystyrene matrix. Phys. Rev. B 2005, 71, 235409.
[144] de Bastida, G. Quantum dots based optical fiber temperature sensors fabricated by layer-by-layer. IEEE Sensors Journal 2006.
[145] Bravo, J. Fiber optic temperature sensor depositing quantum dots inside hollow core fibers using layer by layer technique. EWOFS 2007, Napoli; Proc. SPIE 2007, 6619, 661919.
[146] Ruan, H. Self assembled optical detectors for optical fiber sensors. EWOFS 2007, Napoli; Proc. SPIE 2007, 6619, 66192W.
[147] Jorge, P. A. S. Simultaneous determination of oxygen and temperature using quantum dots and a ruthenium complex. EWOFS 2007, Napoli; Proc. SPIE 2007, 6619, 66191Y.
[148] Jorge, P. A. S. Applications of quantum dots in optical fiber luminescent oxygen sensors. Applied Optics 2006, 45(16), 3760-3767.
[149] Aoyagi, S.; Kudo, M. Development of fluorescence change-based, reagent-less optic immunosensor. Biosensors and Bioelectronics 2005, 20, 1680-1684.
[150] Meissner, K. E.; Holton, C.; Spillmann, W. B., Jr. Optical characterization of quantum dots entrained in microstructured optical fibers. Physica E 2005, 26, 377-381.
[151] Finlayson, C. E. Comment on "Optical characterization of quantum dots entrained in microstructured optical fibers" [Physica E 26 (2005) 377-381]. Physica E 2006, 31, 107-108.
[152] Meissner, K. E.; Holton, C.; Spillmann, W. B., Jr. Response to comment on "Optical characterization of quantum dots entrained in microstructured optical fibers". Physica E 2006, 31, 109-110.
[153] Yu, H. C. Y. Quantum dot and silica nanoparticle doped polymer optical fibers. Optics Express 2007, 15(16), 9989-9994.
[154] Galian, R. E.; Laferriere, M.; Scaiano, J. C. Doping of photonic crystal fibers with fluorescent probes: possible functional materials for optrode sensors. J. Mater. Chem. 2006, 16, 1697-1701.
[155] Rindorf, L. Towards biochips using microstructured optical fiber sensors. Anal. Bioanal. Chem. 2006, 385, 1370-1375.
[156] Hassani, A.; Skorobogatiy, M. Design of the microstructured optical fiber-based surface plasmon resonance sensors with enhanced microfluidics. Optics Express 2006, 14(24), 11616.
[157] Craighead, H. Future lab-on-a-chip technologies for interrogating individual molecules. Nature 2006, 442(7101), 387-393.
[158] Dittrich, P. S.; Manz, A. Lab-on-a-chip: microfluidics in drug discovery. Nature Reviews Drug Discovery 2006, 5(3), 210-218.
[159] Loonberg, M.; Carlsson, J. Lab-on-a-chip technology for determination of protein isoform profiles. Journal of Chromatography A 2006, 1127(1-2), 175-182.
[160] Riegger, L. Read-out concepts for multiplexed bead-based fluorescence immunoassays on centrifugal microfluidic platforms. Sensors and Actuators A 2006, 126, 455-462.
[161] Vossmeyer, T. Combinatorial approaches toward patterning nanocrystals. Journal of Applied Physics 1998, 84(7), 3664-3670.
[162] Meldrum, A. Micropixelated luminescent nanocrystal arrays synthesized by ion implantation. Advanced Materials 2004, 16(1), 31-34.
[163] Chen, C.-C. Self-assembly of monolayers of cadmium selenide nanocrystals with dual color emission. Langmuir 1999, 15, 6845-6850.
[164] Bertino, M. F. Patterning porous matrices and planar substrates with quantum dots. J. Sol-Gel Sci. Techn. 2006, 39, 299-306.
[165] Pompa, P. P. Fluorescence enhancement in colloidal semiconductor nanocrystals by metallic nanopatterns. Sensors and Actuators B 2007, 126, 187-192.
[166] Sapsford, K. E. Surface-immobilized self-assembled protein-based quantum dot nanoassemblies. Langmuir 2004, 20, 7720-7728.
[167] Addae-Mensah, K. A. A flexible, quantum dot-labeled cantilever post array for studying cellular microforces. Sensors and Actuators A 2007, 136, 385-397.
Figures and Tables
Left – Typical absorption spectrum of CdSe nanocrystal QDs. Right – Normalized emission spectra of different samples of core-shell QDs immobilized in a sol-gel matrix. Peak emissions from blue to red are 520 nm (CdSe/ZnS), 610 nm (CdSe/ZnS) and 680 nm (CdTe/ZnS). The spectrum of the excitation source, a blue LED (470 nm), is also shown.
Photoluminescent CdSe/ZnS QDs dispersed in toluene (UV light irradiation).
Scheme illustrating some of the methods for chemical surface modification of QDs.
Emission spectra of acrylic/QD nanospheres acting as: (a) single K+ sensors after addition of KHCO3 solutions; (b) single Cl- sensors after addition of NaCl solutions; (c) K+/Cl- multi-ion sensors after addition of KCl solutions. Concentrations: 0 mM (black), 2 mM (red), 5 mM (green), 10 mM (blue), 20 mM (cyan), 50 mM (pink) and 200 mM (yellow). Fluorescent signals were recorded between 450 and 750 nm (λex = 400 nm) with a spectrofluorimeter (Cary Eclipse, Varian) in a 96-well microplate. For all samples the pH was fixed at 7.0 using phosphate buffer. [101] - Reproduced by permission of The Royal Society of Chemistry.
Specific labelling of live cells with QDs. (a) Schematic representation of the QD antibody conjugation strategy. (b) Labelling of cell membranes with the QD bioconjugates: only cells expressing detectable levels of Pgp–GFP were labelled, those that did not express Pgp–GFP (marked with arrows) did not bind with QD probes. Yellow coloring in the fluorescence image indicates an overlap of green (Pgp–GFP) and red (QDs bioconjugate) fluorescence emission. (See Ref. [109, 116] for further details) Reproduced from Trends in Cell Biology, 14, Jaiswal, J. K. and Simon, S. M., Potentials and pitfalls of fluorescent quantum dots for biological imaging, 497-504, Copyright (2004), with permission from Elsevier.
Temperature response of the luminescence emission of sol-gel immobilized CdSe/ZnS nanocrystals for a range of 11°C to 48°C: a) Spectral response of QD520; b) Peak emission wavelength of QD600 as a function of temperature.
a) Luminescent intensity of QD600 as a function of temperature for three levels of LED optical power (100%, 90%, and 80%); b) Corresponding normalized outputs (SQD).
Reflection (a) and transmission (b) configurations used to interrogate simultaneously two different samples of sol-gel glass doped with semiconductor nanocrystals.
a) Spectral response of two distinct QD samples (QD600 and QD680) to independent changes of temperature (inset: normalized SQD signals for both samples as a function of the applied temperature – only the temperature of QD600 was changed); b) Normalized responses SQD600 and SQD680 during a time interval in which alternate independent temperature changes were applied to each sample.
Sensor output when subjected to simultaneous changes of oxygen and temperature: (a) O2 measured by Ru(dpp); (b) temperature measured by the QD; (c) temperature-compensated O2 measurement.
a) Sensor response to O2/N2 saturation cycles while the excitation optical power was slowly changed from 100% to 70%. Both the raw luminescent output signal of Ru(bpy) and the ratiometric signal obtained using the QD luminescence are shown. b) A picture of the luminescent QD and oxygen-sensing samples excited using a fiber bundle.
Spectral responses in saturated atmospheres of N2 and O2 (inset: a scheme of the sensing configuration): (a) Ru(bpy); (b) Ru(bpy) + long-pass filter (600 nm) + QD680.
a) Schematic diagram of the reagent-less fiber-optic fluorescent immunosensor. b) Typical relationship between the IgG concentration and the fluorescence intensity change of QD-protein A on a glass plate [149]. Reproduced from Biosensors and Bioelectronics, 20, Satoka Aoyagi and Masahiro Kudo, Development of fluorescence change-based, reagent-less optic immunosensor, 1680-1684, Copyright (2005), with permission from Elsevier.
(A) Fluorescence photograph of the PCF end containing green QDs (2.4 nm) after addition of TEMPO and recovery. (B) Time lapse fluorescence photographs (from left to right) before adding TEMPO, immediately after, 5, 10, and 17 minutes after addition. Photographs were taken in normal (A) and transverse (B) mode. Small color differences with angle are common in photonic crystals. (C) Stern Volmer plot for quenching of green QD by TEMPO. The numbers in (C) indicate approximately the delay following addition, with ‘1’ corresponding to data before TEMPO addition. [154] - Reproduced by permission of The Royal Society of Chemistry.
A) Overlay of traction forces on a HASM cell fixed and stained on quantum dot-labeled posts. B) SEM profile of a section of the BoN, with posts 2 μm in diameter, 5 μm spacing and 7 μm tall [167]. Reproduced from Sensors and Actuators A, 136, Addae-Mensah et al., A flexible, quantum dot-labeled cantilever post array for studying cellular microforces, 385-397, Copyright (2007), with permission from Elsevier.
Table 1 – Some representative examples of optical sensing with QDs immobilized in solid hosts.

QD coating | Matrix | Measurand | Mechanism | Features | Platform | Ref.
CdSe | SiO2 sol-gel, PLMA | Temperature | Luminescence quenching and bandgap shift | 100–315 K, 0.1 nm/K | Glass slides | [91]
CdSe | PMMA | Methanol and toluene | Luminescence quenching and enhancement | Selectivity, PCA | Glass slides | [98]
CdSe-ZnS | Acrylic (nanospheres) | K+, Cl- | FRET with chromophores and luminophores | Dual sensing | Latex microspheres | [101]
CdSe-CdS | PVC/DOS | Na+ | Ion exchange and energy transfer | Selectivity | Microspheres | [102]
CdSe-ZnS | MIP | Caffeine | Luminescence quenching | Selectivity | Ground polymer | [103]
CdSe-ZnS | Polystyrene | IgG | Affinity binding with labelled goat anti-human IgG | QDs used for color coding | Polymer microbeads | [121]
Table 2 – QD applications in optical fiber sensing.

QD coating | Matrix | Measurand | Mechanism | Features | Fiber platform | Probe | Ref.
CdSe | Phosphate glass | Temperature | Bandgap shift (absorption) | 0–150 °C, 0.12 nm/K | Multimode | Extrinsic | [138]
CdSe | Schott glass filter | Temperature | Bandgap shift (absorption) | Simultaneous detection of O2 and temperature | Multimode | Extrinsic | [139]
CdSe-ZnS-TOPO; CdTe-ZnS | Sol-gel | Temperature | Luminescence quenching and bandgap shift | Self-referenced, multiplexed | Fiber bundle | Extrinsic | [140]
CdTe-PDDA | LbL | Temperature | Luminescence quenching | 30–100 °C, 0.2 nm/°C | Multimode/tapered | Intrinsic | [144]
CdTe-PDDA | LbL | Temperature | Luminescence quenching | 30–100 °C, 0.2 nm/°C | Hollow core | Intrinsic | [145]
CdSe-ZnS-TOPO + Ru(dpp) | Sol-gel | O2 | Ratiometric detection | QD as intensity reference | Fiber bundle | Extrinsic | [148]
CdSe-ZnS-TOPO + Ru(dpp) | Sol-gel | O2 & temperature | Luminescence quenching and bandgap shift | Simultaneous detection of O2 and temperature | Multimode | Extrinsic | [147]
CdTe-ZnS + Ru(dpp) | Sol-gel | O2 | QD excited by O2-sensing dye | O2 sensitivity at higher λ | Fiber bundle | Extrinsic | [148]
CdSe (solution) | PCF holes | Oxidative species | Luminescence quenching | Partially reversible | PCF | Intrinsic | [154]
Qdot655-Protein A | Glass plate | IgG | FRET quenching | Fast reagent-less immunodetection | Fiber bundle | Extrinsic | [149]
Masato Hisakado
This econophysics work studies the long-range Ising model of a finite system with N spins, exchange interaction J/N, and external field H as a model for a homogeneous credit portfolio of assets with default probability P_d and default correlation ρ_d. Based on a discussion of the (J, H) phase diagram, we develop a perturbative calculation method …
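A hedged numerical sketch of the underlying model: Metropolis sampling of the infinite-range Ising model with coupling J/N and field H, reading spin −1 as "default". The parameter values, temperature convention, and the estimator for ρ_d below are illustrative choices, not the paper's calibration of (J, H) to (P_d, ρ_d).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N=50, J=1.0, H=0.2, T=1.0, sweeps=3000, burn=500):
    """Metropolis sampling of the infinite-range Ising model with coupling
    J/N and external field H (spin -1 read as 'default').  All parameter
    values here are illustrative."""
    s = rng.choice([-1, 1], size=N)
    M = s.sum()
    p1 = p2 = 0.0
    for t in range(sweeps):
        for _ in range(N):
            i = rng.integers(N)
            # energy change for flipping spin i in
            # E = -(J/N) * sum_{i<j} s_i s_j - H * sum_i s_i
            dE = 2.0 * s[i] * ((J / N) * (M - s[i]) + H)
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                M -= 2 * s[i]
                s[i] = -s[i]
        if t >= burn:
            D = np.count_nonzero(s == -1)          # defaulted names
            p1 += D / N                            # single-name default prob.
            p2 += D * (D - 1) / (N * (N - 1))      # joint default prob. of a pair
    n = sweeps - burn
    p1, p2 = p1 / n, p2 / n
    rho = (p2 - p1 ** 2) / (p1 * (1.0 - p1))       # default correlation
    return p1, rho

Pd, rho_d = simulate()
print(f"P_d ~ {Pd:.3f}, rho_d ~ {rho_d:.3f}")
```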
This paper generalizes Moody's correlated binomial default distribution for homogeneous (exchangeable) credit portfolios, introduced by Witt, to the case of inhomogeneous portfolios. As inhomogeneous portfolios, we consider two cases. In the first case, we treat a portfolio whose assets have uniform default correlation and non-uniform default …
Observational learning is an important information aggregation mechanism. However, it occasionally leads to a state in which an entire population chooses a suboptimal option. When this occurs, and whether it constitutes a phase transition, remain unanswered questions. To address them we perform a voting experiment in which subjects answer a two-choice quiz …
We consider a situation where one has to choose an option with multiplier m. The multiplier is inversely proportional to the number of people who have chosen the option and is proportional to the return if it is correct. If one does not know the correct option, we call him a herder; there is then a zero-sum game between the herder and the other people who …
The soliton structure of a gauge theory proposed to describe chiral excitations in the multi-layer Fractional Quantum Hall Effect is investigated. A new type of derivative multi-component nonlinear Schrödinger equation emerges as an effective description of the system that supports novel chiral solitons. We discuss the classical properties of the solutions and …
We study a simple model for social learning agents in a restless multiarmed bandit. There are N agents, and the bandit has M good arms that change to bad with probability q_c/N. If the agents do not know a good arm, they look for it by random search (with success probability q_I) or copy the information of other agents' good arms (with the …
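Since this abstract is truncated, the copying rule below is an assumed reading of the model; the sketch only illustrates the kind of Monte Carlo one might run, with invented parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)

def informed_fraction(N=100, qc=1.0, qI=0.05, p_copy=0.5, steps=4000):
    """Toy restless-bandit dynamics: knows[i] means agent i currently
    holds a good arm.  The copying rule is an assumed reading of the
    truncated abstract; parameter values are invented."""
    knows = np.zeros(N, dtype=bool)
    history = []
    for _ in range(steps):
        # a held good arm turns bad with probability qc/N
        knows &= rng.random(N) >= qc / N
        searchers = ~knows
        # searchers either imitate (success prob = current informed fraction)
        # or search independently (success prob qI)
        imitate = rng.random(N) < p_copy
        succ_imitate = rng.random(N) < knows.mean()
        succ_search = rng.random(N) < qI
        knows |= searchers & np.where(imitate, succ_imitate, succ_search)
        history.append(knows.mean())
    return float(np.mean(history[steps // 2:]))

print(f"steady-state informed fraction ~ {informed_fraction():.3f}")
```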
Schrödinger equation: P(r)>1 ?
Nov 19, 2005 #1
I have the solution of the Schrödinger equation for the ground state of the hydrogen electron. The solution is:
$$\psi_{100}(r) = \frac{1}{\sqrt{\pi a_0^3}}\, e^{-r/a_0}$$
If I want to calculate some probability values I do this with:
$$P(r) = |\psi_{100}(r)|^2 = \frac{1}{\pi a_0^3}\, e^{-2r/a_0}$$
If I set r=10^-13 I get a value that is greater than 1, I get P(10^-13)=10^34.
This cannot be. What's wrong with my probability formula?
Nov 19, 2005 #2
P is a probability... density. To get the probability that it lies in a region, you need to integrate over that region.
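To make the distinction concrete, here is a small numerical check (not from the thread itself): integrating the radial probability density 4πr²|ψ₁₀₀|² of the hydrogen ground state over a shell always gives a number between 0 and 1, even though the density itself, having units of 1/length, can take numerically huge values.

```python
import numpy as np

A0 = 5.29177e-11  # Bohr radius in metres

def radial_density(r):
    """4*pi*r^2 |psi_100|^2 with psi_100 = exp(-r/a0)/sqrt(pi a0^3):
    probability per unit radius (units 1/m); the factors of pi cancel."""
    return 4.0 * r ** 2 * np.exp(-2.0 * r / A0) / A0 ** 3

for R in (1e-13, A0, 5 * A0, 20 * A0):
    r = np.linspace(0.0, R, 100001)
    prob = np.trapz(radial_density(r), r)
    print(f"P(r < {R:.3e} m) = {prob:.6g}")
```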
Nov 19, 2005 #3
Which function are the graphs plotted with? (I mean ones like this: …)
But to calculate the charge distribution of the electron I can take:
$$\rho(r) = -e\,|\psi_{100}(r)|^2$$
Thanks for your help.
Nov 19, 2005 #4
I'm not familiar with those graphs -- I might look at the Bernoulli or Airy functions, but if it's not them, I don't think I could guess. (Also, I think I've read that the Airy functions appear in dealing with the hydrogen atom)
How does one derive the Schrödinger equation for a system with a position-dependent effective mass? I encountered this equation when I first studied semiconductor heterostructures. All the books that I referred to take the equation as the starting point, without giving any insight into its form, which reads as
$$\big[-\frac{\hbar^2}{2}\nabla \frac{1}{m^*}\nabla + U \big]\Psi ~=~ E \Psi. $$
I feel that it has something to do with conservation of the probability current, and that the derivation should use the continuity equation, but I am not sure.
Hi ballkikhaal - I edited out the part of your question asking about a book, because we limit the number of book recommendation questions on the site. See if anything in this question helps you. – David Z Oct 10 '12 at 16:23
5 Answers
I would be very surprised if you managed to find a strict mathematical derivation of the Schrödinger equation anywhere – at least I have not encountered one until now. However, it might be worth pointing out that the ‘general’ time-dependent Schrödinger equation, which is often taken as an axiom of quantum mechanics, is usually
$$i \hbar \partial_t \Psi = \hat H \Psi \quad .$$
In the case of a stationary Hamiltonian (usually $U(x,t) \equiv U(x)$), this equation separates and you get the stationary Schrödinger equation, namely
$$ \hat H \Psi = E \Psi \quad ,$$
that is, an eigenvalue equation for the Hamiltonian.
Given this equation, it is then relatively simple to work out the form of the Hamiltonian (in your case, $-\frac{\hbar^2}{2} \nabla \frac{1}{m^\star} \nabla + U$) and plug it into the equation. The exact form of the Hamiltonian is usually guesswork based on observation and analogies to classical mechanics. In general, we have
$$ \hat H = \hat T + \hat U $$
where $\hat T$ and $\hat U$ denote the operators for kinetic and potential energy, respectively.
It is worth noting that you can derive the continuity equation (which is identical to probability conservation in this case) from the Schrödinger equation by adding the complex conjugate of the Schrödinger equation to itself.
Thank you for the answer, but what I am asking is how to derive it within the regime of the effective mass approximation, and that too when the mass has a spatial profile... for example, in the case of an Al/GaAs high electron mobility transistor we have a position-dependent mass. – baalkikhaal Oct 10 '12 at 16:55
Are you looking for a derivation of the Hamiltonian $\hat H$ or of the Schrödinger equation? I am positive that neither probability conservation nor the continuity equation have anything to do with the former. – Claudius Oct 10 '12 at 16:57
I have come across this equation in Hamaguchi on page 347 <books.google.co.in/…; – baalkikhaal Oct 10 '12 at 18:13
Where do you think Schrodinger equations come from, if not by some sort of a derivation? – Ron Maimon Oct 10 '12 at 19:31
The answer is not by guesswork, it is from the tight-binding approximation with CP invariance to guarantee that the hopping parameter is real, and then Hermiticity guarantees the hopping is symmetric and equal to the given Hamiltonian. If the hopping is slowly locally varying, then you get the Hamiltonian they say. The Schrodinger equation which is axiomatic is not as specific as the Schrodinger equation in space, which has a specific ansatz for the kinetic term which can be justified from tight binding, as Feynman does in his lectures. – Ron Maimon Oct 10 '12 at 19:47
The derivation is straightforward if you consider that the source of the effective mass is a slowly varying hopping parameter in a tight-binding (lattice particle) model. Here you have a particle on a square lattice with a probability amplitude to hop left, right, up and down, forward, backward. The main physical requirement is Hermiticity, which in 1d can be used (with a phase choice on the wavefunction) to make the hopping amplitudes everywhere real.
Once you do this, there is a real amplitude r(n) at site n to hop one square to the right, and an amplitude to hop one square to the left, which, by Hermiticity and reality, must be r(n-1), i.e. the complex conjugate of the amplitude to hop right from position n-1. So the amplitude equation is
$$ i{dC_n\over dt} = r(n-1) C_{n-1} - (r(n-1)+r(n))C_n + r(n) C_{n+1} $$
This is, when r is slowly varying, equivalent to the continuum equation found by Taylor expanding and keeping only the most relevant terms:
$$ i {d\psi \over dt} = {1\over 2} {\partial \over \partial x} (r(x) {\partial\over \partial x} \psi(x)) $$
As Feynman noted but never published (Dyson published this comment posthumously, in an American Journal of Physics paper titled something like "Feynman's derivation of the Maxwell equations from the Schrodinger equation"), Dirac's phase trick doesn't work in higher dimensions, because you can't fix all the phases. Then the commutators have a magnetic field addition, and to make it consistent, the magnetic field has to end up obeying Maxwell's equations, since the phase rotation gives a U(1) symmetry. This is not a true derivation of Maxwell's physics from quantum mechanics; it is just a way of showing that you need the extra assumption of CP invariance to make the hopping hamiltonian real (which is true).
Then with the extra assumption, you just get
$$ i {d\psi\over dt} = {1\over 2} \nabla \cdot (t(x) \nabla \psi) + V(x) \psi $$
Where I have added back the potential. This is the continuum limit of a tight binding model with spatially slowly varying hopping, or inverse effective mass.
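As a check on the discrete model above, one can build the hopping Hamiltonian numerically and confirm that it is real symmetric, hence Hermitian with real spectrum. The hopping profile below is an arbitrary slowly varying choice, not taken from any specific physical system.

```python
import numpy as np

def hopping_hamiltonian(r):
    """H for the 1d chain above: H[n, n+1] = H[n+1, n] = r[n] and
    H[n, n] = -(r[n-1] + r[n]) (open ends), matching the amplitude equation."""
    N = len(r) + 1
    H = np.zeros((N, N))
    for n in range(N - 1):
        H[n, n + 1] = H[n + 1, n] = r[n]
        H[n, n] -= r[n]
        H[n + 1, n + 1] -= r[n]
    return H

# An arbitrary slowly varying hopping profile ~ slowly varying 1/m*(x).
x = np.linspace(0.0, 1.0, 199)
r = 1.0 + 0.3 * np.sin(2 * np.pi * x)
H = hopping_hamiltonian(r)
assert np.allclose(H, H.T)              # real symmetric, hence Hermitian
E = np.linalg.eigvalsh(H)
print(f"real spectrum in [{E[0]:.3f}, {E[-1]:.3f}]")   # non-positive, top mode ~ 0
```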
In addition to Claudius' and Ron Maimon's answers, I would like to make three comments:
1. Classically, the Hamiltonian function for the effective mass approximation reads $$\tag{1} H({\bf r}, {\bf p})~:=~\frac{{\bf p}^2}{2m^*({\bf r})}+V({\bf r}).$$
2. Quantum mechanically, when one quantizes the classical model (1), one should pick a self-consistent choice for the Hamiltonian operator $\hat{H}$. It is natural to replace the classical variable ${\bf r}$ and ${\bf p}$ in the Hamiltonian (1) with the operators $$\tag{2}\hat{\bf r}~=~{\bf r} \qquad\text{and}\qquad \hat{\bf p}~=~\frac{\hbar}{i}\nabla $$ (in the Schrodinger representation). But which operator ordering prescription should one choose? One natural choice, which (under appropriate boundary conditions) makes the Hamiltonian Hermitian, is $$\tag{3} \hat{H}~:=~\hat{\bf p}\cdot \frac{1}{2m^*(\hat{\bf r})}\hat{\bf p}+V(\hat{\bf r})~=~-\frac{\hbar^2}{2}\nabla\cdot \frac{1}{m^*({\bf r})}\nabla+V({\bf r}).$$
3. Finally, let us mention a somewhat related/generalized Hermitian Hamiltonian operator $$\tag{4} \hat{H}~=~-\frac{\hbar^2}{2}\Delta_g +V({\bf r}), $$ which may give another useful (anisotropic) effective mass model. Here $\Delta_g$ is the Laplace-Beltrami operator for a Riemannian $3\times 3$ metric $g_{ij}=g_{ij}({\bf r})$, which, roughly speaking, may be viewed as an (anisotropic) effective mass tensor.
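The ordering in Eq. (3) can also be verified numerically: a conservative finite-difference discretization of $-\frac{1}{2}\partial_x m^{*-1} \partial_x + V$ produces a symmetric matrix, hence real eigenvalues. The mass profile, potential, and grid below are illustrative assumptions only.

```python
import numpy as np

def pdm_hamiltonian(w_mid, V, h):
    """Conservative discretization of H = -(1/2) d/dx (1/m*(x)) d/dx + V(x)
    (hbar = 1).  w_mid holds 1/m* evaluated at the N-1 grid midpoints."""
    N = len(V)
    H = np.diag(np.asarray(V, dtype=float))
    for n in range(N - 1):
        w = 0.5 * w_mid[n] / h ** 2
        H[n, n] += w
        H[n + 1, n + 1] += w
        H[n, n + 1] -= w
        H[n + 1, n] -= w
    return H

x = np.linspace(-5.0, 5.0, 400)
h = x[1] - x[0]
m_star = 1.0 + 0.5 * np.tanh(x)              # illustrative mass profile, m* > 0
w_mid = 2.0 / (m_star[:-1] + m_star[1:])     # 1/m* from the mass averaged onto midpoints
V = 0.5 * x ** 2                             # illustrative confining potential
H = pdm_hamiltonian(w_mid, V, h)
assert np.allclose(H, H.T)                   # the ordering is indeed Hermitian
print("lowest energies:", np.linalg.eigvalsh(H)[:3])
```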
For a derivation of the PDM Schrodinger equation see K. Young, Phys. Rev. B 39, 13434–13441 (1989), "Position-dependent effective mass for inhomogeneous semiconductors". Abstract: A systematic approach is adopted to extract an effective low-energy Hamiltonian for crystals with a slowly varying inhomogeneity, resolving several controversies. It is shown that the effective mass $m_R$ is, in general, position dependent and enters the kinetic energy operator as $-\nabla (m_R^{-1}) \nabla/2$. The advantage of using a basis set that exactly diagonalizes the Hamiltonian in the homogeneous limit is emphasized.
Link in answer(v2) behind paywall. @MKB: In the future please link to abstract pages rather than pdf files, e.g. prb.aps.org/abstract/PRB/v39/i18/p13434_1 – Qmechanic Nov 30 '12 at 22:54
A Hamiltonian must be self-adjoint. The equation must also reduce to the familiar equation in the case of a constant mass. The form of the operator is then already determined, as the only simple self-adjoint generalization of the constant-mass Schroedinger equation to the position-dependent case.
If you specialize to 1 dimension, you get the Sturm–Liouville equation. At … you can find a discussion of its self-adjointness. Everything generalizes to the PDE case.
European physicists have won the race to observe zitterbewegung, the violent trembling motion of an elementary particle that was predicted by Erwin Schrödinger in 1930. To observe this phenomenon, the team simulated the behaviour of a free electron with a single, laser-manipulated calcium ion trapped in an electrodynamic cage.
They took this approach because it is currently impossible to detect the quivering of a free electron, which has an amplitude of just 10^-13 m and a frequency of 10^21 Hz. Computational simulations are also ruled out, because today's computers have insufficient power and memory capabilities.
The researchers claim that their triumph may also serve as an important step towards using trapped ions and atoms to simulate high-temperature superconductivity, magnetism and even black holes.
Relativistic realization
According to Christian Roos at the University of Innsbruck, Austria, one of the keys to success was to make their non-relativistic ion behave as if it was a relativistic particle. This is crucial because zitterbewegung is predicted by the Dirac equation, which describes relativistic quantum mechanics.
Roos did the work along with colleagues at Innsbruck and the University of the Basque Country. "When the right conditions are met, the Schrödinger equation that describes this ion as a quantum system looks identical to the Dirac equation of the free electron," he explained. The trapped, laser-manipulated ion can then be studied as an analogue of a relativistic free electron.
Calcium ions were chosen because they can be excited with visible wavelength lasers. "In addition, calcium's level structure is sufficiently simple to allow the experimentalist a near-perfect control over the internal states of the ion, but complex enough to carry out the quantum measurements needed for inferring the position of the particle."
Simulations begin by putting the calcium ion into a particular quantum state. This is allowed to evolve for a certain time, before the researchers measure the position of the ion.
Tiny movements
"In these measurements the particle moves by much less than the wavelength of visible light, so we cannot directly use an imaging technique to determine the position of the ion," explains Roos. "Instead, we use a suitably tailored laser-ion interaction that maps the information about the position of the particle onto the internal states of the ion." The ion's position is then determined from its internal state, and this uncovers the quivering motion.
The act of measuring the ion's position collapses its wave function, so the researchers have to reconstruct the desired initial wave function for every single measurement. This process is relatively quick, however, and they are able to carry out 50 experiments per second.
Adjusting the output of the laser alters the ratio of the simulated particle's kinetic energy to its rest-mass energy, and opens the door to studies of relativistic and non-relativistic physics.
The researchers found that changes to the particle's effective mass while its momentum was kept constant led to the disappearance of zitterbewegung in the non-relativistic and highly relativistic limits (large and small effective masses, respectively). However, the quivering motion was clearly present in the regime between these limits.
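The single-particle dynamics at the heart of the experiment is simple enough to integrate on an ordinary computer – it is the many-body quantum simulations mentioned above, not this toy model, that overwhelm classical machines. Below is a minimal sketch (an illustration, not the group's code) that evolves a 1+1D Dirac wave packet exactly in momentum space and prints its mean position; varying the mass scans between the regimes just described.

```python
import numpy as np

# Minimal sketch (illustration only, not the experimenters' code):
# zitterbewegung of a 1+1D Dirac wave packet, H(p) = c p sigma_x + m c^2 sigma_z,
# evolved exactly mode by mode in momentum space. Vary m (keeping m > 0 in
# this naive implementation) to scan between the regimes described above.
hbar = c = 1.0
m = 1.0
N, L = 1024, 200.0
x = (np.arange(N) - N // 2) * (L / N)
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

p0, sigma = 1.0, 5.0                      # mean momentum and packet width
psi0 = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * p0 * x / hbar)
a0, b0 = np.fft.fft(psi0), np.zeros(N, complex)  # upper/lower spinor parts

E = np.sqrt((c * p)**2 + (m * c**2)**2)

def evolve(a, b, t):
    # exp(-i H t / hbar) = cos(Et/hbar) - i sin(Et/hbar) H/E for each mode
    cs, sn = np.cos(E * t / hbar), np.sin(E * t / hbar)
    return (cs * a - 1j * sn * (m * c**2 * a + c * p * b) / E,
            cs * b - 1j * sn * (c * p * a - m * c**2 * b) / E)

for t in np.linspace(0.0, 10.0, 11):
    a, b = evolve(a0, b0, t)
    rho = np.abs(np.fft.ifft(a))**2 + np.abs(np.fft.ifft(b))**2
    rho /= rho.sum()
    print(f"t = {t:5.2f}   <x> = {np.sum(x * rho):+.4f}")  # drift + trembling
```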
Inspirational work
Jay Vaishnav from Bucknell University, Pennsylvania, says that the work of Roos and his co-workers represents a major step forward for quantum mechanical simulations, and she believes that it will inspire other research groups to attempt similar things.
She says that the building of an atomic version of the Datta-Das transistor – a spin-based device that has never successfully been built with electrons – could lead on from Roos' work. "The workings of this transistor are based on creating a relativistic set-up using cold atoms."
The work is reported in Nature. |
cbc57d8cd9467d12 | Monday, July 15, 2013
Bohmian mechanics, a ludicrous caricature of Nature
Some people can't get used to the fact that classical physics in the most general sense – a class of theories that identify Nature with the objectively well-defined values of certain (classical) degrees of freedom that are observable in principle and that evolve according to some (classical) equations of motion, usually differential equations that depend on time, mostly deterministic ones – has been excluded as a possible fundamental description of Nature for almost a century.
Classical physics has been falsified and the falsification – a death for a theory – is an irreversible event. Nevertheless, those people would sleep with this zombie and do anything and everything else that is needed (but isn't sufficient) to resuscitate it. Of course, it's not possible to resuscitate it but those people just won't stop trying.
Bohmian mechanics, one of the main strategies to pretend that classical physics hasn't died and hasn't been superseded by fundamentally different quantum mechanics, was invented by Prince Louis de Broglie in 1927 who called it "the pilot-wave theory". In the late 1920s, the 1930s, and 1940s, physicists were largely competent so they didn't have any doubts that the pilot wave theory was misguided by its very own guiding wave ;-). Exactly 25 years later, the approach was revived by David Bohm who made the picture popular, largely because he was a fashionable, media-savvy commie (he's almost certainly the recipient of Wolfgang Pauli's famous criticism "not even wrong" that was ironically hijacked by aggressive Shmoitian crackpots in the recent decade). Prince Louis de Broglie liked the new life that apparently returned to the veins of his old sick theory so he didn't even care too much that his theory was going to be attributed to someone else and that the someone else was a Marxist rather than an aristocrat.
A constraint that defines Bohmian mechanics is simple: it should be a classical theory that emulates quantum mechanics as well as it can. The champions of the Bohmian theory know that getting the same predictions as quantum mechanics is the maximum goal they may dream about – they can never beat quantum mechanics – and they sort of realize that even this tie is too much to ask in general. Most of the Bohmian advocates seem to know that their theory can't be accurate, especially because of its fundamental conflict with relativity – but they don't seem to care. The fact that the Bohmian mechanics agrees with their fully discredited preconception that Nature is fundamentally classical is more important for them than the (in)accuracy of the predictions extracted from their pet theory.
It's straightforward to explain why it's possible to design a classical theory that parrots quantum mechanics when it comes to certain questions.
Bohmian mechanics is at least vaguely defensible in the non-relativistic quantum mechanical models only; in more general theories, it collapses completely. How does it rebuild non-relativistic quantum mechanics for one particle, for example?
Proper quantum mechanics of this system may be written down in Schrödinger's picture that dictates the following time evolution to the wave function:\[
i\hbar\frac{\partial}{\partial t}\psi(q,t)=-\sum_{i=1}^{N}\frac{\hbar^2}{2m_i}\nabla_i^2\psi(q,t) + V(q)\psi(q,t)
\] The way how this wave is evolved in agreement with the equation above contains all the "mathematical beef" of quantum mechanics for the given system and to get the right numbers, any classical caricature of quantum mechanics simply has to contain some objects that are pretty much equivalent to \(\psi(q,t)\). These objects are then assigned totally different, wrong interpretations in the caricatures but they must be there and they must evolve according to the same Schrödinger's equation.
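For concreteness, here is what evolving \(\psi\) according to the equation above looks like in practice – a minimal illustrative sketch (my addition to the text; \(\hbar = m = 1\), one particle, a harmonic well chosen as the example), using the standard split-operator method. This is the computation that any classical caricature has to import wholesale.

```python
import numpy as np

# Minimal sketch (purely illustrative): split-operator integration of the
# 1D Schrödinger equation for one particle in a harmonic well, hbar = m = 1.
N, L, dt = 512, 40.0, 0.002
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2

psi = np.exp(-(x - 3.0)**2).astype(complex)        # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))

expV = np.exp(-0.5j * V * dt)                      # half-step in V
expT = np.exp(-0.5j * k**2 * dt)                   # full kinetic step

for step in range(5000):                           # evolve to t = 10
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))

# Ehrenfest: <x>(t) = 3 cos t, so <x>(10) should be about -2.52
print("<x> at t = 10:", np.sum(x * np.abs(psi)**2) * (L / N))
```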
Bohmian mechanics buys \(\psi(q,t)\) and incorrectly interprets it as a classical wave – a field that has objective values and is in principle measurable. Of course, we know from quantum mechanics as well as experiments that the value of the wave function simply shouldn't be and isn't measurable in a single repetition of an experiment. So the Bohmian apologists must also invent convoluted mechanisms to make the wave unmeasurable – because it is unmeasurable according to the experiments – despite the fact that the wave function is fundamentally measurable in their theory.
Bohmian Rhapsody, via Dilaton.
Is this the real life? Is this just fantasy?
Caught by the guiding wave. No escape from reality.
Open your eyes. Look up to the skies and see:
I'm just [a] state vector, I need no images.
Because I'm easy come, easy go.
A little high, little low.
Anyway the [pilot] wave blows, doesn't really matter to me, to me.
The pilot-wave theory adopts \(\psi(q,t)\) as an objective classical wave – which it gives a new name, the "guiding wave" or "pilot wave" – but in order to agree with the fact that particles may be observed at sharp locations despite the fuzziness of the wave functions associated with them, they must add some additional degrees of freedom: the actual classical position of the particle. The defining philosophy of Bohmian mechanics is that the actual, classical position of the particle is "guided" by a function of the classical field emulating the wave function so that the probability distribution for the particle's positions remains what it should be according to quantum mechanics. For example, the laws that guide the actual classical particle must be such that they repel the particle from the interference minima in a double-slit experiment:
The right end of the picture (the photographic plate) shows denser and less dense regions, the interference maxima and minima.
Can you find the appropriate rules for one non-relativistic spinless quantum particle that is able to do it in a way that imitates quantum mechanics? You bet. All the tools are available in conventional quantum mechanics for this system. Recall that in quantum mechanics, \(\rho=|\psi(q,t)|^2\) is the probability density that the particle is sitting near location \(q\) at time \(t\). But quantum mechanics also allows you to define the probability current\[
\bold j = \frac{1}{m} \mathrm{Re}\left ( \psi^*\bold{\hat{p}}\psi \right )
\] Note that it is again sesquilinear (bilinear with one star) in the wave function. We act on the wave function by the momentum operator \(\bold{\hat{p}}=-i\hbar\nabla\), multiply the result by \(\psi^*\) just like when we calculated the probability density, take the real part, and divide it by the mass \(m\). You see that it only differs from the formula for the probability density by the extra operator \(\bold{\hat{p}}/m\), the operator of the velocity \(\bold{\hat v}\), inserted in the middle. The real part could have been added to the probability density as well because it was real to start with.
At any rate, if you define the probability density and the probability current correctly, they obey the continuity equation\[
\frac{\partial \rho}{\partial t} + \bold \nabla \cdot \bold j = 0.
\] The divergence of the probability current exactly agrees with the decrease of the probability density in the given region. It means that the probability current measures how the probability has to flow into/from a given infinitesimal volume if you want the probability density to change just like it should according to Schrödinger's equation.
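The check is one standard line: differentiate \(\rho = \psi^*\psi\), substitute Schrödinger's equation – the potential term cancels – and use \(\bold j = \frac{\hbar}{m}\,{\rm Im}\left(\psi^*\bold\nabla\psi\right)\), which is just the current above with \(\bold{\hat p} = -i\hbar\bold\nabla\) written out:\[
\frac{\partial\rho}{\partial t} = \frac{1}{i\hbar}\left[\psi^*(H\psi) - \psi(H\psi)^*\right] = \frac{i\hbar}{2m}\,\bold\nabla\cdot\left(\psi^*\bold\nabla\psi - \psi\bold\nabla\psi^*\right) = -\bold\nabla\cdot\bold j.
\]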
Now it's easy to realize that if you define a classical "velocity field"\[
\bold v = \frac{\bold j}{\rho},
\] it will be very useful for emulating quantum mechanics. It's not hard to prove that if you define Bohmian mechanics as the "classicalized" wave function together with a classical position \(\bold q(t)\) that evolves according to the "guiding equation"\[
\frac{\mathrm{d}\bold q}{\mathrm{d}t} = \bold v (\bold q(t)),
\] the trajectories of the classical particles will be repelled from the interference minima, attracted to the interference maxima, and will obey a more specific rule: If you imagine that the particles in the initial state are distributed according to the probability distribution given by \(\rho(\bold q,t)\), it will be true for the final state, too.
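To see the guiding equation in action, here is a minimal illustrative sketch (my addition; \(\hbar = m = 1\), with two free Gaussian packets standing in for the slits) that samples the initial dots from \(\rho\) and integrates \(\mathrm{d}\bold q/\mathrm{d}t = \bold v(\bold q)\):

```python
import numpy as np

# Minimal sketch (illustrative): integrate the guiding equation
# dq/dt = j/rho = Im(psi'/psi) for a free superposition of two Gaussian
# packets standing in for the two slits. hbar = m = 1.
s0, d = 1.0, 5.0                          # packet width, slit half-separation

def psi(x, t):
    st = s0 * (1 + 0.5j * t / s0**2)      # standard spreading Gaussian
    g = lambda x0: np.exp(-(x - x0)**2 / (4 * s0 * st))
    return g(-d) + g(+d)

def v(x, t, eps=1e-5):                    # velocity field j/rho
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return np.imag(dpsi / psi(x, t))

# draw initial positions from |psi|^2 by rejection sampling
rng = np.random.default_rng(0)
cand = rng.uniform(-3 * d, 3 * d, 100000)
w = np.abs(psi(cand, 0.0))**2
q = cand[rng.uniform(0, w.max(), cand.size) < w][:40]

dt = 0.01
for n in range(3000):                     # forward Euler up to t = 30
    q = q + v(q, n * dt) * dt
print(np.sort(q))   # dots bunch at interference maxima, avoid the minima
```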
This trick may be generalized to the case of \(N\) non-relativistic particles. In this case, the wave function \(\psi\) becomes a classical wave that is a function of the \(3N\)-dimensional configuration space. This configuration space is larger than the ordinary space and is "multi-local" and because we have this "multi-local" old-fashioned classical field, the theory becomes explicitly non-local and a violation of the Lorentz symmetry, at least in principle, is inevitable.
I would like to emphasize that it's no surprise at all that it's possible to find the equation that evolves the probability distribution in the right way. Imagine that you start with a wave function \(\psi(\bold q)\) at some time \(t_0\). Throw a trillion dots – particles – into the space, distributed according to \(\rho = |\psi|^2\). Do the same thing for the final moment \(t_1\) when the wave function is different. You will have two configurations of trillions of particles. It's not shocking that you may "connect the dots" from the initial state to the final state in some way.
A way that is simple enough, one based on the probability current and described above, gives you one of the solutions. But it's not the only solution. In reality, the "initial dots" could be connected with the "final dots" in infinitely many ways (well, a "mere" trillion factorial if you only have a trillion dots). In the continuous language, you could e.g. make the particles move along spirals inside the cylinders that surround the interference maxima. Is one way to connect the dots better than others?
Of course, it's not. All of them are equally good. Quantum mechanics commands you to learn something about the initial state – some wave function or density matrix that encode the initial probability distribution – and it allows you to predict the probabilities for the final state. But it doesn't tell you which of the initial particles is connected with which final particle, i.e. how to connect the dots. It doesn't inform you about any preferred classical trajectory that connects them (and Feynman's approach orders you to sum over all trajectories). If you could actually "measure" this permutation that determines how the dots are connected, quantum mechanics would be shown incomplete.
However, it's totally obvious that there's no way to measure the trajectories or permutations inside. The particles just don't have well-defined, in principle measurable trajectories between the measurements for the usual Heisenberg uncertainty principle-based reasons. If you tried to measure the trajectory before the final measurement, you would change the experiment and destroy or damage the final interference pattern. So all the precise lines on the "caricature of the double-slit experiment"
are pure fantasy. They're just crutches for the people who need some specific picture of the intermediate states to be drawn. But the specific picture we drew is in no way better than infinitely many other pictures we could draw that would predict the same interference pattern, the same probability distributions for the final state. Everything we added because we wanted the physical system to have objective properties prior to the measurement – because we're bigots who can't accept the fact that classical physics has died – is unphysical. The added value is purely negative. Everything we added to get from proper quantum mechanics to Bohmian mechanics is rubbish. And many things we're forced to lose when we switch from quantum mechanics to Bohmian mechanics are essential.
Because the wave function has a probabilistic interpretation in proper quantum mechanics (it is a ready-to-cook meal from which one may quickly prepare various probability distributions by a calculation), it doesn't matter that it spreads. The spreading of the wave function doesn't make the world more fuzzy. It only makes our knowledge about the world more uncertain. But once we learn the answer to a question – e.g. about the position of a particle – the world fully regains the sharp character it boasted at the beginning. If you only know that the probabilities of 1,2,3,4,5,6 are 1/6 each for some die in Las Vegas, it doesn't mean that the die became a structureless ball or that the digits written on its sides have become fuzzy or mixed or smeared. It just means that we have an equally sharp cubic die but we just don't know its orientation in space. The uncertainties coming from quantum wave functions are analogous – they only differ from the "classical uncertainty" by their inevitability.
That's not the case of Bohmian mechanics. The wave function is interpreted as a classical field of a sort and it is objectively spreading. So something objective is being diluted all over the Universe. That's terrible because this objectively makes the Universe increasingly more fuzzy and bizarre. The useless parts of the guiding wave – the "classicalized" wave function – should be killed in some way because they became useless. But Bohmian mechanics doesn't imply anything of the sort. If you want to clean the garbage of the no-longer-needed branches of the wave function, you will have to add another independent contrived mechanism. Such a mechanism will be a new source of a violation of the Lorentz invariance.
(You also need a special mechanism that prepares the guiding wave in a certain initial state and one more mechanism that distributes the "actual particle" inside the appropriate distribution with the right odds because these two things don't follow from Bohmian mechanics as we have defined it above, either. Most of these things are ignored by the Bohmists. Note that with the right probabilistic interpretation – quantum mechanics directly connects the knowledge about the past with the knowledge about the future, without any new crutches in between – we don't need to invent any new mechanisms.)
I think that a sane, critically thinking person must be able to realize what he is doing if he is doing such things. He is drawing a ludicrous caricature of Nature – a physical system that is actually governed by the laws of proper quantum mechanics – that reproduces some properties of the correct, quantum theory. The project of drawing the caricature is motivated by the desire to defend a philosophical dogma that the world is fundamentally classical even though it is clearly not. If he has at least some conscience, he must feel analogously as if he were counterfeiting a $100 banknote. He must know that what he is producing isn't the "real thing"; it is just a forgery that can bring him greater personal benefits than the actual banknotes but that's where the advantages stop.
But every change from the proper quantum mechanics to the pilot-wave theory is clearly wrong – the "added value" is unquestionably negative. Because the Bohmists don't like the probabilistic character of the wave function, they turn it into a classical wave – the guiding wave. But a classical wave that spreads objectively makes the world ever more fuzzy. So one has to introduce new tricks to have a chance that this increasing fuzziness doesn't spoil the world. All these tricks – tricks that can't really ever be defined in such a way to imitate quantum mechanics completely accurately – have to be considered and added just in order to mask the fact that the wave function is simply not a classical field.
It's fair to say that the claim by quantum mechanics that the wave function is not an objectively real wave or field that can be in principle measured is something that we have proven by direct experiments. Attempts to pretend that the wave function is a classical wave are just attempts to mask the truth. I am confident that every Bohmist must ultimately realize it is so and he must be dishonest if he claims that his efforts are more justifiable than the efforts of creationists who are trying to obscure the explicit evidence in favor of evolution: they are exactly equally unjustifiable.
Moreover, it's sometimes being said or thought that the perfect emulation of quantum mechanics can be done. Because the invalidated dogma that Nature is fundamentally classical is holy for these bigots, they think that it should be done, too. But the truth is that it can't be done for a general physical system and for a general choice of observables we may measure in actual experiments described by general enough quantum theories.
Try to add the spin to a particle. If the logic of Bohmian mechanics – the wave function "is" a classical field and we should also add some classical values of a maximum set of commuting observables – were universally valid, it's clear that aside from the spinor-valued wave function \((c_{\rm up},c_{\rm down})\), we should also assume that Nature "objectively knows" about the classical bit of information that tells you whether the spin is "actually" up or down.
However, even the Bohmists realize that if every electron "objectively knew" whether its spin is up or down with respect to the \(z\)-axis, then the laws of physics would break the rotational symmetry because the \(z\)-axis would play a privileged role. Roughly speaking, the ferromagnets would always be oriented vertically, to mention an example. If the \(z\)-component of the classical angular momentum is quantized, it's totally obvious that the other components can't be quantized. A nonzero vector can't have integer (or half-integer) coordinates in each (rotated) coordinate system.
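A one-line example: rotate an angular momentum pointing along the \(z\)-axis by 45° around the \(y\)-axis,\[
(0,0,\hbar) \;\longrightarrow\; \left(\frac{\hbar}{\sqrt 2},\, 0,\, \frac{\hbar}{\sqrt 2}\right),
\] and the new components are no longer integer multiples of \(\hbar\).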
Because they sort of realize that the rotational symmetry holds exactly and the hypothesis that the classical value exists with respect to one axis would break the symmetry kind of maximally, they decide that the Bohmian rules must be "skipped" in the case of the spin – they just manually omit some degrees of freedom that should be there according to the general prescription of Bohmian mechanics and hope that the spin measurements are ultimately reduced to position measurements so that it doesn't hurt if some degrees of freedom are not doubled in the usual Bohmian way.
The reason why the case of the spin is obvious even to them is the fact that different components of the spin are non-commuting observables none of which is more "natural" than others. After all, they are exactly equally natural because they are related by the rotational symmetry.
While the spin is an obvious problem, the pathological character of Bohmian mechanics is much more general. Every (qubit-like) discrete information in quantum mechanics – information labeling a finite-dimensional Hilbert space – is incompatible with the Bohmian philosophy. Recall that Bohmian mechanics added "classical trajectories" \(\bold{\hat q}(t)\) and these coordinates were functions of time that evolved according to some differential equations. But that was only possible because the spectrum of the coordinates was continuous. If you think about observables with a discrete spectrum, it just doesn't work because they would have to "jump to a different, sharply separated discrete eigenvalue" at some points and there can't be any deterministic laws that would govern such jumps.
Quantum mechanics tells you that a quantum computer composed of a very large number of qubits may perfectly emulate any quantum system. But that's not the case in Bohmian mechanics. An arbitrarily large quantum computer is composed of qubits, e.g. many electron spins, and because the spin isn't accompanied by a classical bit, Bohmian mechanics is forced to say that an arbitrarily large quantum computer only contains the "classicalized" wave function but no additional classical information analogous to the classical trajectories. So for a quantum computer, the whole "redundant superstructure" (which is what Albert Einstein called these extra coordinates – he was a foe of the pilot-wave theory, despite being a disbeliever in quantum mechanics) has to be omitted. This is quite an inconsistency in the Bohmian treatment of different quantum systems. The actual reason behind the inconsistency is clear, of course: some physical systems may be caricatured by the pilot-wave trick, others can't. But in Nature, there actually isn't any qualitative difference (in principle observable difference) between these two classes of situations.
I said that Bohmian mechanics doesn't allow you to consistently treat the particles' spin or any other discrete degrees of freedom, for that matter. But the inadequacy of Bohmian mechanics is much worse than that. It really doesn't allow you to correctly deal with most observables in general quantum systems, not even with observables with a continuous spectrum. I have discussed similar problems in Bohmists and the segregation of primitive and contextual observables four years ago.
The problem is that Bohmian mechanics forces you to choose some observables that "really exist" – are encoded in the objective extra coordinates that are supplemented to the "classicalized" wave function. However, quantum mechanics implies that other observables just can't have a well-defined value at the same moment – because they don't commute with the first ones, stupid. That also means that Bohmian mechanics can't have any answers to questions about the value of these observables.
The Bohmian trajectories in the picture above pretend that a particle has an objective position and an objective velocity. But what about the orbital angular momentum \(\bold{\hat L} = \bold{\hat q}\times \bold{\hat p}\)? A basic result of quantum mechanics is that the spectrum of \(\bold{\hat L}_z\) is discrete; the eigenvalues are integer multiples of \(\hbar\). Already this elementary fact in quantum mechanics – even non-relativistic quantum mechanics – is completely inaccessible to Bohmian mechanics. The cross product of the classical position and the classical momentum of the "added Bohmian trajectories" isn't quantized at all. It has really nothing to do with the angular momentum that can be measured.
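The discreteness itself is elementary: in the azimuthal angle, \(\hat L_z = -i\hbar\,\partial/\partial\phi\), so its eigenfunctions are \(e^{im_\ell\phi}\) with eigenvalues \(m_\ell\hbar\), and single-valuedness of the wave function under \(\phi\to\phi+2\pi\) forces\[
e^{i m_\ell (\phi + 2\pi)} = e^{i m_\ell \phi} \;\Rightarrow\; m_\ell \in \mathbb{Z},
\] while the cross product of the added classical trajectory's position and momentum varies continuously.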
And be sure that the measurement of the angular momentum is often – e.g. for electrons in atoms – much more natural and "fundamental" than the measurement of the particles' positions or momenta. It's because its eigenstates are much closer to the energy eigenstates and those are the most natural basis of a Hilbert space because they describe stationary – and therefore lasting – states. But such a direct measurement of the discrete orbital angular momentum can't be done in Bohmian mechanics. Instead, Bohmian mechanics tells you that you have to continue the evolution of the wave function according to the laws stolen from proper quantum mechanics up to the moment when you can actually convert the original measurement to a measurement of a location, and hope that Bohmian mechanics knows how to emulate the measurements of positions. It isn't quite the case, either, but even if it were the case, Bohmian mechanics is just bringing an amazing degree of inconsistency into the way how different observables – different functions of the phase space – are treated. A sensible theory should treat all functions of the coordinates and momenta i.e. all functions in the phase space equally, following unified rules. Quantum mechanics obeys this criterion, Bohmian mechanics doesn't. We could say that just like the solipsists say that their own mind is the only physical system that may be claimed to be self-aware, Bohmian mechanics remains silent and reproducing the (accurately emulated) quantum evolution up to the moment when macroscopic positions are apparently being measured (those are the "conscious events" that are supposed to replace quantum mechanics with something else). But in the real world, there's nothing special about the minds of the solipsists (except that they belong to the set of crazy people) and there's also nothing special about the positions of macroscopic objects in comparison with many other observables we may define.
In quantum mechanics, you may directly construct operators for the angular momenta and ask about their possible values, eigenvalues, and about the predicted probabilities that the measured value will be one or the other. It doesn't matter whether the angular momenta belong to large or small or conscious or unconscious objects. Quantum mechanics allows you to deal with all observables equally. In Bohmian mechanics, those things matter. Effectively, any measurement has to be continued up to the moment when it imprints itself into a position of a macroscopic object which Bohmian mechanics claims to reproduce correctly.
A totally new minefield for Bohmian mechanics is relativity. The minimum consistent relativistic theories of quantum particles are quantum field theories (QFTs). They include the spin; I have already discussed the Bohmian problems with the spin. But there are infinitely many similar problems. For example, you may choose many different bases of the QFT Hilbert space. They may be eigenstates of the occupation number operators; eigenstates of field operator distributions \(\hat \phi(\bold{x})\), and so on. It is not clear at all which of these observables are added as the "extra classical trajectories" to Bohmian mechanics. In fact, it is totally obvious that none of the choices will behave correctly in all the experiments that may test a quantum field theory. Also, you can't add many of them or all of them (e.g. both positions and particles and classical values of the fields) because it would be clearly undetermined which of these "added", mutually conflicting classical degrees of freedom defines the "actual reality" that decides about a measurement.
Sometimes, the value of the field at a given point may be measured, especially when the frequencies are low. So it would seem like you need to add a "preferred classical field configuration" to the Bohmian version of a QFT. However, especially for high frequencies, the quantum field manifests itself as a collection of particles so you may want to add the trajectories of the particles instead. Moreover, even if you represent a QFT as a system describing many particles, your Bohmian theory won't be able to deal with the basic and most universal processes that must exist in a QFT or any other relativistic quantum theory such as the pair creation of a particle and an antiparticle and their destruction.
If individual particles evolve according to the "guiding wave" equations we discussed at the beginning, it's simply infinitely unlikely (the probability refers to the selection of the initial positions from the distribution) that they will ever collide with one another. Two random lines in a 3D space simply don't intersect one another. But if they don't directly collide, it means that they can't annihilate! To allow the particles to annihilate (and be pair-created) with the (experimentally proven) nonzero probability, you would need to introduce a totally non-local extra dynamics that sometimes allows the particles to jump to a completely different place; or you would have to allow the annihilation of particle pairs that don't coincide in space. Any such extra mechanism would force you to change the original laws of physics in a way that would almost certainly contradict some other experiments because the unmodified quantum laws simply work and it was a healthy strategy for you to emulate them "perfectly" at the very beginning. Such modifications would especially contradict some experimental tests of relativity because these modifications are so horribly nonlocal.
So you have no chance to construct an operational Bohmian caricature of a quantum field theory. Needless to say, the problems become even more extreme once you switch to quantum gravity i.e. string theory because many more observables have a discrete spectrum, there are many more ways to choose the bases, the nonzero commutators of various observables are more important than ever before, and Bohmian mechanics just can't prosper in such general quantum situations. On one hand, quantum gravity i.e. string theory is just another quantum theory. On the other hand, it is "more quantum" than all the previous quantum theories simply because the quantum phenomena affect many more questions that could have been thought of in the classical way if you worked with simpler quantum mechanical theories (for example, the spacetime topology – especially the number of Einstein-Rosen bridges in the spacetime – can't even be assigned a linear operator in a quantum gravity theory, as Maldacena and Susskind argued).
The non-local fields, collapses, non-local jumps needed for particle annihilations, and other things represent an inevitable source of non-locality that can, in principle, send superluminal signals and that consequently contradicts the Lorentz symmetry of the special theory of relativity. There's no way out here. If you attempt to emulate a quantum field theory in this Bohmian way, you introduce lots of ludicrous gears and wheels – much like in the case of the luminiferous aether, they are gears and wheels that don't exist according to pretty much direct observations – and they must be finely adjusted to reproduce what quantum mechanics predicts (sometimes) without any adjustments whatsoever. Every new Bohmian gear or wheel you encounter generally breaks the Lorentz symmetry and makes the (wrong) prediction of a Lorentz violation and you will need to fine-tune infinitely many properties of these gears and wheels to restore the Lorentz invariance and other desirable properties of a physical theory (even a simple and fundamental thing such as the linearity of Schrödinger's equation is really totally unexplained in Bohmian mechanics and requires infinitely many adjustments to hold – while it may be derived from logical consistency in quantum mechanics). It's infinitely unlikely that they take the right values "naturally" so the theory is at least infinitely contrived. More likely, there's no way to adjust the gears and wheels to obtain relativistically invariant predictions at all.
I would say that we pretty much directly experimentally observe the fact that the observations obey the Lorentz symmetry; the wave function isn't an observable wave; and lots of other, totally universal and fundamental facts about the symmetries and the interpretation of the basic objects we use in physics. Bohmian mechanics is really trying to deny all these basic principles – it is trying to deny facts that may be pretty much directly extracted from experiments. It is in conflict with the most universal empirical data about the reality collected in the 20th and 21st century. It wants to rape Nature.
A pilot-wave-like theory has to be extracted from a very large class of similar classical theories but infinitely many adjustments have to be made – a very special subclass has to be chosen – for the Bohmian theory to reproduce at least some predictions of quantum mechanics (to produce predictions that are at least approximately local, relativistic, rotationally invariant, unitary, linear etc.). But even if one succeeds and the Bohmian theory does reproduce the quantum predictions, we can't really say that it has made the correct predictions because it was sometimes infinitely fudged or adjusted to produce the predetermined goal. On the other hand, quantum mechanics in general and specific quantum mechanical theories in particular genuinely do predict certain facts, including some very general facts about Nature. If you search for theories within the rigid quantum mechanical framework, while obeying the general postulates, you may make many correct predictions or conclusions pretty much without any additional assumptions.
If you ask any of the hundreds of questions (Is the wave function in principle observable? Are observables with discrete spectra fundamentally less than real than those with continuous spectra? Is there a way to send superluminal signals, at least in principle? And so on) in which proper quantum mechanics differs from Bohmian mechanics, the empirical evidence heavily favors quantum mechanics and Bohmian mechanics can only survive if you adjust tons of parameters to unnatural values (from the viewpoint of Bohmian-like theories) and hope that it's enough (which it's usually not).
In 2013, even more so than in 1927, the pilot-wave theory is as indefensible as a flat Earth theory, geocentrism, the phlogiston, the luminiferous aether, or creationism. In all these cases, people are led to defend such a thing because some irrational dogmas are more important for them than any amount of evidence. That's what we usually refer to as bigotry.
And that's the memo.
snail feedback (24) :
reader Dilaton said...
I don't know why this is, but I just can't prevent my mind from thinking Bohemian Rhapsody whenever I see the words Bohmian mechanics ... :-D
Going to read this later; from scrolling through I see that it obviously contains a lot of nice physics I like.
reader Luboš Motl said...
LOL, I live in Bohemia so I would be distracted all the time if I shared the distractions with you. ;-)
reader Peter F. said...
Freakishly good song, that one. Thanks Dilaton for associating and Lumo for linking! :-)
reader lucretius said...
I agree with almost everything (modulo the epithets). I have always been allergic to Bohm not so much because he was, as you say, a “media savvy commie”, but because of all the “Eastern mystical gibberish” that his ideas are associated with and which made them so cool with the hippie crowd. In fact last year during a trip to Santa Cruz, California we visited an (actually quite nice) “natural food” restaurant, where the walls and the menu were covered with “quantum mystical” deep thoughts that sounded like stuff out of “Wholeness and the Implicate Order”.
Having lived for over 20 years in Japan I am quite familiar with Buddhist thought and it is indeed true that one can find some curious parallels with modern physics but not really more so than one can find in “De Rerum Natura” authored by my namesake. I would say that the similarities are about 90% coincidence and 10% due to some basic structure of human thought and logic, and, of course, that both ancient Indian (and Greek) thought and high energy physics are concerned with the same basic issue: the origin and nature of everything.
But Bohm’s contribution to the confusion does not endear him to my heart or mind.
Neither, of course, does his being a commie - but if you really view this as a serious charge there will be few of his contemporaries among physicists left unscathed. I am not quite sure if this has any relation to his views of physics and metaphysics. I note however that the strongest supporter of Bohm’s view that I have met, Jean Bricmont, is a leftist raving lunatic and an associate of Chomsky (although he has to be given some credit for co-authoring Fashionable Nonsense).
I am glad that Bohm lived in Israel for only 2 years - otherwise I would have felt compelled to like him more.
I have never quite understood why physicists, at least in the early post-war years, were so much more left-wing than mathematicians. I can name quite a few strongly anti-communist mathematicians (above all John von Neumann and Stanislaw Ulam - my favourite of that generation - and many others) but among physicists Teller is perhaps the only one who comes to my mind, and the situation does not seem to be very different today.
Finally, it seems to me that, apart from the mysticism, the main thing that attracts people to ideas like Bohm’s is the psychological difficulty many face with accepting probability as something that is part of physical reality rather than a human device invented to cope with ignorance. This is true even of mathematicians who work in probability. I have collaborated with “pure” probabilists and have got the impression that only a minority believe in randomness as a feature of the “real world” although everyone has heard that most quantum physicists claim otherwise. In fact, I find myself frequently hesitating about this issue, depending on my mood. For a mathematician probability theory is just a branch of measure theory and all its interesting results involve limit theorems - whose relation to the physical world appears dubious. It seems to me that many people still feel queasy about randomness in physical laws (the way Einstein felt) and this suggests that attempts to find non-probabilistic interpretations of quantum phenomena will continue to find supporters.
reader Mephisto said...
I must admit that religious mysticism is something of a hobby for me. I studied quite a lot (Zen Buddhism - Hui Hai, Huang Po, Tibetan Buddhism, Taoism, Christian mysticism - Meister Eckhart, Ramana Maharshi, Jiddu Krishnamurti). Jiddu Krishnamurti was a personal friend of David Bohm and I believe he influenced his views a lot. It is fair to say that Krishnamurti was never interested in physics; he was interested in human consciousness, so the holomovement is the sole creation of Bohm himself.
Although I would describe myself as a mystic, I am against mixing mysticism with physics. I read the book by Fritjof Capra (Tao of Physics) and disliked it. Modern variants are various kinds of Akashic fields and stuff like that
It is impossible and unwise to mix religion with science. For mystics and pantheists like me, science is just a part of divine reality.
reader Justin Glick said...
You seem to think that a particle has a precise position at all times, but we just don't know what it is. QM does not say this.
reader serene deputy said...
I think Lubos just wanted to say that you are still describing the same system, say, one structureless point-like particle, not a field or a ghost (otherwise your starting Hamiltonian and Schrödinger equation would have changed), but whose position becomes fundamentally undetermined to some extent.
reader NumCracker said...
Dear Lubos, excuse this off-topic comment: would there be a way to experimentally test other interpretations, not formulations, of QM and QFT, such as the Many-Worlds one? Has this ever been done? Thanks
reader Dilaton said...
Yeah it would be fun, if Lumo could adapt the whole songtext to this TRF article ... :-D
reader Stephen Paul King said...
QM, IMHO, demands that Nature does not have a preferred observable.
reader Diana Z. said...
Yay, finally a good explanation without too much technical stuff.
I also have an OT request. Can you explain, in this same understandable style, why time goes slower for objects closer to the source of gravity? I looked, and I couldn't find a proper explanation. I hate it when something starts promising and then you see several pages of formulas that will give anyone a headache. With an added "it all stems from relativity, go read it" at the end. Ugh! It almost makes me believe the authors themselves don't have a clue.
reader Luboš Motl said...
Right, lucretius. I remember the story from Feynman's book that said, among other things, that some promoters of paranormal phenomena convinced professor David Bohm that they had supernatural abilities...
reader Luboš Motl said...
Dear Numcracker, there doesn't exist any specific enough formulation of MWI that would give some predictions differing from proper QM (at least in principle) - except for versions that are immediately ruled out even by the simplest experiments.
Just like I said about Bohmian mechanics, the best thing that MWI proponents are hoping for - and it's just a hope, and an unjustified one - is that they reproduce the predictions of QM exactly. And they're extremely far from it. But everyone knows that QM with a proper interpretation gives predictions for pretty much everything and pretty much every physicist knows that they're correct so to "emulate them exactly" is the ultimate dream of any other "interpretation". An unachievable dream.
So MWI isn't a real theory that would be used to really do active physics. It's a philosophical declaration that some physicists sometimes endorse at the level of words even though they don't exactly know what this theory is supposed to say.
reader Luboš Motl said...
One doesn't need to "demand" it. Nature obliged well before quantum mechanics - and humans - were born. The chronology is exactly the opposite than you suggest. Nature created a world with many observables none of which is "preferred" and people constructed theories that were selected by the demand that they agree with Nature.
All modern, post-1925 fundamental theories of Nature are demanded to agree at the level of the atomic details, i.e. respect the general rules of quantum mechanics, which also means that they have to agree with the fact about Nature that it doesn't have preferred observables.
reader lucretius said...
This thread made me try to think of any Western physicist of whom I knew that he was definitely not left-wing and then I remembered the following.
The first time I learned about the violations of Bell’s inequalities and their implications was in 1981 by reading an article by Bernard D’Espagnat in “Encounter”. I just searched the web and found it here:
I have not read it since it first appeared, that is, for over 30 years (it will be interesting to see how current it sounds today) but I always remembered its main point, namely, that the results of experiments showed that the concept of “independent reality” had to be abandoned. Trying to understand this induced me to learn some fairly technical physics, which I had not been interested in before.
I am not sure how many people reading this are old enough to grasp the significance of such an article appearing in “Encounter”. “Encounter” was then Europe’s leading intellectual publication whose raison d'être was anti-communism. In fact, it was founded by the Paris-based Congress for Cultural Freedom, a center-left organization dedicated to opposing communist influence in the West. Its founders were the poet Stephen Spender and the “father” of American neo-conservatism Irving Kristol. Among its leading contributors were major cultural figures such as Raymond Aron, Ignazio Silone, Arthur Koestler and lots of others. There is a good account of all of it on Wikipedia.
(I used to be a subscriber and still have all my old copies).
I don’t know anything about D’Espagnat’s politics but the fact that he chose to publish that article in “Encounter” means that at least he was definitely not a communist or a “fellow traveller”. In 1967 it had been discovered that the CIA had been secretly funding the magazine, which of course made it a taboo for anyone on the left. I suspect that publishing in “Encounter” must have ruined D’Espagnat’s reputation among leftist physicists.
This article was, if I remember correctly, the only article on physics ever to appear in “Encounter” - which shows how much importance was attached to this matter then. It was followed by a polemic between D’Espagnat and the well-known conservative philosopher Antony Flew. Flew was a clever man but, like many laymen, refused to accept that the idea of “non-existence of independent reality” made any sense.
reader Florin Moldoveanu said...
All known QM interpretations are faulty and the only correct interpretation can come from the project to reconstruct QM from natural principles (see [link]). QM also goes beyond the usual C* algebraic formulation into the non-commutative geometry formulation of the SM.
QM and classical mechanics are distinct "fixed points" in a category theory formulation and any attempt to derive one from the other is a fool's errand. Any non-unitary time evolution of QM (e.g. the collapse postulate) is incompatible with QM's framework, but MWI is not the answer. The answer is much more subtle and mathematically sophisticated but for all practical purposes the collapse postulate does the job (using the collapse postulate is like using ict in relativity and ignoring additional mathematical structures - see [links] for the beginning of the answer).
The wavefunction is neither ontological nor epistemological in the usual sense.
reader Dimension10 said...
Great post...
I am quite confused about why Bohmian mechanics is often listed as an "interpretation" of QM, such as here: (That too, for that document, the authors are like Becker, Styer, etc.).
As you say in the post, Bohmian mechanics can't describe all the QM phenomena exactly, so I don't see how it can be listed as an "interpretation" of Quantum Mechanics?
In my opinion, it should be listed as... an "alternative" non-mainstream theory to QM, like how MOND is a non-mainstream alternative to NG.
Also, finally, an off-topic question: How do you get inline MathJax on your post? The MathJax CDN doesn't seem to allow $...$ and ##...## but only $$...$$ which results in display math. Do you use [itex] ... [/itex] or something like that?
reader Luboš Motl said...
Dear Dimension10, exactly, Bohmian mechanics is an alternative theory - or wishful thinking about the existence of an alternative theory, if we want to go beyond the known toy examples - so it's demagogic to sell it as an "interpretation" of QM.
The sequences to write TeX via Mathjax are \(E=mc^2\) and\[
E = mc^2
\] but they don't work in DISQUS. Mathjax allows you to define other sequences that start the displayed and inline math modes, including $...$ and $$...$$. The latter actually does work here as well but I didn't allow the single dollar because I sometimes use the character as a unit of money.
reader Dimension10 said...
I just realised you have the "listen" feature enabled. It pronounces the equations very nicely : ) .
reader Lisa Korf said...
Hi Luboš, I am curious to know how you would reconcile weak measurement trajectories, as in (illustration of measured average trajectories from the article here: ) with your suggestion that "If you could actually 'measure' this permutation that determines how the dots are connected, quantum mechanics would be shown incomplete. " since these, although averaged and still interpolated, are not arbitrary either, and an interference pattern can still be observed. Thanks. LK
reader Luboš Motl said...
Dear Dr Korf, thanks for your question which however shows that you are confused about the status of various claims and patterns here.
The picture you included is pretty much the very same picture that I included in this very blog post and it wasn't measured. The trajectory of a quantum particle can't be measured without affecting it.
The copy of the picture in the Science Magazine is a result of a "weak measurement" but a "weak measurement" isn't a measurement. What is critical is that a weak measurement doesn't measure a property of the measured object/system only. Instead, it determines some function of the properties of the system and numerous conventions and choices that were used to define a particular weak measurement procedure. See e.g.
So the weak measurement isn't unique in any way and the trajectories on the picture were obtained with one particular prescription for a weak measurement protocol. One could get pretty much any other permutation of the dots - any other picture like that with the same density of lines in each region - if we were drawing these pictures using other weak-measurement protocols. There is nothing physical about the randomly drawn trajectories - any choice is just a convention and all conventions are equally physical at the end.
reader lucretius said...
These sorts of things can be produced "ad infinitum" so debunking them one by one is no more productive a way to spend time than trying to work out precisely what is wrong with this:
reader Scott Lahti said...
I thank you for your kind words regarding my 20x expansion, from late 2012, of the Wikipedia entry for Encounter magazine, one of whose seeds and two of whose results appear below:
reader bustemup said... |
5d98d1d50e28e7a7 | Featured Comments: Week of 2013 March 24
There were two posts that got a few comments each, so I will repost most of those.
Review: Linux Mint MATE 201303
Commenter Juan Carlos García Ramírez had this to say: "I still prefer xfce :D so linux mint 14 (Nadia) xfce for me".
Review: Pardus 2013 KDE
Reader Megatotoro shared, "I tested Pardus 2013 as well. In my case, I could test the repositories using Synaptic and could download the localization file for my language. I did notice that even so, some programs were still in Turkish (VLC, Synaptic). I have the Release Candidate installed on my laptop and it is greatly stable."
Commenter Mechatotoro had this bit of support: "@Prashanth, Thank you for your time with Pardus and your review. Good luck going back to school! @Mega, I just came from your blog and recommended you to read this review. I guess you are too fast :-)"
Thanks to all those who commented on those posts. I am back on campus now, and the semester isn't about to wait for me to settle down, so it will be back in full swing any minute now. This means that my post frequency will once again decrease through the rest of the semester. Anyway, if you like what I write, please continue subscribing and commenting!
Review: Pardus 2013 KDE
My spring break is coming to an end (I only have 1.5 more days), so I figured it might be nice to do another review while I still can. Today I'm reviewing Pardus 2013.
Main Screen + KDE Kickoff Menu
Pardus is a distribution developed at least in part by the Turkish military. It used to not be based on any other distribution and used its unique PISI package management system, which featured delta upgrades (meaning that only the differences between package versions would be applied for upgrades, greatly reducing their size). Since then, though, the organization largely responsible for the development of Pardus went through some troubles. One result was the forking of Pardus into PISI Linux to further develop the original alpha release of Pardus 2013. The other result was the rebasing of Pardus on Debian, abandoning PISI in that regard. Now Pardus 2013 is a distribution based on Debian 7 "Wheezy" that uses either KDE 4.8 or GNOME 3 (whatever version is packaged in the latest version of Debian, though I'm not sure what that is).
I reviewed Pardus on a live USB made with MultiSystem. Follow the jump to see what it's like.
Hamiltonian Density and the Stress-Energy Tensor
As an update to a previous post about my adventures in QED-land for 8.06, I emailed my recitation leader about whether my intuition about the meaning of the Fourier components of the electromagnetic potential solving the wave equation (and being quantized to the ladder operators) was correct. He said it basically is correct, although there are a few things that, while I kept them in mind at that time, I still need to keep in mind throughout. The first is that the canonical quantization procedure uses the potential $\vec{A}$ as the coordinate-like quantity and finds the conjugate momentum to this field to be proportional to the electric field $\vec{E}$, with the magnetic field nowhere to be found directly in the Hamiltonian. The second is that there is a different harmonic oscillator for each mode, and the number eigenstates do not represent the energy of a given photon but instead represent the number of photons present with an energy corresponding to that mode. Hence, while coherent states do indeed represent points in the phase space of $(\vec{A}, \vec{E})$, the main point is that the photon number can fluctuate, and while classical behavior is recovered for large numbers $n$ of photons as the fluctuations of the number are $\sqrt{n}$ by Poisson statistics, the interesting physics happens for low $n$ eigenstates or superpositions thereof in which $a$ and $a^{\dagger}$ play the same role as in the usual quantum harmonic oscillator. Furthermore, the third issue is that only a particular mode $\vec{k}$ and position $\vec{x}$ can be considered, because the electromagnetic potential has a value for each of those quantities, so unless those are held constant, the picture of phase space $(\vec{A}, \vec{E})$ becomes infinite-dimensional. Related to this, the fourth and fifth issues are, respectively, that $\vec{A}$ is used as the field and $\vec{E}$ as its conjugate momentum rather than using $\vec{E}$ and $\vec{B}$ because the latter two fields are coupled to each other by the Maxwell equations so they form an overcomplete set of degrees of freedom (or something like that), whereas using $\vec{A}$ as the field and finding its conjugate momentum in conjunction with a particular gauge choice (usually the Coulomb gauge $\nabla \cdot \vec{A} = 0$) yields the correct number of degrees of freedom. These explanations seem convincing enough to me, so I will leave those there for the moment.
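Spelling out the first point in the Gaussian units used here (a one-line computation of mine, not from the email exchange): with $\vec{E} = -\frac{1}{c}\partial_t \vec{A}$ and $\mathcal{L} = \frac{1}{8\pi}\left(\vec{E}^2 - \vec{B}^2\right)$, \[ \Pi_j = \frac{\partial \mathcal{L}}{\partial (\partial_t A_j)} = \frac{1}{4\pi} E_j \cdot \left(-\frac{1}{c}\right) = -\frac{E_j}{4\pi c}, \] so the momentum conjugate to $\vec{A}$ is indeed proportional to $\vec{E}$, while $\vec{B}$ enters only through $\nabla \times \vec{A}$.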
Another major issue that I brought up with him for which he didn't give me a complete answer was the issue that the conjugate momentum to $\vec{A}$ was being found through \[ \Pi_j = \frac{\partial \mathcal{L}}{\partial (\partial_t A_j)} \] given the Lagrangian density $\mathcal{L} = \frac{1}{8\pi} \left(\vec{E}^2 - \vec{B}^2 \right)$ and the field relations $\vec{E} = -\frac{1}{c}\partial_t \vec{A}$ & $\vec{B} = \nabla \times \vec{A}$. This didn't seem manifestly Lorentz-covariant to me, because in the class 8.033 — Relativity, I had learned that the conjugate momentum to the electromagnetic potential $A^{\mu}$ in the above Lagrangian density would be the 2-index tensor \[ \Pi^{\mu \nu} = \frac{\partial \mathcal{L}}{\partial (\partial_{\mu} A_{\nu})} .\] This would make a difference in finding the Hamiltonian density \[ \mathcal{H} = \sum_{j} \Pi_{j} \partial_t A_{j} - \mathcal{L} = \frac{1}{8\pi} \left(\vec{E}^2 + \vec{B}^2 \right). \] I thought that the Hamiltonian density would need to be a Lorentz-invariant scalar just like the Lagrangian density. As it turns out, this is not the case, because the Hamiltonian density represents the energy which explicitly picks out the temporal direction as special, so time derivatives are OK in finding the momentum conjugate to the potential; because the Lagrangian and Hamiltonian densities look so similar, it looks like both could be Lorentz-invariant scalar functions, but deceptively, only the former is so. At this point, I figured that because the Hamiltonian and (not field conjugate, but physical) momentum looked so similar, they could arise from the same covariant vector. However, there is no "natural" 1-index vector with which to multiply the Lagrangian density to get some sort of covariant vector generalization of the Hamiltonian density, though there is a 2-index tensor, and that is the metric. I figured here that the Hamiltonian and momentum for the electromagnetic field could be related to the stress-energy tensor, which gives the energy and momentum densities and fluxes. After a while of searching online for answers, I was quite pleased to find my intuition to be essentially spot-on: indeed the conjugate momentum should be a tensor as given above, the Legendre transformation can then be done in a covariant manner, and it does in fact turn out that the result is just the stress-energy tensor \[ T^{\mu \nu} = \sum_{\xi} \Pi^{\mu \xi} \partial^{\nu} A_{\xi} - \mathcal{L}\eta^{\mu \nu} \] (UPDATE: the index positions have been corrected) for the electromagnetic field. Indeed, the time-time component is exactly the energy/Hamiltonian density $\mathcal{H} = T_{(0, 0)}$, and the Hamiltonian $H = \sum_{\vec{k}} \hbar\omega(\vec{k}) \cdot (\alpha^{\star} (\vec{k}) \alpha(\vec{k}) + \alpha(\vec{k}) \alpha^{\star} (\vec{k})) = \int T_{(0, 0)} d^3 x$. As it turns out, the momentum $\vec{p} = \sum_{\vec{k}} \hbar\vec{k} \cdot (\alpha^{\star} (\vec{k}) \alpha(\vec{k}) + \alpha(\vec{k}) \alpha^{\star} (\vec{k}))$ doesn't look similar just by coincidence: $p_j = \int T_{(0, j)} d^3 x$. The only remaining point of confusion is that it seems like the Hamiltonian and momentum should together form a Lorentz-covariant vector $p_{\mu} = (H, p_j)$, yet if the stress-energy tensor respects Lorentz-covariance, then integrating over the volume element $d^3 x$ won't respect transformations in a Lorentz-covariant manner.
I guess because the individual components of the stress-energy tensor transform under a Lorentz boost and the volume element does as well, then maybe the vector $p_{\mu}$ as given above will respect Lorentz-covariance. (UPDATE: another issue I was having but forgot to write before clicking "Publish" was the fact that only the $T_{(0, \nu)}$ components are being considered. I wonder if there is some natural 1-index Lorentz-covariant vector $b_{\nu}$ to contract with $T_{\mu \nu}$ so that the result is a 1-index vector which in a given frame has a temporal component given by the Hamiltonian density and spatial components given by the momentum density.) Overall, I think it is interesting that this particular hang-up was over a point in classical field theory and special relativity and had nothing to do with the quantization of the fields; in any case, I think I have gotten over the major hang-ups about this and can proceed reading through what I need to read for the 8.06 paper.
Schrödinger and Biot-Savart
There were two things that I would like to post here today. The first is something I have been mulling over for a while. The second is something that I thought about more recently.
Time evolution in nonrelativistic quantum mechanics occurs according to the [time-dependent] Schrödinger equation \[ H|\Psi\rangle = i\hbar \frac{\partial}{\partial t} |\Psi\rangle .\] While this at first may seem intractable, the trick is that typically the Hamiltonian is not time-dependent, so a candidate solution could be $|\Psi\rangle = \phi(t)|E\rangle$. Plugging this back in yields time evolution that occurs through the phase $\phi(t) = e^{-\frac{iEt}{\hbar}}$ applied to energy eigenstates that solve \[ H|E\rangle = E \cdot |E\rangle \] and this equation is often called the "time-independent Schrödinger equation". When I was taking 8.04 — Quantum Physics I, I agreed with my professor who called this a misnomer, in that the Schrödinger equation is supposed to only describe time evolution, so what is being called "time-independent" is more properly just an energy eigenvalue equation. That said, I was thinking that the "time-independent Schrödinger equation" is really just like a Fourier transform of the Schrödinger equation from time to frequency (related to energy by $E = \hbar\omega$), so the former could be an OK nomenclature because it is just a change of basis. However, there are two things to note: the Schrödinger equation is basis-independent, whereas the "time-independent Schrödinger equation" is expressed only in the basis of energy eigenstates, and time is not an observable quantity (i.e. Hermitian operator) but is a parameter, so the change of basis/Fourier transform argument doesn't work in quite the same way that it does for position versus momentum. Hence, I've come to the conclusion that it is better to call the "time-independent Schrödinger equation" the energy eigenvalue equation.
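To spell out the step that the candidate solution compresses: substituting $|\Psi\rangle = \phi(t)|E\rangle$ with a time-independent $H$ gives \[ \phi(t)\, H|E\rangle = i\hbar\, \dot{\phi}(t)\, |E\rangle \;\Longrightarrow\; E\,\phi(t) = i\hbar\, \dot{\phi}(t) \;\Longrightarrow\; \phi(t) = e^{-\frac{iEt}{\hbar}}, \] where the first implication uses the eigenvalue equation $H|E\rangle = E \cdot |E\rangle$.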
Switching gears, I was thinking about how the Biot-Savart law is derived. My AP Physics C teacher told me that the Ampère law is derived from the Biot-Savart law. However, this is patently not true, because the Biot-Savart law only works for charges moving at a constant velocity, whereas the Ampère law is true for magnetic fields created by any currents or any changing electric fields. In 8.022 — Physics II, I did see a derivation of the Biot-Savart law from the Ampère law, showing that the latter is indeed more fundamental than the former, but it involved the magnetic potential and a lot more work. I wanted to see if that derivation still made sense to me, but then I realized that because magnetism essentially springs from the combination of electricity and special relativity and because the Biot-Savart law relies on the approximation of the charges moving at a constant velocity, it should be possible to derive the Biot-Savart law from the Coulomb law and special relativity. Indeed, it is possible. Consider a charge $q$ whose electric field is \[ \vec{E} = \frac{q}{r^2} \vec{e}_r \] in its rest frame. Note that the Coulomb law is exact in the rest frame of a charge. Now consider a frame moving with respect to the charge at a velocity $-\vec{v}$, so that observers in the frame see the charge move at a velocity $\vec{v}$. Considering only the component of the magnetic field perpendicular to the relative motion, noting that there is no magnetic field in the rest frame of the charge, and considering the low-speed limit (which is the range of validity of the Biot-Savart law) $\left|\frac{\vec{v}}{c}\right| \ll 1$ so that $\gamma \approx 1$ yields $\vec{B} \approx -\frac{\vec{v}}{c} \times \vec{E}$. Plugging in $-\vec{v}$ (the specified velocity of the new frame relative to the charge) for $\vec{v}$ (the general expression for the relative velocity) and plugging in the Coulomb expression for $\vec{E}$ yields the Biot-Savart law \[ \vec{B} = \frac{q\vec{v} \times \vec{e}_r}{cr^2}. \] One thing to be emphasized again is that the Coulomb law is exact in the rest frame of the charge, while the Biot-Savart law is always an approximation because a moving charge will have an electric field that deviates from the Coulomb expression; the fact that the Biot-Savart law is a low-speed inertial approximation is why I feel comfortable doing the derivation this way.
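For reference, the field-transformation step I am glossing over is the standard Lorentz transformation of the fields in Gaussian units: for a frame moving with velocity $\vec{u}$ relative to the rest frame, \[ \vec{B}'_{\parallel} = \vec{B}_{\parallel}, \qquad \vec{B}'_{\perp} = \gamma \left( \vec{B} - \frac{\vec{u}}{c} \times \vec{E} \right)_{\perp}, \] so with $\vec{B} = \vec{0}$ in the charge's rest frame and $\gamma \approx 1$ this is exactly the $\vec{B}' \approx -\frac{\vec{u}}{c} \times \vec{E}$ quoted above, and substituting $\vec{u} = -\vec{v}$ reproduces the Biot-Savart expression.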
Nonzero Electromagnetic Fields in a Cavity
The class 8.06 — Quantum Physics III requires a final paper, written essentially like a review article of a certain area of physics that uses quantum mechanics and that is written for the level of 8.06 (and not much higher). At the same time, I have also been looking into other possible UROP projects because while I am quite happy with my photonic crystals UROP and would be pleased to continue with it, that project is the only one I have done at MIT thus far, and I would like to try at least one more thing before I graduate. My advisor suggested that I not do something already done to death like the Feynman path integrals in the 8.06 paper but instead to do something that could act as a springboard in my UROP search. One of the UROP projects I have been investigating has to do with Casimir forces, but I pretty much don't know anything about that, QED, or [more generally] QFT. Given that other students have successfully written 8.06 papers about Casimir forces, I figured this would be the perfect way to teach myself what I might need to know to be able to start on a UROP project in that area. Most helpful thus far has been my recitation leader, who is a graduate student working in the same group that I have been looking into for UROP projects; he has been able to show me some of the basic tools in Casimir physics and point me in the right direction for more information. Finally, note that there will probably be more posts about this in the near future, as I'll be using this to jot down my thoughts and make them more coherent (no pun intended) for future reference.
Anyway, I've been able to read some more papers on the subject, including Casimir's original paper on it as well as Lifshitz's paper going a little further with it. One of the things that confused me in those papers (and in my recitation leader's explanation, which was basically the same thing) was the following. The explanation ends with the notion that quantum electrodynamic fluctuations in a space with a given dielectric constant, say in a vacuum surrounded by two metal plates, will cause those metal plates to attract or repel in a manner dependent on their separation. This depends on the separation being comparable to the wavelength of the electromagnetic field (or something like that), because at much larger distances, the power of normal blackbody radiation (which ironically still requires quantum mechanics to be explained) does not depend on the separation of the two objects, nor does it really depend on their geometries, but only on their temperatures. The explanation of the Casimir effect starts with the notion of an electromagnetic field confined between two infinite perfectly conducting parallel plates, so the fields form standing waves like the wavefunctions of a quantum particle in an infinite square well. This is all fine and dandy...except that this presumes that there is an electromagnetic field. This confused me: why should one assume the existence of an electromagnetic field, and why couldn't it be possible to assume that there really is no field between the plates?
Then I remembered what the deal is with quantization of the electromagnetic field and photon states from 8.05 — Quantum Physics II. The derivation from that class still seems quite fascinating to me, so I'm going to repost it here. You don't need to know QED or QFT, but you do need to be familiar with Dirac notation and at least a little comfortable with the quantization of the simple harmonic oscillator.
Let us first get the classical picture straight. Consider an electromagnetic field inside a cavity of volume $\mathcal{V}$. Let us only consider the lowest-energy mode, which is when $k_x = k_y = 0$ so only $k_z > 0$, stemming from the appropriate application of boundary conditions. The energy density of the system can be given as \[H = \frac{1}{8\pi} \left(\vec{E}^2 + \vec{B}^2 \right)\] and the fields that solve the dynamic Maxwell equations \[\nabla \times \vec{E} = -\frac{1}{c} \frac{\partial \vec{B}}{\partial t}\] \[\nabla \times \vec{B} = \frac{1}{c} \frac{\partial \vec{E}}{\partial t}\] as well as the source-free Maxwell equations \[\nabla \cdot \vec{E} = \nabla \cdot \vec{B} = 0\] can be written as \[\vec{E} = \sqrt{\frac{8\pi}{\mathcal{V}}} \omega Q(t) \sin(kz) \vec{e}_x\] \[\vec{B} = \sqrt{\frac{8\pi}{\mathcal{V}}} P(t) \cos(kz) \vec{e}_y\] where $\vec{k} = k_z \vec{e}_z = k\vec{e}_z$ and $\omega = c|\vec{k}|$. The prefactor comes from normalization, the spatial dependence and direction come from boundary conditions, and the time dependence is somewhat arbitrary. I think this is because the spatial conditions are unaffected by time dependence if they are separable, and the Maxwell equations are linear so if a periodic function like a sinusoid or complex exponential in time satisfies Maxwell time evolution, so does any arbitrary superposition (Fourier series) thereof. That said, I'm not entirely sure about that point. Also note that $P$ and $Q$ are not entirely arbitrary, because they are restricted by the Maxwell equations. Plugging the fields into those equations yields conditions on $P$ and $Q$ given by \[\dot{Q} = P\] \[\dot{P} = -\omega^2 Q\] which looks suspiciously like simple harmonic motion. Indeed, plugging these electromagnetic field components into the Hamiltonian [density] yields \[H = \frac{1}{2} \left(P^2 + \omega^2 Q^2 \right)\] which is the equation for a simple harmonic oscillator with $m = 1$; this is because the electromagnetic field has no mass, so there is no characteristic mass term to stick into the equation. Note that these quantities have a canonical Poisson bracket $\{Q, P\} = 1$, so $Q$ can be identified as a position and $P$ can be identified as a momentum, though they are actually neither of those things but are simply mathematical conveniences to simplify expressions involving the fields; this will become useful shortly.
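As a small numerical check (my own sketch; the numbers are arbitrary), integrating $\dot{Q} = P$, $\dot{P} = -\omega^2 Q$ conserves $H = \frac{1}{2}\left(P^2 + \omega^2 Q^2\right)$, confirming that the single cavity mode really does behave like a unit-mass harmonic oscillator:

```python
# Integrate the mode equations and verify that the oscillator energy
# H = (P^2 + omega^2 Q^2)/2 stays constant along the trajectory.
import numpy as np
from scipy.integrate import solve_ivp

omega = 2.0
sol = solve_ivp(lambda t, y: [y[1], -omega**2 * y[0]],
                t_span=(0.0, 20.0), y0=[1.0, 0.0],
                rtol=1e-10, atol=1e-12, max_step=0.01)
Q, P = sol.y
H = 0.5 * (P**2 + omega**2 * Q**2)
print(f"H drifts by {H.max() - H.min():.2e} over the run (should be ~0)")
```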
Quantizing this turns the canonical Poisson bracket relation into the canonical commutation relation $[Q, P] = i\hbar$. This also implies that $[E_a, B_b] \neq 0$, which is huge: this means that states of the photon cannot have definite values for both the electric and magnetic fields simultaneously, just as a quantum mechanical particle state cannot have both a definite position and momentum. Now the fields themselves are operators that depend on space and time as parameters, while the states are now vectors in a Hilbert space defined for a given mode $\vec{k}$, which has been chosen in this case as $\vec{k} = k\vec{e}_z$ for some allowed value of $k$. The raising and lowering operators $a$ and $a^{\dagger}$ can be defined in the usual way but with the substitutions $m \rightarrow 1$, $x \rightarrow Q$, and $p \rightarrow P$. The Hamiltonian then becomes $H = \hbar\omega \cdot \left(a^{\dagger} a + \frac{1}{2} \right)$, where again $\omega = c|\vec{k}|$ for the given mode $\vec{k}$. This means that eigenstates of the Hamiltonian are the usual $|n\rangle$, where $n$ specifies the number of photons which have mode $\vec{k}$ and therefore frequency $\omega$; this is in contrast to the single particle harmonic oscillator eigenstate $|n\rangle$ which specifies that there is only one particle and it has energy $E_n = \hbar \omega \cdot \left(n + \frac{1}{2} \right)$. This makes sense on two counts: for one, photons are bosons, so multiple photons should be able to occupy the same mode, and for another, each photon carries energy $\hbar\omega$, so adding a photon to a mode should increase the energy of the system by a unit of the energy of that mode, and indeed it does. Also note that these number eigenstates are not eigenstates of either the electric or the magnetic fields, just as normal particle harmonic oscillator eigenstates are not eigenstates of either position or momentum. (As an aside, the reason why lasers are called coherent is because they are composed of light in coherent states of a given mode satisfying $a|\alpha\rangle = \alpha \cdot |\alpha\rangle$ where $\alpha \in \mathbb{C}$. These, as opposed to energy/number eigenstates, are physically realizable.)
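Concretely, with the substitutions just mentioned the standard construction reads \[ a = \frac{\omega Q + iP}{\sqrt{2\hbar\omega}}, \qquad a^{\dagger} = \frac{\omega Q - iP}{\sqrt{2\hbar\omega}}, \qquad [a, a^{\dagger}] = 1, \] and a short computation using $[Q, P] = i\hbar$ recovers $H = \frac{1}{2}\left(P^2 + \omega^2 Q^2\right) = \hbar\omega \cdot \left(a^{\dagger} a + \frac{1}{2}\right)$.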
So what does this have to do with quantum fluctuations in a cavity? Well, if you notice, just as with the usual quantum harmonic oscillator, this Hamiltonian has a ground state energy above the minimum of the potential given by $\frac{1}{2} \hbar\omega$ for a given mode; this corresponds to having no photons in that mode. Hence, even an electrodynamic vacuum has a nonzero ground state energy. Equally important is the fact that while the mean fields $\langle 0|\vec{E}|0\rangle = \langle 0|\vec{B}|0\rangle = \vec{0}$, the field fluctuations $\langle 0|\vec{E}^2|0\rangle \neq 0$ and $\langle 0|\vec{B}^2|0 \rangle \neq 0$; thus, the electromagnetic fields fluctuate with some nonzero variance even in the absence of photons. This relieves the confusion I was having earlier about why any analysis of the Casimir effect assumes the presence of an electromagnetic field in a cavity by way of nonzero fluctuations even when no photons are present. Just to tie up the loose ends, because the Casimir effect is introduced as having the electromagnetic field in a cavity, the allowed modes are standing waves with wavevectors given by $\vec{k} = k_x \vec{e}_x + k_y \vec{e}_y + \frac{\pi n_z}{l} \vec{e}_z$ where $n_z \in \mathbb{Z}$, assuming that the cavity bounds the fields along $\vec{e}_z$ but the other directions are left unspecified. This means that each different value of $\vec{k}$ specifies a different harmonic oscillator, and each of those different harmonic oscillators is in the ground state in the absence of photons. You'll be hearing more about this in the near future, but for now, thinking through this helped me clear up my basic misunderstandings, and I hope anyone else who was having the same misunderstandings feels more comfortable with this now.
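As a postscript, here is that last statement made concrete: using the mode functions above and the ground-state expectation value $\langle 0|Q^2|0\rangle = \frac{\hbar}{2\omega}$ of a unit-mass oscillator, \[ \langle 0|\vec{E}^2|0\rangle = \frac{8\pi}{\mathcal{V}}\, \omega^2\, \langle 0|Q^2|0\rangle\, \sin^2(kz) = \frac{4\pi\hbar\omega}{\mathcal{V}}\, \sin^2(kz) \neq 0, \] so the vacuum field variance is nonzero everywhere except at the nodes.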
A Less-Seen View of Angular Momentum
Many people learn in basic physics classes that angular momentum is a scalar quantity that describes the magnitude and direction of rotation, such that its rate of change is equal to the sum of all torques $\tau = \dot{L}$, akin to Newton's equation of motion $\vec{F} = \dot{\vec{p}}$. People who take more advanced physics classes, such as 8.012 — Physics I, learn that in fact angular momentum and torque are vectors; in the case of fixed-axis rotation, the moment of inertia (the rotational equivalent of mass) is a scalar, so $\vec{L} = I\vec{\omega}$ means that angular momentum points in the same direction as angular velocity. By contrast, in general rigid body motion, the moment of inertia becomes anisotropic and is described by a tensor, so \[\vec{L} = \stackrel{\leftrightarrow}{I} \cdot \vec{\omega}\] implies that angular momentum is no longer parallel to angular velocity, but instead the components are related (using Einstein summation for convenience) by \[L_i = I_{ij} \omega_{j}.\] This becomes important in the analysis of situations like gyroscopes and torque-induced precession, torque-free precession, and nutation.
There is one problem though: there is nothing particularly vector-like about angular momentum. It is constructed as a vector essentially for mathematical convenience. The definition $\vec{L} = \vec{x} \times \vec{p}$ only works in 3 dimensions. Why is this? Let's look at the definition of the cross product components: in 3 dimensions, the permutation tensor has 3 indices, so contracting it with 2 vectors produces a third vector $\vec{c} = \vec{a} \times \vec{b}$ such that $c_i = \varepsilon_{ijk} a_{j} b_{k}$. One trick that is commonly taught to make the cross product easier is to turn the first vector into a matrix and then perform matrix multiplication with the column representation of the second vector to get the column representation of the resulting vector: the details of this rule are hard to remember, but the source is simple, as it is just $a_{ij} = \varepsilon_{ijk} a_{k}$. Now let us see what happens to angular velocity and angular momentum using this definition. Angular velocity was previously defined as a vector through $\vec{v} = \vec{\omega} \times \vec{x}$. We know that $\vec{x}$ and $\vec{v}$ are true vectors, while $\vec{\omega}$ is a pseudovector (defined by its flipping direction when the coordinate system undergoes reflection), so $\vec{\omega}$ is the vector to be made into a tensor. Using the previous definition that in 3 dimensions $\omega_{ij} = \varepsilon_{ijk} \omega_{k}$, then \[v_i = \omega_{ij} x_{j}\] now defines the angular velocity tensor. Similarly, angular momentum is a pseudovector, so it can be made into a tensor through $L_{ij} = \varepsilon_{ijk} L_{k}$. Substituting this into the equation relating angular momenta and angular velocities yields \[L_{ij} = I_{ik} \omega_{kj}\] meaning the matrix representation of the angular momentum tensor is now the matrix multiplication of the matrices representing the moment of inertia and angular velocity tensors.
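Here is a small numerical sketch of that construction (my own illustration; note that the sign convention matters: with $\omega_{ij} = -\varepsilon_{ijk} \omega_{k}$ the matrix product reproduces $\vec{\omega} \times \vec{x}$ exactly, so the definition above differs from this one by an overall sign, which may be part of the coefficient ambiguity mentioned below):

```python
# Build the angular-velocity tensor from the Levi-Civita symbol and check
# that the matrix product reproduces the cross product v = omega x x.
# Sign convention: W_ij = -eps_ijk omega_k gives W @ x == cross(omega, x).
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0   # even and odd permutations

omega = np.array([0.3, -1.2, 0.7])           # arbitrary angular velocity
x = np.array([1.0, 2.0, -0.5])               # arbitrary position

W = -np.einsum('ijk,k->ij', eps, omega)      # antisymmetric tensor omega_ij
print(np.allclose(W @ x, np.cross(omega, x)))  # True
```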
This has another consequence: the meaning of the components of the angular velocity and angular momentum become much more clear. Previously, $L_{j}$ was the generator of rotation in the plane perpendicular to the $j$-axis, and $\omega_{j}$ described the rate of this rotation: for instance, $L_z$ and $\omega_z$ relate to rotation in the $xy$-plane. This is somewhat counterintuitive. On the other hand, the tensor definitions $L_{ij}$ and $\omega_{ij}$ deal with rotations in the $ij$-plane: for example, $L_{xy}$ generates and $\omega_{xy}$ describes rotations in the $xy$-plane, which seem much more intuitive. Also, with this, $L_{ij} = x_{i} p_{j} - p_{i} x_{j}$ becomes a definition (though there may be a numerical coefficient that I am missing, so forgive me).
The nice thing about this formulation of angular velocities and momenta as tensor quantities is that this is generalizable to 4 dimensions, be it 4 spatial dimensions or 3 spatial and 1 temporal dimension (as in relativity). $L_{\mu \nu} = x_{\mu} p_{\nu} - p_{\mu} x_{\nu}$ now defines the generator of rotation in the $\mu\nu$-plane. Similarly, $\omega_{\mu \nu}$ defined in $L_{\mu \nu} = I_{\mu}{}^{\xi} \omega_{\xi \nu}$ describes the rate of rotation in that plane. The reason why these cannot be vectors any more is that the permutation tensor gains an additional index, so contracting it with two vectors yields a tensor with 2 indices; this means that the cross product as laid out in 3 dimensions does not work in any other number of dimensions (except, interestingly enough, for 7, and that is because a 7-dimensional Cartesian vector space can be described through the algebra of octonions, which does have a cross product, just as 2-dimensional vectors can be described by complex numbers and 3-dimensional vectors can be described by quaternions).
This has further nice consequences for special relativity. The Lorentz transformation as given in $x^{\mu'} = \Lambda^{\mu'}_{\; \mu} x^{\mu}$ is a hyperbolic rotation through an angle $\alpha$, equal to the rapidity defined by $\beta = \tanh(\alpha)$. A hyperbolic rotation is basically just a normal rotation through an imaginary angle. This can actually be seen by transforming to coordinates with imaginary time (called a Wick rotation, which may come back up in a post in the near future): $x^{\mu} = (ct, x^{j}) \rightarrow (ict, x^{j})$, allowing the metric to change as $\eta_{\mu \nu} = \mathrm{diag}(-1, 1, 1, 1) \rightarrow \delta_{\mu \nu}$. This changes the rapidity to just be a real angle, and the Lorentz transformation becomes a real rotation. Because only the temporal coordinate has been made imaginary while the spatial coordinates have been left untouched, because the Lorentz transformation is now a real rotation, and because angular momentum generates real rotations, it can be said that the angular momentum components $L_{(0, j)}$ generate Lorentz boosts along the $j$-axis. This fact remains true even if the temporal coordinate is not made imaginary and the metric remains with an opposite sign for the temporal component, though the math of Lorentz boost generation becomes a little more tricky. That said, typically the conservation of angular momentum implies symmetry of the system under rotation, thanks to the Noether theorem. Naïvely, this would imply that conservation of $L_{(0, j)}$ is associated with symmetry under the Lorentz transformation. The truth is a little more complicated (but not by too much), as my advisor and I found from a few Internet searches. Basically, in nonrelativistic mechanics, just as momentum is the generator of spatial translation, position is the generator of (Galilean) momentum boosting: this can be seen in the quantum mechanical representation of momentum in the position basis $\hat{p} = -i\hbar \frac{\partial}{\partial x}$, and the analogous representation of position in the momentum basis $\hat{x} = i\hbar \frac{\partial}{\partial p}$. If the system is invariant under translation, then the momentum is conserved and the system is inertial, whereas if the system is invariant under boosting, then the position is conserved and the system is fixed at a given point in space. In relativity, the analogue to a Galilean momentum boost is exactly the Lorentz transformation, so conservation of $L_{(0, j)}$ corresponds to the system being fixed at its initial spacetime coordinate; this is OK even in relativity because spacetime coordinates are invariant geometric objects, even if their components transform covariantly.
There are a few remaining issues with this analysis. One is that rotations in 3 dimensions are just sums of pairs of rotations in planes, and rotations in 4 dimensions are just sums of pairs of rotations in 3 dimensions. This relates in some way (that I am not really sure of) to symmetries under special orthogonal/unitary transformations in those dimensions. In dimensions higher than 4, things get a lot more hairy, and I'm not sure if any of this continues to hold. Also, one remaining issue is that in special relativity, because the speed of light is fixed and finite, rigid bodies cease to exist except as an approximation, so the description of such dynamics using a moment of inertia tensor generalized to special relativity may not work anymore (though the description of angular momentum as a tensor should still work anyway). Finally, note that the generalization of particle momentum $p_{\mu}$ to a distribution of energy lies in the stress-energy tensor $T_{\mu \nu}$, so the angular momentum of such a distribution becomes a tensor with 3 indices that looks something like (though maybe not exactly like) $L_{\mu \nu \xi} = x_{\mu} T_{\nu \xi} - x_{\nu} T_{\mu \xi}$. In addition, stress-energy tensors with relativistic angular momenta may change the metric itself, so that would need to be accounted for through the Einstein field equations. Anyway, I just wanted to further explore the formulations and generalizations of angular momentum, and I hope this helped in that regard.
Frictions, Subsidies, and Taxes
One of the things I learned in my high school AP Microeconomics class was that a tax causes the supply curve to shift to the left, making the equilibrium quantity decrease and price increase. Consumer and producer surplus both decrease, but while government revenue can account for some of the loss in total welfare, some part of total welfare gets fully lost, and this is what is known as deadweight loss. I didn't have a very good intuition for how this worked at the time (though I was able to get through it on homework, quizzes, tests, and the AP exam). At the same time, though, I thought that a tax should be fully reversible by having the government subsidize producers, and that as this would be the opposite of a tax, supply would shift to the right, the equilibrium quantity would rise and price would fall, and there would be a welfare gain.
Then, when I took 14.01 — Introduction to Microeconomics, we again discussed the situation with a tax. Then we talked about subsidies, but I was confused because the mechanism seemed to be in providing a subsidy to consumers rather than to producers. My intuition at that point was that taxes were creating deadweight loss because producers who wanted to produce and consumers who wanted to consume near the original equilibrium could not do so after the tax, so some transactions were essentially being prohibited. However, I still didn't quite understand why a subsidy would create deadweight loss, because it seemed to me like consumers who wanted to consume more and producers who wanted to produce more than the original equilibrium quantity could now do so, meaning it seemed to me like more transactions were being made possible. That said, I did understand why the government would never subsidize producers: unless the market is perfectly competitive, producers would rather collude and pocket their subsidies while keeping prices high when they can. On the other hand, consumers prefer consuming, so subsidizing consumers is a more surefire way of increasing the equilibrium quantity, even though the price would go up rather than down.
(In 14.04 — Intermediate Microeconomic Theory, we barely touched on deadweight loss in the way that it is covered in more traditional microeconomics classes.) Now, in 14.03 — Microeconomic Theory and Public Policy, I think I better understand the intuition behind deadweight losses stemming from taxes and subsidies, and why a subsidy is not the opposite of a tax. In a tax, the government might try to target some new equilibrium quantity below the original one, so the tax revenue collected, which increases total welfare, is the difference between the willingness of consumers to pay and the willingness of producers to accept at that quantity multiplied by that quantity. Consumer and producer surplus both decrease, and the tax revenue contribution to the increase in total welfare is not enough to offset these two, so there is an overall deadweight loss. A completely isomorphic way of picturing this is by considering the tax falling on consumers so that the demand shifts to the left; in both cases, the equilibrium quantity drops, the government collects its revenue, surpluses drop, so deadweight losses appear.
Meanwhile, for a subsidy, the government might target a higher quantity than the original equilibrium. The spending on that subsidy is the difference between the willingness of producers to accept and the willingness of consumers to pay at that quantity multiplied by that quantity. Consumer and producer surpluses both increase, but together they do not increase enough to offset government spending which is an overall drain on total welfare, so there exists a deadweight loss.
It's interesting that taxes and subsidies are not opposites. The intuition is that for a tax, the revenue is not enough to compensate for the welfare losses of consumers and producers because the new equilibrium quantity is lower. By contrast, for a subsidy, the spending is too high compared to the welfare gains of consumers and producers because the new equilibrium quantity is higher. It looks like it is not possible to spend money given by tax revenue to undo the effects of a tax; instead, the government can only overshoot and overspend. It reminds me very much of how friction works: moving in one direction on a surface with friction causes energy loss, while turning around to move in the other direction on that same surface most certainly does not cause energy gain. Essentially, in this model, the market is frictionless, and the government introduces friction.
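A toy computation makes the asymmetry concrete. The following sketch is my own, with made-up linear curves rather than anything from 14.03: with demand $P = 100 - Q$ and supply $P = 20 + Q$, both a per-unit tax and a per-unit subsidy leave total welfare below the free-market level.

```python
# Linear market: demand P = 100 - Q, supply P = 20 + Q. A positive wedge is
# a per-unit tax (quantity falls); a negative wedge is a per-unit subsidy
# (quantity rises). Total welfare = CS + PS + government revenue (negative
# for a subsidy). Both interventions produce a deadweight loss.
def total_welfare(wedge):
    q = (100 - 20 - wedge) / 2.0      # quantity where WTP - WTA = wedge
    p_demand = 100 - q                # consumers' willingness to pay at q
    p_supply = 20 + q                 # producers' willingness to accept at q
    cs = 0.5 * (100 - p_demand) * q   # consumer surplus triangle
    ps = 0.5 * (p_supply - 20) * q    # producer surplus triangle
    gov = wedge * q                   # tax revenue (>0) or subsidy cost (<0)
    return cs + ps + gov

base = total_welfare(0.0)
for wedge, label in [(10.0, "tax"), (-10.0, "subsidy")]:
    print(f"{label:7s}: deadweight loss = {base - total_welfare(wedge):.1f}")
# Both print 25.0: in this linear model the "friction" is symmetric.
```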
Of course, this essentially contradicts Keynesian models of government taxation and spending and their respective effects. That's why care must be taken when putting microeconomic models in a macroeconomic perspective. This also doesn't consider externalities, less than perfectly competitive market structures, et cetera. Anyway, I hope my musings on this may help give other people some intuition on simple issues of deadweight loss in microeconomic theory.
More on 2012 Fall
Last semester, I was taking 8.05, 8.13, 8.231, and 14.04, along with continuing my UROP. I was busy and stressed basically all the time. Now I think I know why: it turns out that the classes I was taking were much closer to graduate classes in material, yet they came with all the trappings of an undergraduate class, like exams (that were not intentionally easy). Let me explain a little more.
8.05 — Quantum Physics II is where the linear algebra formalism and bra-ket notation of quantum mechanics are introduced and thoroughly investigated. Topics of the class include analysis of wavefunctions in 1-dimensional potentials, vectors in Hilbert spaces, matrix representations of operators, 2-state systems, applications to spin, NMR, continuous Hilbert spaces (e.g. position), the harmonic oscillator, coherent & squeezed states as well as the representation of photon states and the electromagnetic field operators forming a harmonic oscillator, angular momentum, addition of angular momenta, and Clebsch-Gordan coefficients. OK, so considering that most of these things are expected knowledge for the GRE in physics, this is probably more like a standard undergraduate quantum mechanics curriculum rather than a graduate-level curriculum. That said, apparently this perfectly substitutes for the graduate-level quantum theory class, because I know of a lot of people who go right from 8.05 to the graduate relativistic quantum field theory class.
8.13 — Experimental Physics I is generally a standard undergraduate physics laboratory class (although it is considered standard in the sense that its innovations have spread far and wide). The care and detail in performing experiments, analyzing data, making presentations, and writing papers seem like fairly obvious previews of graduate life as an experimental physicist.
8.231 — Physics of Solids I might be the first class on this list that actually could be considered a graduate-level class for undergraduates, also because the TAs for that class have said that it is basically a perfect substitute for the graduate class 8.511 — Theory of Solids I, allowing people who did well in 8.231 to take the graduate class 8.512 — Theory of Solids II immediately after that. 8.231 emphasized that it is not a survey course but intends to go deep into the physics of solids. I would say that it in fact did both: it was both fairly broad and incredibly deep. Even though the only prerequisite is 8.044 — Statistical Physics I with the corequisite being 8.05, 8.231 really requires intimate familiarity with the material of 8.06 — Quantum Physics III, which is what I am taking this semester. 8.06 introduces in fairly simple terms things like the free electron gas (which is also a review from 8.044), the tight-binding model, electrons in an electromagnetic field, the de Haas-van Alphen effect, and the integer quantum Hall effect, and it will probably talk about perturbation theory and the nearly-free electron gas. 8.231 requires a good level of comfort with these topics, as it goes into much more depth with all of these, as well as the basic descriptions of crystals and lattices, reciprocal space and diffraction, intermolecular forces, phonons, band theory, semiconductor theory and doping, a little bit of the fractional quantum Hall effect (which is much more complicated than its integer counterpart), a little bit of topological insulator theory, and a little demonstration on superfluidity and superconductivity.
14.04 — Intermediate Microeconomic Theory is the other class I can confidently say is much closer to a graduate class than an undergraduate class, because I talked to the professor yesterday and he said exactly this. He said that typical undergraduate intermediate microeconomic theory classes are more like 14.03 — Microeconomic Theory and Public Policy (which I am taking now), where the constrained optimization problems are fairly mechanical, and there may be discussion on the side of applications to real-world problems. By contrast, 14.04 last semester focused on the fundamentals of abstract choice theory with a lot more elegant mathematical formalism, the application of those first principles to derive all of consumer and producer choice theories, partial and general equilibrium, risky choice theory, subjective risky choice theory and its connections to Arrow-Debreu securities and general equilibrium, oligopoly and game theory, asymmetric information, and other welfare problems. The professor was saying that by contrast to a typical such class elsewhere, 14.04 here is much closer to a graduate microeconomic theory/decision theory class, and the professor wanted to achieve that level of abstract conceptualization while not going too far for an undergraduate audience.
At this point, I'm hoping that the experiences from last semester pay off this semester. It looks like that has been working so far!
More on My Photonic Crystal UROP
In my post at the end of the summer, I talked a bit about what I actually did in that UROP. Upon rereading it, I have come to realize that it is a little jumbled and technical. I'd like to basically rephrase it in less technical terms, along with providing more context on what I did in the 2011 fall semester. Follow the jump to see more.
We all learn in grade school that electrons are negatively-charged particles that inhabit the space around the nucleus of an atom, that protons are positively-charged and are embedded within the nucleus along with neutrons, which have no charge. I have read a little about electron orbitals and some of the quantum mechanics behind why electrons only occupy certain energy levels. However...
How does the electromagnetic force work in maintaining the positions of the electrons? Since positive and negative charges attract each other, why is it that the electrons don't collide with the protons in the nucleus? Are there ever instances where electrons and protons do collide, and, if so, what occurs?
Things don't collide with other things. Collision is due to Pauli exclusion, which only works with identical fermions. The only things that collide in the classical sense of bumping into each other when they are close are identical fermions, other particles just feel a repulsive/attractive force. – Ron Maimon Dec 18 '11 at 7:04
3 Answers
In fact the electrons (at least those in s-shells) do spend some non-trivial time inside the nucleus.
The reason they spend a lot of time outside the nucleus is essentially quantum mechanical. To use too simple an explanation, their momentum is restricted to a range consistent with being captured (not free to fly away), and as such there is a necessary uncertainty in their position.
An example of physics arising because they spend some time in the nucleus is so-called "electron capture" radioactive decay, in which $$ e + p \to n + \nu $$ occurs within the nucleus. The reason this does not happen in most nuclei is also quantum mechanical and is related to energy levels and Fermi exclusion.
To expand on this picture a little bit, let's appeal to de Broglie and Bohr. Bohr's picture of the electron orbits being restricted to a discrete set of energies $E_n \propto 1/n^2$ and frequencies can be given a reasonably natural explanation in terms of de Broglie's picture of all matter as being composed of waves of frequency $f = E/h$ by requiring that an integer number of waves fit into the circular orbit.
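Spelled out (a standard textbook calculation, in Gaussian units): fitting an integer number of wavelengths around a circular orbit of radius $r$ means \[ n\lambda = 2\pi r, \qquad \lambda = \frac{h}{p} \;\Longrightarrow\; L = pr = n\hbar, \] and combining this quantization with the force balance $\frac{mv^2}{r} = \frac{e^2}{r^2}$ yields \[ E_n = -\frac{m e^4}{2\hbar^2}\,\frac{1}{n^2} \propto -\frac{1}{n^2}. \]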
This leads to a picture of the atom in which all the electrons occupy neat circular orbits far away from the nucleus, and provides one explanation of why the electrons don't just fall into the nucleus under the electrostatic attraction.
But it's not the whole story for a number of reasons; for our purposes the most important one is that Bohr's model predicts a minimum angular momentum for the electrons of $\hbar$ when the experimental value is 0.
Pushing on, we can solve the Schrödinger equation in three dimensions for hydrogen-like atoms:
$$ \left( i\hbar\frac{\partial}{\partial t} - \hat{H} \right) \Psi = 0 $$
for electrons in a $1/r$ Coulomb potential to determine the wavefunction $\Psi$. The wave function is related to the probability $P(\vec{x})$ of finding an electron at a point $\vec{x}$ in space by
$$ P(\vec{x}) = \left| \Psi(\vec{x}) \right|^2 = \Psi^{*}(\vec{x}) \Psi(\vec{x}) $$
where $^{*}$ means the complex conjugate.
The solutions are usually written in the form
$$ \Psi(\vec{x}) = Y^m_l(\theta,\phi) \, \rho^{l} \, L^{2l+1}_{n-l-1}(\rho) \, e^{-\rho/2} \times \text{normalizing factors}, \qquad \rho = \frac{2r}{n a_0} $$
Here the $Y$'s are the spherical harmonics and the $L$'s are the generalized Laguerre polynomials. But we don't care for the details. Suffice it to say that these solutions represent a probability density for the electrons that is smeared out over a wide region around the nucleus. Also of note, for $l=0$ states (also known as s orbitals) there is a non-zero probability at the center, which is to say in the nucleus (this fact arises because these orbitals have zero angular momentum, which you might recall was not a feature of the Bohr atom).
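That last point is easy to check numerically; the sketch below is an illustration of the formulas above, not part of the original answer, and normalization constants are dropped since only the vanishing or non-vanishing of $R_{nl}(0)$ matters.

```python
# Evaluate (unnormalized) hydrogen-like radial wavefunctions near r = 0,
# in units of the Bohr radius: only l = 0 states are non-zero at the nucleus.
import numpy as np
from scipy.special import genlaguerre

def R_nl(r, n, l):
    rho = 2.0 * r / n
    return rho**l * np.exp(-rho / 2.0) * genlaguerre(n - l - 1, 2 * l + 1)(rho)

r = 1e-8  # "at" the nucleus, avoiding 0**0 edge cases
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    print(f"n={n}, l={l}:  |R(r->0)|^2 ~ {R_nl(r, n, l)**2:.3e}")
# l = 0 gives a finite value; l > 0 vanishes like r**(2*l).
```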
This seems to me still not enough for this Question, even though, or particularly because, the questioner is new to Physics SE, but this -1 wasn't me. I can't do justice to this Question, so perhaps it's just that I want a really Useful Answer. – Peter Morgan May 3 '11 at 17:08
@Peter: Agree that this is terse, and only minimally informative. Without knowing more about the questioner's preparation, it was that or a very long and detailed answer. Maybe I'll have time for the latter later on. – dmckee May 3 '11 at 17:13
Rather extensive update. Without some feedback from voithos, I can't see where else to go with this. – dmckee May 3 '11 at 23:29
@dmckee Wow, that explanation is fantastic, thank you. Honestly, I didn't understand about 90% of it, and the other 10% I couldn't put in context. But I'll try to analyze it, and perhaps look back on it later when I hopefully will have a deeper understanding of this subject. – voithos May 5 '11 at 2:06
@voithos: If you don't get most of it, then I've pitched it at the wrong level. – dmckee May 6 '11 at 15:47
This was the basic reason for the invention of quantum mechanics.
Simple mechanics with electromagnetism does not work at atomic dimensions, particularly with the charged electrons. Classical electromagnetism would have the electrons radiate energy away because of the continuous acceleration of a circular path and finally fall into the nucleus.
So the answer is: because in the microscopic world nature follows quantum mechanics equations and not classical mechanics equations. Quantum mechanics equations include electromagnetic fields, and their solutions are stable and allow for the existence of atoms, which is what we experimentally observed to start with.
Sorry, Anna, -1. For nearly the same reasons I gave above for downvoting sb1's Answer, though I suspect I would have left yours alone if sb1's were not there goading me. Indeed, after writing the previous sentence I decided to undo the -1, but it took me more than 5 minutes, so it won't let me until you edit your Answer. – Peter Morgan May 3 '11 at 17:04
I think "nature is quantized" is too strong! – user1355 May 3 '11 at 17:35
An intuitive way is to think of matter waves. If the electron were a point particle, it would have to start from a definite position, say somewhere on its orbit, and all of it would feel the electric attraction to the nucleus and it would start falling just like a stone. It could not find a stable orbit like the moon does since it is charged and whenever it accelerates it gives off electromagnetic radiation, like in a radio antenna transmitting radio waves. But then it loses energy, and cannot maintain its orbit.
The only solution to this is if the electron can somehow stand still. (Or achieve escape velocity, but of course you are asking about the electrons in the atom, so by hypothesis, they have not got enough energy to achieve escape velocity.) But if it stands still and is a point particle, of course it will head straight to the nucleus because of the attraction.
Answer: matter is not made of point particles, but of matter waves. These matter waves obey a wave equation. The point of any wave equation, such as $${\partial^2f\over \partial t^2} = - k {\partial^2f\over \partial x^2}$$ (this, if $k$ is negative, is the wave equation for a stretched and vibrating string) is that the right hand side is the curvature of the wave at the spot $x$, and the equation says the greater the curvature, the greater the rate of change of the wave at that spot (or, in this case, the acceleration, but Schrödinger used a slightly different wave equation than de Broglie or Fock), and hence the kinetic energy, too.
There are certain shapes which just balance everything out: for example, the lowest orbital is a humpy shape with centre at the centre of the nucleus, and thinning out in all directions like a bell curve or a hill. Although all the parts of the smeared-out electron might feel attracted to the nucleus, there is a sort of effect which is purely quantum mechanical, a consequence of this wave equation, which resists that: if all parts approached the nucleus, the hump becomes more acute, a sharper, higher peak, but this increases the left hand side of the equation (greater curvature). This would increase the magnitude of the right hand side, and that greater motion tends to disperse the peak again. So the electron wave, in this particular stationary state, stays where it is because this quantum mechanical resistance exactly balances out the Coulomb force.
This is why Quantum Mechanics is necessary in order to explain the stability of matter, something which cannot be understood if everything were made of mass as particles with definite locations.
Very interesting. On somewhat of a side note, since you mentioned the lowest orbital, what about the higher orbitals? The lowest orbital works to balance out the Coulomb force, but what causes the existence of the other orbitals? I am aware of the Pauli exclusion principle, but I don't have any intuition as to how it works. – voithos Dec 18 '11 at 6:59
Oh, it just gets more complicated even though the basic principle is the same. At that point, words are not precise enough anymore and one uses anna's approach... The Pauli exclusion principle has nothing to do with it. There are analogues to atoms with a proton as a nucleus, and a charged boson in various orbitals. Bosons do not obey the Pauli exclusion principle but they still obey a wave equation (that of Fock). It is the wave equation that is the whole point. – joseph f. johnson Dec 18 '11 at 7:10
Del operator, represented by the nabla symbol ∇
Del, or nabla, is an operator used in mathematics, in particular in vector calculus, as a vector differential operator, usually represented by the nabla symbol $\nabla$. When applied to a function defined on a one-dimensional domain, it denotes its standard derivative as defined in calculus. When applied to a field (a function defined on a multi-dimensional domain), it may denote the gradient (locally steepest slope) of a scalar field (or sometimes of a vector field, as in the Navier–Stokes equations), the divergence of a vector field, or the curl (rotation) of a vector field, depending on the way it is applied.
Strictly speaking, del is not a specific operator, but rather a convenient mathematical notation for those three operators, that makes many equations easier to write and remember. The del symbol can be interpreted as a vector of partial derivative operators, and its three possible meanings—gradient, divergence, and curl—can be formally viewed as the product with a scalar, a dot product, and a cross product, respectively, of the del "operator" with the field. These formal products do not necessarily commute with other operators or products. These three uses, detailed below, are summarized as:
• Gradient: $\operatorname{grad} f = \nabla f$
• Divergence: $\operatorname{div} \vec{v} = \nabla \cdot \vec{v}$
• Curl: $\operatorname{curl} \vec{v} = \nabla \times \vec{v}$
In the Cartesian coordinate system $\mathbb{R}^n$ with coordinates $(x_1, \dots, x_n)$ and standard basis $(\vec{e}_1, \dots, \vec{e}_n)$, del is defined in terms of partial derivative operators as \[ \nabla = \sum_{i=1}^{n} \vec{e}_i \frac{\partial}{\partial x_i} . \]
In the three-dimensional Cartesian coordinate system $\mathbb{R}^3$ with coordinates $(x, y, z)$ and standard basis or unit vectors of axes $(\vec{e}_x, \vec{e}_y, \vec{e}_z)$, del is written as \[ \nabla = \vec{e}_x \frac{\partial}{\partial x} + \vec{e}_y \frac{\partial}{\partial y} + \vec{e}_z \frac{\partial}{\partial z} . \]
Del can also be expressed in other coordinate systems; see, for example, del in cylindrical and spherical coordinates.
Notational uses
Del is used as a shorthand form to simplify many long mathematical expressions. It is most commonly used to simplify expressions for the gradient, divergence, curl, directional derivative, and Laplacian.
The vector derivative of a scalar field $f$ is called the gradient, and it can be represented as: \[ \operatorname{grad} f = \frac{\partial f}{\partial x} \vec{e}_x + \frac{\partial f}{\partial y} \vec{e}_y + \frac{\partial f}{\partial z} \vec{e}_z = \nabla f . \]
It always points in the direction of greatest increase of $f$, and it has a magnitude equal to the maximum rate of increase at the point—just like a standard derivative. In particular, if a hill is defined as a height function over a plane $h(x, y)$, the 2d projection of the gradient at a given location will be a vector in the xy-plane (visualizable as an arrow on a map) pointing along the steepest direction. The magnitude of the gradient is the value of this steepest slope.
In particular, this notation is powerful because the gradient product rule looks very similar to the 1d-derivative case: \[ \nabla(f g) = f \nabla g + g \nabla f . \]
However, the rules for dot products do not turn out to be simple, as illustrated by: \[ \nabla (\vec{u} \cdot \vec{v}) = (\vec{u} \cdot \nabla) \vec{v} + (\vec{v} \cdot \nabla) \vec{u} + \vec{u} \times (\nabla \times \vec{v}) + \vec{v} \times (\nabla \times \vec{u}) . \]
The divergence of a vector field $\vec{v}(x, y, z) = v_x \vec{e}_x + v_y \vec{e}_y + v_z \vec{e}_z$ is a scalar function that can be represented as: \[ \operatorname{div} \vec{v} = \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z} = \nabla \cdot \vec{v} . \]
The divergence is roughly a measure of a vector field's increase in the direction it points; but more accurately, it is a measure of that field's tendency to converge toward or repel from a point.
The power of the del notation is shown by the following product rule: \[ \nabla \cdot (f \vec{v}) = f (\nabla \cdot \vec{v}) + \vec{v} \cdot (\nabla f) . \]
The formula for the vector product is slightly less intuitive, because this product is not commutative: \[ \nabla \cdot (\vec{u} \times \vec{v}) = \vec{v} \cdot (\nabla \times \vec{u}) - \vec{u} \cdot (\nabla \times \vec{v}) . \]
The curl of a vector field $\vec{v}(x, y, z) = v_x \vec{e}_x + v_y \vec{e}_y + v_z \vec{e}_z$ is a vector function that can be represented as: \[ \operatorname{curl} \vec{v} = \left( \frac{\partial v_z}{\partial y} - \frac{\partial v_y}{\partial z} \right) \vec{e}_x + \left( \frac{\partial v_x}{\partial z} - \frac{\partial v_z}{\partial x} \right) \vec{e}_y + \left( \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y} \right) \vec{e}_z = \nabla \times \vec{v} . \]
The curl at a point is proportional to the on-axis torque that a tiny pinwheel would be subjected to if it were centred at that point.
The vector product operation can be visualized as a pseudo-determinant: \[ \nabla \times \vec{v} = \begin{vmatrix} \vec{e}_x & \vec{e}_y & \vec{e}_z \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ v_x & v_y & v_z \end{vmatrix} . \]
Again the power of the notation is shown by the product rule: \[ \nabla \times (f \vec{v}) = (\nabla f) \times \vec{v} + f (\nabla \times \vec{v}) . \]
Unfortunately the rule for the vector product does not turn out to be simple: \[ \nabla \times (\vec{u} \times \vec{v}) = \vec{u} \, (\nabla \cdot \vec{v}) - \vec{v} \, (\nabla \cdot \vec{u}) + (\vec{v} \cdot \nabla) \vec{u} - (\vec{u} \cdot \nabla) \vec{v} . \]
Directional derivative
The directional derivative of a scalar field $f(x, y, z)$ in the direction $\vec{a} = a_x \vec{e}_x + a_y \vec{e}_y + a_z \vec{e}_z$ is defined as: \[ (\vec{a} \cdot \nabla) f = a_x \frac{\partial f}{\partial x} + a_y \frac{\partial f}{\partial y} + a_z \frac{\partial f}{\partial z} . \]
This gives the rate of change of a field $f$ in the direction of $\vec{a}$. In operator notation, the element in parentheses can be considered a single coherent unit; fluid dynamics uses this convention extensively, terming it the convective derivative—the "moving" derivative of the fluid.
Note that $(\vec{a} \cdot \nabla)$ is an operator that takes a scalar to a scalar. It can be extended to operate on a vector by operating separately on each of its components.
Laplacian
The Laplace operator is a scalar operator that can be applied to either vector or scalar fields; for Cartesian coordinate systems it is defined as: \[ \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} = \nabla \cdot \nabla = \nabla^2 \]
and the definition for more general coordinate systems is given in vector Laplacian.
The Laplacian is ubiquitous throughout modern mathematical physics, appearing for example in Laplace's equation, Poisson's equation, the heat equation, the wave equation, and the Schrödinger equation.
Tensor derivative
Del can also be applied to a vector field with the result being a tensor. The tensor derivative of a vector field $\vec{v}$ (in three dimensions) is a 9-term second-rank tensor – that is, a 3×3 matrix – but can be denoted simply as $\nabla \otimes \vec{v}$, where $\otimes$ represents the dyadic product. This quantity is equivalent to the transpose of the Jacobian matrix of the vector field with respect to space. The divergence of the vector field can then be expressed as the trace of this matrix.
For a small displacement $\delta \vec{r}$, the change in the vector field is given by: \[ \delta \vec{v} = (\delta \vec{r} \cdot \nabla) \vec{v} . \]
Product rules
For vector calculus: \[ \nabla (fg) = f \nabla g + g \nabla f \] \[ \nabla (\vec{u} \cdot \vec{v}) = \vec{u} \times (\nabla \times \vec{v}) + \vec{v} \times (\nabla \times \vec{u}) + (\vec{u} \cdot \nabla) \vec{v} + (\vec{v} \cdot \nabla) \vec{u} \] \[ \nabla \cdot (f \vec{v}) = f (\nabla \cdot \vec{v}) + \vec{v} \cdot (\nabla f) \] \[ \nabla \cdot (\vec{u} \times \vec{v}) = \vec{v} \cdot (\nabla \times \vec{u}) - \vec{u} \cdot (\nabla \times \vec{v}) \] \[ \nabla \times (f \vec{v}) = (\nabla f) \times \vec{v} + f (\nabla \times \vec{v}) \] \[ \nabla \times (\vec{u} \times \vec{v}) = \vec{u} (\nabla \cdot \vec{v}) - \vec{v} (\nabla \cdot \vec{u}) + (\vec{v} \cdot \nabla) \vec{u} - (\vec{u} \cdot \nabla) \vec{v} \]
For matrix calculus (for which $\vec{u} \cdot \vec{v}$ can be written $\vec{u}^{\mathsf{T}} \vec{v}$): \[ (\mathbf{A} \nabla)^{\mathsf{T}} = \nabla^{\mathsf{T}} \mathbf{A}^{\mathsf{T}} . \]
Another relation of interest (see e.g. Euler equations) is the following, where $\vec{u} \otimes \vec{v}$ is the outer product tensor: \[ \nabla \cdot (\vec{u} \otimes \vec{v}) = (\nabla \cdot \vec{u}) \vec{v} + (\vec{u} \cdot \nabla) \vec{v} . \]
Second derivatives
DCG chart: A simple chart depicting all rules pertaining to second derivatives. D, C, G, L and CC stand for divergence, curl, gradient, Laplacian and curl of curl, respectively. Arrows indicate existence of second derivatives. Blue circle in the middle represents curl of curl, whereas the other two red circles (dashed) mean that DD and GG do not exist.
When del operates on a scalar or vector, either a scalar or vector is returned. Because of the diversity of vector products (scalar, dot, cross) one application of del already gives rise to three major derivatives: the gradient (scalar product), divergence (dot product), and curl (cross product). Applying these three sorts of derivatives again to each other gives five possible second derivatives, for a scalar field f or a vector field v; the use of the scalar Laplacian and vector Laplacian gives two more: \[ \nabla \cdot (\nabla f), \quad \nabla \times (\nabla f), \quad \nabla (\nabla \cdot \vec{v}), \quad \nabla \cdot (\nabla \times \vec{v}), \quad \nabla \times (\nabla \times \vec{v}), \quad \Delta f, \quad \Delta \vec{v} \]
These are of interest principally because they are not always unique or independent of each other. As long as the functions are well-behaved, two of them are always zero: \[ \nabla \times (\nabla f) = \vec{0}, \qquad \nabla \cdot (\nabla \times \vec{v}) = 0 \]
Two of them are always equal: \[ \nabla \cdot (\nabla f) = \Delta f \]
The 3 remaining vector derivatives are related by the equation: \[ \nabla \times (\nabla \times \vec{v}) = \nabla (\nabla \cdot \vec{v}) - \Delta \vec{v} \]
And one of them can even be expressed with the tensor product, if the functions are well-behaved: \[ \Delta \vec{v} = \nabla \cdot (\nabla \otimes \vec{v}) \]
Most of the above vector properties (except for those that rely explicitly on del's differential properties—for example, the product rule) rely only on symbol rearrangement, and must necessarily hold if the del symbol is replaced by any other vector. This is part of the value to be gained in notationally representing this operator as a vector.
Though one can often replace del with a vector and obtain a vector identity, making those identities mnemonic, the reverse is not necessarily reliable, because del does not commute in general.
A counterexample that relies on del's failure to commute: \[ (\vec{u} \cdot \vec{v}) f = (\vec{v} \cdot \vec{u}) f, \quad \text{whereas in general} \quad (\nabla \cdot \vec{v}) f \neq (\vec{v} \cdot \nabla) f . \]
A counterexample that relies on del's differential properties: \[ (\nabla f) \times (\nabla g) \neq \vec{0} \text{ in general}, \quad \text{whereas} \quad (\vec{a} f) \times (\vec{a} g) = f g \, (\vec{a} \times \vec{a}) = \vec{0} . \]
Central to these distinctions is the fact that del is not simply a vector; it is a vector operator. Whereas a vector is an object with both a magnitude and direction, del has neither a magnitude nor a direction until it operates on a function.
For that reason, identities involving del must be derived with care, using both vector identities and differentiation identities such as the product rule.
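These identities can also be checked symbolically; the following sketch (an illustration using SymPy's vector module, with arbitrary example fields) verifies the two "always zero" second derivatives:

```python
# Symbolic check of curl(grad f) = 0 and div(curl v) = 0 for sample fields.
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
f = N.x**2 * N.y + N.z                            # a scalar field
v = N.x*N.y*N.i + N.z**2*N.j + N.x*N.y*N.z*N.k    # a vector field

print(curl(gradient(f)))       # prints the zero vector
print(divergence(curl(v)))     # prints 0
```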
Atomic theory
Atoms are the basic unit of the matter surrounding us. The name atom comes from the Greek word for "indivisible". At the end of the 19th and the beginning of the 20th century it was discovered that atoms are by no means indivisible. In 1897, while working with cathode rays, Joseph John Thomson found that negatively charged electrons are a component of atoms. He postulated that atoms consist of a uniform sea of positive charge with the negatively charged electrons distributed through it. The charge of the positive sea balances that of the electrons.
Well known is the scattering experiment done by Ernest Rutherford in 1909. He shot alpha particles (doubly positively charged helium nuclei) at a very thin gold foil and found that the positive charge and almost all of the mass of an atom are concentrated in a small region. This led to the development of the planetary model of the atom, in which the electrons move around the atomic nucleus like planets around the sun, shielding the positive charge.
In 1913 the Danish physicist Niels Bohr published his concept of the atom. He suggested that electrons are confined to clearly defined orbits around the atomic nucleus. According to his model, electrons can't occupy intermediate states between those orbits. The electrons "jump" between the orbits by absorbing or emitting a specific amount of energy (a quantum) in the form of electromagnetic radiation (light). With the help of these orbital transitions, the appearance of discrete lines in a spectrum could be explained successfully.
With the help of the concept of wave-particle duality (Louis de Broglie, 1924), the Schrödinger equation (Erwin Schrödinger, 1926) and the uncertainty principle (Werner Heisenberg, 1927), the concept of atomic orbitals was developed. The mathematical function of this atomic model gives the probability of finding any electron of an atom in any specific region around the atom's nucleus.
The last "why?" and "how?" concerning the movement of electrons around the atomic nucleus is still open and an object of research. To understand chemical reactions it is important to know that an atom consist of negative charged electrons moving in quantized orbits around the positive charged atomic nucleus.
Figure 1: Iron atom according to Thomson: the negatively charged electrons (blue) are embedded in a sea of positive charge (red), similar to a plum pudding. That is why this concept of the atom is known as the plum pudding model.
The chemical properties of atoms are given by the number of protons, or equivalently electrons. In an uncharged, neutral atom the number of electrons in the electron shell equals the number of protons in the atomic nucleus. The charge of a proton is 1.602 176 487(40) × 10⁻¹⁹ C, that of an electron −1.602 176 487(40) × 10⁻¹⁹ C. Because of the large difference in mass between protons (1.672 621 637(83) × 10⁻²⁷ kg) and electrons (9.109 382 15(45) × 10⁻³¹ kg), the movement of the atomic nucleus can be neglected. Neutrons are a further component of atomic nuclei. These (outwardly) uncharged particles with a mass of 1.674 927 211(84) × 10⁻²⁷ kg increase the mass of the atomic nucleus. In the majority of cases the number of neutrons equals the number of protons in an atomic nucleus. Atoms whose nuclei differ in the number of neutrons (while the number of protons is constant) are called isotopes. Because neutrons don't affect chemical reactions, they will be neglected in the following.
Electron shell
As mentioned above, electrons move on specific orbits (also called shells) around the atomic nucleus. Each orbit corresponds to an energy level, and the arrangement of the electrons results in a total energy of the electron shell. The electrons are attracted by the positively charged nucleus. Based on the Bohr model, which is used here because of its illustrative character, there is a minimum distance between electrons and nucleus below which they cannot go. Why? Well, there is still no fundamental answer to that question. Besides the attractive force between electron and nucleus there are repulsive forces acting between the electrons. Hence not all electrons are arranged in the shell with the minimum possible distance. According to the "Aufbau principle" (German for building up), each shell consists of one or more subshells. The shells are labeled with characters starting with K, L, M, N ... or by numbers (1, 2, 3, 4, ...). The subshells are labeled by small characters (s, p, d, f, g, h) and each of them can contain only a fixed number of electrons. The maximal number of electrons in a subshell is as follows: s=2, p=6, d=10, f=14, g=18 ... . The number of subshells per shell is given by: K=1, L=2, M=3, N=4 ..., whereby the maximum number of electrons in a shell is given by: K=2, L=8, M=18, N=32 ... . The shells and subshells are not filled in ascending sequence but (mostly) following the given scheme:
Figure 2: Electron configuration of the shells
No rule without exception: slight deviations from this scheme occur for, among others, copper, chromium, silver, platinum, and gold.
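The regular filling scheme can be expressed compactly as the Madelung (n + l) rule: subshells are filled in order of increasing n + l, with ties broken by smaller n. The following Python sketch illustrates this. It is a simplified model of the scheme described above (the exceptions just mentioned are not reproduced, and the names SUBSHELLS, CAPACITY, etc. are our own, not part of any standard library).

```python
# A minimal sketch of the Madelung (n + l) filling rule described above.
# Exceptions such as Cu, Cr, Ag, Pt, and Au are NOT reproduced by this rule.
SUBSHELLS = "spdf"                      # g and h omitted: unused by known elements
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def filling_order(max_n=7):
    # Subshells ordered by n + l, ties broken by smaller n (Madelung rule).
    shells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    return sorted(shells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z):
    # Fill subshells with z electrons in Madelung order.
    config = []
    for n, l in filling_order():
        if z <= 0:
            break
        sub = SUBSHELLS[l]
        electrons = min(z, CAPACITY[sub])
        config.append(f"{n}{sub}{electrons}")
        z -= electrons
    return " ".join(config)

print(configuration(26))  # iron: 1s2 2s2 2p6 3s2 3p6 4s2 3d6
```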
Figure 3: Iron atom in its ground state according to the shell model. The negatively charged electrons (blue) move on circular orbits (shells) around the positively charged nucleus, which is considered stationary.
The electron shell of an atom is described by its filled subshells. The label is given by the number of the shell, the letter of the subshell, and the number of electrons inside the subshell as a superscript. For iron we get the notation: 1s² 2s² 2p⁶ 3s² 3p⁶ 3d⁶ 4s².
The outermost electrons of an atom are called valence electrons.
As mentioned for the Bohr model, electrons move on fixed orbits around the atomic nucleus. The state with the lowest possible energy is called the ground state. The electron configuration scheme given above yields this ground state of the shells. By absorbing energy, electrons can move to higher subshells, as long as the target subshell is not already filled with its maximum number of electrons. If one or more electrons are in a subshell of a higher energy level, the total energy of the electron shell is higher than in the ground state. This state is called an excited state. The electrons inside an atom absorb energy in certain portions (quanta) while jumping between different subshells. From the excited state they fall back to the ground state after a short span of time by emitting electromagnetic radiation (light).

In the chapter about electric charge we learned that a certain amount of energy is released when two oppositely charged particles approach from infinity to a given distance. Vice versa, this energy is needed to move the particles from the given distance back to infinity. In a similar way an electron inside an atom can be moved to a shell of infinite radius, so that the electron is separated from the now positively charged residual atom. This procedure is called ionization, the remaining positively charged atom is called an ion, and the required amount of energy is called the ionization potential. If there is more than one electron in the atom, further electrons can also be separated from the atomic nucleus. The ionization potential of the second, third, or fourth electron is always higher than that of the first one, because the atom is already a singly, doubly, or triply positively charged ion.
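To make the emission step concrete, here is a small worked example for hydrogen, the simplest atom. The Rydberg energy (13.606 eV) and the product hc ≈ 1239.84 eV·nm are standard constants, not values from this text; the transition chosen (n = 3 → n = 2) is the red Balmer line.

```python
# Photon emitted when a hydrogen electron falls from shell n_upper to
# n_lower: its energy is the difference of the two level energies.
RYDBERG_EV = 13.606   # ionization potential of hydrogen from n = 1, in eV
HC_EV_NM = 1239.84    # h * c in eV·nm

def transition(n_upper, n_lower):
    energy = RYDBERG_EV * (1 / n_lower**2 - 1 / n_upper**2)  # photon energy, eV
    return energy, HC_EV_NM / energy                          # (eV, nm)

e, lam = transition(3, 2)
print(f"E = {e:.3f} eV, wavelength = {lam:.0f} nm")  # ~1.889 eV, ~656 nm (red)
```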
57b2b0780ae67154 | Download Basic Electromagnetism and Materials by André Moliton (auth.) PDF
By André Moliton (auth.)
Basic Electromagnetism and Materials is the product of many years of teaching basic and applied electromagnetism. This textbook can be used to teach electromagnetism to a wide range of undergraduate science majors in physics, electrical engineering, or materials science. However, by making lesser demands on mathematical knowledge than competing texts, and by emphasizing the electromagnetic properties of materials and their applications, this textbook is uniquely suited to students of materials science. Many competing texts focus on the study of propagating waves either in the microwave or the optical domain, whereas Basic Electromagnetism and Materials covers the entire electromagnetic domain and the physical response of materials to these waves.
Similar solid-state physics books
Microstructure and Properties of High-Temperature SuperConductors
This book gives a comprehensive presentation of all types of HTSC and includes a broad overview of HTSC computer simulations and modeling. Special attention is devoted to the Bi-Sr-Ca-Cu-O and Y-Ba-Cu-O families, which today are the most promising for applications. The book contains a great number of illustrations and references.
Time-Dependent Density Functional Theory
Time-dependent density functional theory (TDDFT) is based on a set of ideas and theorems quite distinct from those governing ground-state DFT, but emphasizing similar techniques. Today, the use of TDDFT is rapidly growing in many areas of physics, chemistry, and materials science where direct solution of the Schrödinger equation is too demanding.
Basic notions of condensed matter physics
Basic Notions of Condensed Matter Physics is a clear introduction to some of the most significant concepts in the physics of condensed matter. The general principles of many-body physics and perturbation theory are emphasized, providing supportive mathematical structure. This is an expansion and restatement of the second half of Nobel Laureate Philip Anderson's classic Concepts in Solids.
Additional info for Basic Electromagnetism and Materials
Sample text
The quantity $\frac{dq}{dt} = \iint_S \rho \vec v \cdot d\vec S = \iint_S \vec j \cdot d\vec S$ therefore represents the quantity of charge that traverses $S$ per unit time and is the intensity of the electric current across $S$. This last equation shows that the intensity appears as a flux of $\vec j$ through $S$.

2. Comment. The density $\rho$ that is used above corresponds to the algebraic volume density of mobile charge ($\rho_m$) and is different from the total volume density ($\rho_T$), which is generally zero in a conductor. Thus, $\rho_T = \rho_m + \rho_f$, where $\rho_m$ is typically the (mobile) electron volume density and $\rho_f$ is the volume density of ions sitting at fixed nodes in a lattice.
Rotational sense of $\vec B$ lines for (a) a rectilinear current and (b) a twisting current. The vector potential $\vec A$ is carried by the conducting wire ($\vec A \parallel d\vec l$), with $d\vec A = \frac{\mu_0 I\, d\vec l}{4\pi r}$, and the vector $\operatorname{rot}_P \vec A$ turns around the vector $\vec A$. For its part, the vector $\vec B$ (or $\vec H$) exhibits a twisting character.

Chapter 1, Problem 1. Calculations. A vector given by $\vec r = \overrightarrow{MP}$, with $P = (x_1, x_2, x_3)$ and $M = (m_1, m_2, m_3)$, has components $(x_1 - m_1,\; x_2 - m_2,\; x_3 - m_3)$. This vector is such that $r^2 = (x_1 - m_1)^2 + (x_2 - m_2)^2 + (x_3 - m_3)^2 = u(x_1, x_2, x_3)$ if the calculation of the operator is for point $P$, or $u(m_1, m_2, m_3)$ if the calculation is for point $M$, with $r = u^{1/2}$. Verify the following results:
$\operatorname{grad}_P r = -\operatorname{grad}_M r = \vec r / r$,
$\operatorname{grad}_M (1/r) = -\operatorname{grad}_P (1/r) = \vec r / r^3$,
$\operatorname{div}_P \vec r = -\operatorname{div}_M \vec r = 3$ (3D space),
$\Delta(1/r) = 0$, and $\operatorname{rot}_{M \text{ or } P} \vec r = \vec 0$,
$\operatorname{div}(\vec r / r^3) = 0$; what can be said about the flux of the vector $\vec r / r^3$?
Following on, it is possible to state that $I = \iint_S dI = \frac{V_A - V_B}{R}$; therefore $V_A - V_B = RI$.

5. Relaxation of a conductor. On introducing the relation $\vec j = \sigma \vec E$ into the general equation of charge conservation, $\operatorname{div} \vec j + \frac{\partial \rho}{\partial t} = 0$, we find $\sigma \operatorname{div} \vec E + \frac{\partial \rho}{\partial t} = 0$. Using the local form of Gauss's theorem, $\operatorname{div} \vec E = \rho / \varepsilon_0$, gives $\frac{\partial \rho}{\partial t} + \frac{\sigma}{\varepsilon_0} \rho = 0$, and with $\tau = \varepsilon_0 / \sigma$ we obtain $\rho = \rho_0\, e^{-t/\tau}$. In the volume charge density of a conductor there are contributions both from free electron charges and from charges associated with ions.
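As a numerical aside (not from the book itself), the relaxation law above implies that excess charge in a good conductor vanishes essentially instantaneously; the snippet below evaluates τ = ε₀/σ for copper, using a textbook conductivity value.

```python
# Relaxation time tau = epsilon_0 / sigma for the law rho = rho0 * exp(-t/tau).
import math

EPSILON_0 = 8.854e-12   # vacuum permittivity, F/m
SIGMA_CU = 5.96e7       # conductivity of copper, S/m (approximate textbook value)

tau = EPSILON_0 / SIGMA_CU
print(f"tau = {tau:.2e} s")                            # ~1.5e-19 s
print(f"after 5*tau only {math.exp(-5):.4f} of rho0 remains")
```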
349f4cc073a3fba5 | Chemistry Homework Help | Chemistry Assignment Help - Answers
Get Chemistry help at Tutlance. Hire the best Chemistry homework helpers online cheap, easy, and fast. Post your Chemistry homework questions and get answers from qualified Chemistry assignment helpers.
Recently Asked Chemistry Assignment Help, Questions and Answers
We found 78 assignments related to this topic. Please note we do not publish private questions here.
Need help with Chemistry homework? Get Chemistry homework help and answers from the best Chemistry assignment homework helpers. Find Chemistry answers cheap online
Chemistry class I need help with please follow instructions carefully and everything must be perfect
I need someone to do my labs because I don't have the materials. I also don't know how much I should pay for this one, so please let me know ASAP. I'm looking forward to hearing back from you...
The Scientific Method Lab Report
I need this lab report done; it's a virtual, hypothetical lab using the scientific method. Obviously no plagiarism allowed. I need it done within 6 hours. I'm willing to pay anything....
Chemistry bio and organic Exam on proteins
Chemistry bio-organic test is on proteins. I've submitted the information. Exam due August 2; opens 7 p.m. Eastern, closes at 11 p.m. I have the URL and login information. Please let me know ASAP. 50 questions, multiple choice. Tutors are usually done in 10-15 min...
Aleks homework for Chemistry 2......
There are 57 Chemistry 2 lessons that need to be completed, and 9 quizzes. The number of lessons completed is not that important; it's okay if the amount completed is 30-40, but all the quizzes need to be completed. There are about 2 weeks to complete them...
Chemistry Exam on Friday Morning
CHEMISTRY EXAM
Chapter 8
• The student will be able to define and distinguish between the different types of mixtures
• The student will be able to determine if a compound is soluble based on characteristics regarding solutes and solvents
• The student will be able to calculate solution concentrations in various units and medical dosages
• The student will be able to calculate dilution concentrations
Chapter 9
• The student will be able to distinguish between acids and bases
• The student will be able to calculate pH and concentrations of hydronium and hydroxide
• The student will be able to complete acid-base neutralization reactions
• The student will be able to define buffers and how concentrations change upon the addition of base or acid
Chapter 10
• The student will be able to define hydrocarbons and understand the connectivity of alkanes and cycloalkanes
• The student will be able to draw and interpret skeletal structures of organic compounds
• The student will be able to name alkenes and alkanes using the IUPAC naming system
• The student will be able to name and characterize aromatic hydrocarbons
• The student will be able to name branched chain hydrocarbons
Chapter 11 The ...
Solutions & Solubility Problem
UNIT 4: Solutions & Solubility Problem Set
Answer the following questions on a separate sheet of paper.
1. Communication: /7. Consider the following solubility curves. Use the solubility graph to answer the following:
a) Classify a solution that contains (50.0 g / 100.0 g H2O) of ammonium chloride, NH4Cl, at 70 °C. (1 mark)
b) What mass of solute should crystallize from this solution if the solution is cooled to 40 °C? (2 marks)
c) What mass of solute is required to saturate a solution containing (20.0 g / 100.0 g water) of potassium chlorate, KClO3, at 60 °C? (2 marks)
d) At what temperature does a solution containing 50.0 g of potassium nitrate, KNO3, become saturated? (2 marks)
2. K/U: /10
a) What is the molar concentration, c, of a solution that contains 70.0 g of phosphoric acid, H3PO4, in 275 mL? (5 marks)
b) What volume of an 8.00 mol/L concentrated solution of sodium hydroxide, NaOH, is needed to produce 950.0 mL of 0.200 mol/L diluted solution? (5 marks)
3. INQUIRY: /12 Mark...
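For reference, parts 2a and 2b of this problem set reduce to two one-line calculations. The sketch below is our own worked solution, using standard atomic masses, not part of the posted assignment.

```python
# 2a) molar concentration c = n / V;  2b) dilution C1*V1 = C2*V2.
M_H3PO4 = 3 * 1.008 + 30.974 + 4 * 15.999   # g/mol, ~97.99 for H3PO4

moles = 70.0 / M_H3PO4        # mol of phosphoric acid
c = moles / 0.275             # 275 mL = 0.275 L
print(f"c = {c:.2f} mol/L")   # ~2.60 mol/L

v1 = 0.200 * 950.0 / 8.00     # mL of 8.00 mol/L NaOH needed
print(f"V1 = {v1:.2f} mL")    # 23.75 mL
```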
Highschool Chemistry Course to be done by June 2nd, 2 am
Finish this chem course for me. It is a two-part HS chem course, so it shouldn't be too much for the tutors on here. For me it's hard though. I hope y'all will help me in this tough time....
Help with high-school level Chemistry Honors class
A series of 13-14 high school level chemistry honors online exams to be completed with no grade lower than an A. Must have a good understanding of online courses. Must not upload/share files to or with any other sites...
ALEKS Online Class/Due Today! Will pay up to $200 if done on time!
I need the remainder of my ALEKS Chemistry coursework done. I have completed 136/156 topics and need the other twenty done today. I am willing to pay up to $200 for the remainder of the topics to be completed on time!...
2 Data calculations for chemistry
Just need to complete the calculations of those data with calibration graphs (there are 2 data sets for test 1 and only 1 data set for test 6). If there is any missing information, just ask...
You are required to construct and compose a chemistry text book appropriate for a 6th grade audience.
Project Abstract You are required to construct and compose a chemistry text book appropriate for a 6th grade audience. Please understand that students at that level of academic progress are quite capable of handling a few scientific concepts but certainly not at the level that is taught in college or even high school. The idea here is for you to take your newly developed comprehensions of some fairly complex concepts and “translate” them down to a level comprehensible to 11 and 12 year old students. To help you understand what a sixth grader is expected to know in science, please refer to the link to California’s state science literacy standards. Peruse the sections labeled kindergarten to sixth grade to give you an idea of what many states expect their students to know and understand at certain grade levels. Instructions for Content I strongly recommend that you compose this in the format of a narrative, such as a story. You may develop any theme you like as long the theme or characters do not overwhelm and bury the science. I want you to develop chapters that align with several of our major topics/concepts. ...
I need real time messaging for che exam that will last for an hour
I need real-time messaging with an expert for a chemistry exam. I attached a picture of the topics that will be on the exam. The exam will start Thursday 3/4 at 6:30 pm. I need the expert for less than an hour. 20-25 questions, most of them multiple choice...
principales of chemisty exam 20 questions
take online timed exam 1hr 15 min...
Density Lab
The purpose of this lab is to determine the density of ten pennies using different methods to determine volume. The calculated density was compared to a known value....
Chemistry Lab
The purpose of this experiment is to introduce several common pieces of glassware and to illustrate why one piece of glassware may be preferred for a particular measurement using n+mM+uU equation...
chemisty exam
online CHEM 1212 exam, multiple choice...
Chemistry final assignment
Watch a youtube video of students conducting a lab. Write a formal lab report. Answer questions related to the lab. Must get around a 90% mark as this assignment is worth 30% of final grade....
What causes dna contamination and how is it mitigated essay
I need help with chemistry homework questions
Wondering where to get your chem homework done? Welcome to Tutlance – a professional homework marketplace where you can get help with assignments in over 80 disciplines. Our network works with top-tier writers, ensuring that you get the best grade for your chemistry assignment. Stop struggling and ask for help with a chem assignment or chem exam, or hire someone to do the entire chemistry online class for you. Choose the best expert from over 500 chemistry assignment doers and get your homework done in the shortest time possible. Ready to hire an expert?
Get a Quote Now
Tutlance is a reputable marketplace to ask for chemistry homework assistance. Click here to find out the price of your chem project.
What is chemistry?
Chemistry is the study of matter and its changes in different physical and chemical states. Many people confuse chemistry with alchemy, which was concerned with the transmutation of base metals into gold and other magical endeavors.
Chemistry is a very broad science and covers many aspects of matter. Some general areas where chemistry is applied are:
• chemical industry,
• pharmaceuticals,
• food,
• forensics.
How does chemistry affect daily life?
One of the many areas where chemistry greatly affects people's lives is agriculture. Through advances in chemistry, we are able to produce better yields with less effort, which has allowed many places to export food.
Thanks to chemistry and chemical engineering, the world is seeing an increase in healthier foods while improving agricultural productivity.
Chemistry is also used to purify water. One example of this is reverse osmosis, where water is forced through a semipermeable membrane with pores that are too small for impurities to pass through. Chemically treating the water as it passes through will remove contaminants so the purified water on the other side is very pure.
Another use of chemistry in people's daily lives is in medicine and healthcare. As an example, antiseptics can be considered a form of chemical treatment and have been shown to greatly reduce mortality rates in post-operative environments.
Different forms of anesthesia such as nitrous oxide (an inhaled gas) have allowed many more people to experience surgery than would otherwise be possible without putting them at risk.
Even something as simple as antacids used for heartburn can be considered a form of medicine due to their pharmacological effects.
These are some of the reasons why you should study chemistry. Do you have urgent chemistry questions that you need help with? Ask your questions online.
Branches of chemistry?
College chemistry is a broad subject of study that can be divided into three main branches: inorganic chemistry, organic chemistry, and physical chemistry, which are highlighted below.
Inorganic Chemistry: Inorganic chemistry is a branch of chemistry that involves the study and principles behind chemical processes where all or most of the reactants are inorganic compounds, such as water and gases.
Organic Chemistry: Organic chemistry is the branch of chemistry concerned with the chemical study and reactions of organic compounds, which are based on carbon compounds such as hydrocarbons (petroleum), fats, waxes and oils in organisms.
Physical Chemistry: Physical chemistry is the study of chemical processes in systems that involve heat, pressure, or flow.
Physical chemists study things like physical properties (such as particle size), structures and geometries, thermodynamics, phase changes, spectroscopy, and electrochemistry. Much of the work involves figuring out how to model chemical phenomena using mathematics, so other scientists often rely on physical chemists' expertise.
Typical tasks might range from matching up specific chemicals by their spectral patterns to examining detailed models of molecular interactions in order to create new materials or predict substances' reaction rates.
Get Help With Chemistry Homework Assignments and Get a Top Grade.
Chemistry remains one of the most complex subjects, one that most students dread and that even teachers find challenging. Some students, when given the opportunity, waste no time in dropping it. Without a teacher who inspires hope and confidence, you are likely to become discouraged and see the subject as a source of frustration. But, thank goodness, we are here to help.
Do you have a chemistry homework assignment that is giving you sleepless nights? Well, you don't have to look any further. Our chemistry help service makes chemistry assignments look as easy as pie. So, drop your guard and let us assist you.
Check Prices Now
Why Pay For Chemistry Assignment Help Online at Tutlance?
Our experts are the best you will find, as they are knowledgeable in the different types of chemistry, such as Physical, Analytical, Organic, and Biochemistry.
With our chemistry assignment writing service, we offer only the best. Our services are affordable, and we have world-class experts with a sound grasp of various chemistry topics.
Ordering our service guarantees you:
• Researched and analyzed content
• Plagiarism free homework.
• Competitive student-friendly price.
• Referenced and cited coursework.
• Structured and formatted assignments.
• 24/7 live chat support.
Are you still struggling with your coursework? Please, don’t! Place your chemistry hw help order now and let our experts work on it. Post your chemistry homework questions and get homework answers from the experts.
Need help with chemistry homework assignments? We are the experts
As you already know, Chemistry is a scientific discipline dealing with the study of elements and compounds, their composition, structure, properties, behavior, and the changes they undergo during a reaction with other substances.
Our college chemistry assignment writing service has experts who understand the concepts of Chemistry well. We, therefore, help you finish your assignment on time and submit it before the deadline date. Plus, we help you:
• Understand the relationship between various compositions of elements.
• Learn what a Brønsted-Lowry acid is in chemistry.
• Learn the concepts of a periodic table.
• Know the various properties of the elements.
• Identify many types of chemical reactions (with examples).
• Understand the differences between Atomic and Ionic Radius.
• Understand what reactivity means in Chemistry.
In Chemistry writing service, we focus on reducing the workload for students and helping them produce well-researched coursework that is satisfactory to the lecturers. We also enable students to maintain high grades in their academics.
Find an Expert Now
We offer college chem hw assistance in over 80 topics
The following are some of the critical topics in Chemistry our homework help service may come in handy:
• Acids, Bases, and pH – These are concepts that apply to aqueous solutions (solutions in water). pH refers to the hydrogen ion concentration, while acids and bases reflect the relative availability of hydrogen ions or proton/electron donors or acceptors (see the short example after this list).
• Atomic Structure – This study involves understanding the composition of atoms, which is composed of protons, neutrons, and electrons.
• Electrochemistry – Primarily covers redox (oxidation-reduction) reactions. Such reactions may be harnessed to produce electrodes and batteries because they produce ions that facilitate the flow of electricity. Electrochemistry decides whether a reaction will occur and in which direction electrons will flow.
• Units and Measurements – Chemistry being a subject that relies on experimentation, it often involves taking measurements and performing calculations based on those measurements. You must be familiar with the units of measurements and the various ways of converting between them.
• Thermochemistry – It relates to thermodynamics and involves the concept of entropy, enthalpy, Gibbs free energy, standard state conditions, and energy diagrams. It also includes the study of temperature, calorimetry, endothermic reactions, and exothermic reactions.
• Chemical Bonding – Atoms and molecules join together through ionic and covalent bonds.
• Periodic Table – The periodic table is a systematic way of organizing the chemical elements. The elements exhibit periodic properties used to determine their characteristics, including the likelihood that they will form compounds and participate in chemical reactions.
• Solutions and mixtures – An essential part of general chemistry is learning about different types of solutions and mixtures and how to calculate concentrations. It covers topics such as colloids, suspensions, and dilutions.
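As promised in the first bullet, here is a minimal sketch of the pH relation pH = −log₁₀[H₃O⁺]; the 0.001 mol/L example assumes complete dissociation of a strong acid.

```python
import math

def ph(h3o_molar):
    # pH is the negative base-10 logarithm of the hydronium-ion concentration.
    return -math.log10(h3o_molar)

print(ph(1e-3))   # 3.0 -> 0.001 mol/L strong acid
print(ph(1e-7))   # 7.0 -> neutral water at 25 °C
```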
Our chemistry homework problems and solutions service offers admirable outputs to its clients, and the work assignments we produce are usually the most meticulous and authentic. We assure you of quick submission of assignments hence preventing you from having problems with your lecturers. Send us your homework, and with the help of our experts, you’ll strike that elusive A+.
Cheap Chemistry HW Helpers Online
Pay someone to help with your chem school work fast and secure. This is an overview:
1. Submit the instructions,
2. Get quotes from the experts,
3. Hire the best expert.
4. Release the funds when satisfied.
Chem HW Help Resources
Chemistry Homework Answers
Pay for chemistry assignment answers online at affordable prices in a few minutes. Here is an overview of the order process.
• Post chemistry assignment questions online,
• Chat with chemistry assignment helpers,
• Choose the best professional chem expert,
• Make an escrow deposit
• Get your assignment done
• Pay and rate the homework doer
Choose an Expert Now
Best chemistry homework help sites
Tutlance is rated as one of the best chemistry homework help sites online. Ask for professional help from the most renown chemistry help website online.
Chemistry homework help
Cheap chemistry assignment help UK
We are now in the UK. Get cheap chemistry assignment help in the UK, AU, and CA now.
Other fields of chemistry covered include:
Here are other branches of chemistry that you may encounter while studying chemistry in college or graduate school. Remember that we can do chemistry homework on any branch that is listed here or not. Post your question to get a quote to get help from chemistry tutors.
Analytical Chemistry
Analytical chemistry is the branch of analytical science and a subject that usually offers the broadest degree program for students with a general educational background in science.
Analytical chemists study how substances react, ranging from individual molecules to large-scale industrial processes, to gain an understanding of their composition and properties. Some analytical chemists specialize in physical methods like chromatography (separating or mixing compounds) while others focus more on chemical methods like spectroscopy (intensively looking at light emissions).
Biochemistry
Biochemistry is the study of life or living organisms at the molecular level. It focuses on chemistry's four central principles – energy, temperature, pressure, and matter – in relation to living systems. Biochemistry is a strong foundation for many other related fields such as biotechnology, pharmacology, and toxicology.
Biochemistry studies the interactions between organic molecules and their roles in biological processes. Biochemistry overlaps with chemical biology and molecular biology which also study these areas but from the perspective of chemistry or molecular biology respectively.
Electrochemistry
Electrochemistry is the study of chemical reactions involving electricity. For example, electrogenerated chemiluminescence (ECL) is a type of electrochemical reaction that produces light in an analytical instrument.
Nuclear chemistry
Nuclear chemistry deals with selective identification and the isolation or concentration of radioactive substances. Nuclear chemists use many techniques such as alpha measurements and Beta-gamma techniques to perform their work.
Pharmaceutical Chemistry
Pharmaceutical chemistry is a branch of chemical science concerned with the research and development of medicines.
It is among the oldest branches of applied chemistry, dating back to antiquity, but in its modern form it has come to encompass many new sciences as well. The discovery and synthesis of various chemical compounds during empirical searches for therapeutic agents was recorded in written history for several millennia, yet very few pharmacologically active natural products were known prior to 1950, due to the crude analytic capabilities of the 18th through early 20th centuries. In contrast, there now exists a plethora of drugs created solely by chemical modification and synthesis, approaches developed at scale after World War II.
Pharmaceutical chemists are interested in developing topical drugs (a drug applied on a client's skin) or parenteral drugs (a drug delivered through the skin into the bloodstream).
Polymer Chemistry
Polymer chemistry is the study of polymers and their properties. Polymers are large molecules built up from many repeating smaller units, called monomers.
Polymers have endless applications in every aspect of our everyday lives including plastics, clothing, diapers, food wraps and packaging.
Quantum Chemistry
Quantum chemistry is concerned with the calculations of physical reactions that happen at very small scales.
This type of chemistry incorporates quantum mechanics and, specifically, wave functions to predict chemical reaction rates better than classical physics can, by using the Schrödinger equation.
Quantum chemists study topics such as electron tunnelling, spin, atomic orbitals and molecular symmetry energies.
The types of molecules that they work with range from individual molecules to large-scale industrial processes and can be found in any area of science including drug discovery.
Get Help With College Chemistry Topics
As stated above, we can help you with all chemistry topics. Whether you are looking for high school chemistry help, college or undergraduate homework assistance, or graduate chemistry projects – chemistry dissertation and thesis writing services – help is just a click away. Ask your chemistry questions and hire an expert.
Acids, bases and salts
Acids taste sour; examples are hydrochloric acid (HCl), nitric acid (HNO3), and sulfuric acid (H2SO4). Bases have a bitter taste, like the alkalis that are used for washing clothes ("laundry detergent"). Bases like sodium hydroxide (NaOH) or potassium hydroxide (KOH) react with acids to produce a salt plus water plus some heat (exothermic); examples of salts are sodium hydrogen carbonate, sodium hydrogen sulphate, and potassium chloride.
Bases and acids react with each other to produce a salt plus water plus heat, or they combine partially, producing salts. For example, HCl reacts with NaOH to produce NaCl + H2O and heat. Thus, a "salt" is actually a chemical combination of an acid and a base.
Different types of salts have different formulas, depending on the acids and bases which are combined in them. That's why we call them "different kinds of salts"!
Metals and non-metals:
Metals: Metals are dense elements with a shiny luster when polished, like gold, silver, and copper. Some of them (e.g., potassium, calcium) are used in batteries to generate electricity (electrical energy).
Non-metals: These generally have a dull luster and poor conductivity, and many of them, like water or plastics, have very low melting points; carbon is another example of a non-metal.
Elements: These are substances which cannot be broken down into simpler chemical substances; they are called "elementary substances".
There are more than 100 elements; 92 of them occur naturally on earth! The rest are made artificially in laboratories!
Compounds / Mixtures:
A compound is a material in which two or more different kinds of elements are chemically combined. We can separate those elements by chemical methods, e.g., electrolysis; physical methods such as distillation separate mixtures rather than compounds.
Mixtures are basically two or more different substances mixed together.
Ionic compounds / ionic bonds:
Some elements attract electrons to themselves, forming negative ions (anions), and in doing so leave behind positive ions (cations). Compounds in which anions and cations exist in equal numbers are called "ionic compounds", because they consist of small particles carrying either a positive charge (cations) or a negative charge (anions), known as ions! Some examples of such ionic compounds are NaCl, made up of a sodium cation and a chloride anion, and CaF2, made up of a calcium cation and fluoride anions. Ionic compounds have high melting points and are generally hard substances.
In ionic compounds, the cation has given up an electron to the anion: the cation becomes positive and the anion negative, and the attraction between the two ions increases immensely! This is called an "ionic bond". In other words, ionic bonds form when atoms acquire net electrical charges and the two oppositely charged ions attract each other by electrostatic interaction via Coulomb's law.
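A rough numeric illustration of the Coulomb attraction behind an ionic bond (our own example: the 2.36 Å separation is an approximate gas-phase Na–Cl bond length):

```python
K = 8.988e9            # Coulomb constant, N·m^2/C^2
E_CHARGE = 1.602e-19   # elementary charge, C
r = 2.36e-10           # Na-Cl separation, m (approximate)

energy = K * E_CHARGE**2 / r                           # attraction energy, J
print(f"{energy:.2e} J = {energy / E_CHARGE:.1f} eV")  # ~9.8e-19 J, ~6 eV
```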
Covalent compounds / covalent bonds:
Some elements exist as diatomic molecules (two atoms bonded together) like O2, N2, H2, or F2; for example, an oxygen molecule (O2) consists of two oxygen atoms bonded together. These elements are non-metals. Covalent compounds generally have lower melting and boiling points than ionic compounds. In covalent compounds, sharing of electrons occurs between the atoms: the attraction through the shared electron pair is known as "covalent bonding". In other words, covalent bonds form when two non-polar (having no polarity) atoms share electrons so that their equal positive and negative charges balance and cancel out; such a shared pair is drawn as a line bond in structural formulas.
Molecular structure:
According to Dalton's atomic theory, all matter is composed of tiny building blocks called atoms! Thus, according to this theory, all compounds too are made up of these tiny building blocks. The arrangement of the atoms (called the molecular structure) in a compound is very important for its properties.
For example, in the case of water, H2O, the hydrogen atoms attract the oxygen atom and thereby form covalent bonds with it. This attraction decreases as the distance between two atoms increases, because the particles are electrically charged! Molecular compounds like water therefore have lower melting and boiling points than ionic compounds. At room temperature there is enough kinetic energy to make the molecules move faster, and when heated sufficiently water becomes gaseous (in gaseous form, i.e., as steam, water is present as very tiny particles). This means that the state of a molecular substance changes with the change in temperature and pressure!
Phase / States of matter:
A phase is a state of matter with its own distinct properties. In other words, different phases have different structures and properties which are independent of each other. For example, the solid, liquid, and gaseous states are called "phases". There are three possible phases for any substance at any given condition, i.e., solid (s), liquid (l), and gas (g).
• Solid: The temperature at which a substance changes into the gaseous state is called its "boiling point", and the temperature at which a substance changes out of its solid state, i.e., out of crystal form, is called its "melting point". In the case of water, at normal conditions, it exists in three different phases.
• Liquid: When the temperature increases from −100 °C to 0 °C, ice starts melting, i.e., becomes liquid or moves towards the liquid phase, where we can see flowing, water-like material! Liquid water has a higher density than ice (an anomaly of water), and at normal conditions we can see water coming out of taps as a liquid!
• Liquid to gas: When the temperature increases from 0 °C to 100 °C (at normal conditions), the molecules move faster and faster until they start leaving each other, increasing the volume of the substance very rapidly, i.e., it transforms into the gaseous phase known as steam. We can see this only when we boil water, because above this temperature steam is very hot and thus nearly invisible, so even our eyes can't capture it! Gas has a lower density than both liquids and solids, so we don't feel it by touch; but if we lead it through a pipe and let it cool, we can see some amount of water flowing out of the pipe, because steam is the gaseous form of water!
For example, in the case of water, the melting point of ice and the boiling point of liquid water mark the transitions between the three phases. Properties can also differ from phase to phase; solubility, discussed next, is one example.
When a substance dissolves in another substance, we speak of 'solubility', and the other substance is known as the solvent. Depending upon the type of interaction between the two substances, i.e., whether it involves ionic, covalent, or metallic bonding, the solubility of the substance will be different. For example, water is a solvent, and sugar and salt are solutes, which means that they dissolve in water: a solid dissolves in the liquid state to give a solution (a liquid with dissolved substances). Most of the time we keep materials like sugar and salt in our kitchens because these materials have high solubility in water. This property is also very useful for cleaning purposes: if we put many such materials in water, the solution can remove grease from surfaces easily.
A mixture is a combination of two or more pure substances without any chemical reaction between them. For example, air is a mixture of gases (oxygen, nitrogen, etc.) and earth is a mixture of different minerals and rocks. Mixtures can be classified into two types: homogeneous mixtures and heterogeneous mixtures!
• Homogeneous mixture: If a whole mixture has the same properties throughout, it is called a homogeneous mixture, i.e., all component substances are uniformly distributed throughout the composition, which results in uniform behavior; an example is air.
• Heterogeneous mixture: If a whole mixture does not have the same properties throughout, it is called a heterogeneous mixture; the different particles do not mix uniformly, so they behave differently and remain distinguishable substances. For example, sand is a heterogeneous mixture of minerals, metals, quartz particles, etc.
Colligative Properties:
Properties of a solution that depend only on the amount of dissolved substance (its molar quantity), and not on its chemical identity, are called colligative properties. There are three types of colligative properties: freezing point depression, boiling point elevation, and vapor pressure lowering!
• Freezing point depression: If the concentration of solute in a solvent increases, the freezing point decreases. This happens because, when a solute is dissolved in water, it becomes harder for the water molecules to arrange themselves into a crystal, lowering the temperature at which the solution freezes. So, if we add a solute to water, the solution freezes below the freezing point of pure water: freezing point depression.
• Boiling point elevation: If the concentration of a solute in a solvent increases, the boiling point also increases, and it goes on increasing as we keep adding more and more solute! The reason is that the dissolved particles lower the solvent's vapor pressure, so a higher temperature is needed before bubbles of vapor can form out of the liquid phase.
• Vapor pressure lowering: If the concentration of a solute in a solvent increases, the solvent's vapor pressure decreases. Since we can't smell salt or sugar, but we can definitely smell water, such non-volatile solutes evidently reduce water's vapor pressure: the higher the concentration of these substances, the lower the vapor pressure of the solution.
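A short numeric illustration of the first effect just listed (freezing point depression), using the standard relation ΔTf = i·Kf·m; Kf = 1.86 °C·kg/mol for water is a textbook constant, and i = 2 assumes complete dissociation of NaCl:

```python
KF_WATER = 1.86    # cryoscopic constant of water, °C·kg/mol
M_NACL = 58.44     # molar mass of NaCl, g/mol

def freezing_point_depression(grams_solute, kg_water, i=2, molar_mass=M_NACL):
    molality = (grams_solute / molar_mass) / kg_water   # mol solute / kg solvent
    return i * KF_WATER * molality                      # depression, °C

print(freezing_point_depression(58.44, 1.0))  # 3.72 -> solution freezes at -3.72 °C
```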
These are some of the chemistry topics that can't be ignored, and if you want to learn more, feel free to ask a chemistry question and get it answered by a chemistry expert!
Hire real chem assignment doers. Choose from over 500 chemistry homework helpers now.
Online Chemistry Assignment Help
Ask for help with online chemistry assignment projects here. Start by asking for a quote and get help now.
Pay For Chemistry Homework Help Online With Confidence
Enough has been written about the quality of the service that you expect when you pay for chemistry assignment services from our professionals. Choose your expert now and get a good grade.
You can also ask for college online assignment help in many other subjects: ask for professional online homework writing services and many more tutoring services.
Ready to buy chemistry homework help services?
Our chemistry helpers are available 24/7 waiting for you to ask chemistry questions and get answers fast. Most experts are able to solve chemistry problems in less than 15 minutes. Click on the link below to get started with our chemistry assignment writing services.
Get a Quote Now
Tutlance, the most recommended "pay someone to do my chemistry homework for me" service online, connects students looking for help with chemistry assignments with professional assignment doers in 3 simple steps.
Tutlance is a chemistry forum which can help students to post questions related to their projects, homework & get instructions on how to solve complicated problems easily. Click to get started now. |
f4ffda7b72d6b1fb | Spotted Hyena Optimization (SHO) Algorithm: Mimicked from Hunting Behavior of Hyena
1. Introduction
Spotted Hyena Optimizer is a metaheuristic bio-inspired optimization algorithm developed by Dhiman et al. The fundamental concept of this algorithm is to simulate the social behavior of spotted hyenas; its main steps are inspired by their hunting behavior. The SHO algorithm has been tested on real-life constrained engineering design problems with more than four variables [1], and the results reveal that SHO performs better than competitor algorithms on such real-life problems. The motivation behind this work is to propose a novel multi-objective optimization algorithm, called Multi-objective Spotted Hyena Optimizer (MOSHO), which is based on SHO.

Optimization is the technique of determining the decision variables of a function so as to minimize or maximize its value. Most real-world problems have nonlinear constraints and high computational cost, are non-convex and complicated, and have large solution spaces. Solving such problems, incorporating a variety of variables and constraints, is therefore very tedious and complex. Moreover, there are often many local optima, so classical numerical methods do not guarantee the best overall solution. To overcome these problems, metaheuristic optimization algorithms were introduced, which are capable of solving such complex problems over the course of iterations. Single-solution-based algorithms are those in which one solution is randomly generated and improved until the best result is obtained. Population-based algorithms are those in which a set of solutions is randomly generated in a given search space and the solution values are updated during iterations until the best solution is found [3].

One broad class of population-based methods is swarm-intelligence-based algorithms, which build on the collective behaviors of social creatures, as observed in natural colonies, schools, flocks, and herds. Well-known swarm-intelligence techniques are Ant Colony Optimization (ACO), the Bat-inspired Algorithm (BA), Hunting Search (HUS), Particle Swarm Optimization (PSO), and the Bee Collecting Pollen Algorithm (BCPA). The social relations and hunting behavior of spotted hyenas are the main inspiration of SHO, which mimics the cohesive clusters formed between trusted spotted hyenas [2]. The four main steps of SHO are searching, encircling, hunting, and attacking; the hunting behavior is guided by a group of trusted friends (the solutions found so far) towards the best search agent, and the best optimal solutions are saved.
2. Inspiration of Spotted Hyena Optimization Algorithm
Fig 1: Inspiration of SHO Algorithm
In the multi-objective version, an adaptive grid mechanism is used to produce well-distributed Pareto fronts. The grid has to be recalculated, and each individual relocated, whenever an individual inserted into the population lies outside the current bounds of the grid [4]. The adaptive grid is a space formed by hypercubes and is used to distribute solutions in a uniform way.
3. Spotted Hyena Optimizer (SHO) Algorithm
This section presents the basic concepts of SHO, followed by a brief description of the multi-objective version of SHO. Social relationships are dynamic in nature: they are affected by changes in the relationships among the individuals comprising the network and by individuals leaving or joining the population [5]. Animal behavior has been classified into three categories.
• The first category includes environmental factors such as resource availability and competition with other animal species.
• The second category focuses on social preferences based on individual behavior [6].
• The third category, which has received less attention from scientists, includes the social relations of the species itself.
The social relations between animals are the inspiration of this work, which correlates this behavior with the spotted hyena, scientifically named Crocuta crocuta. Hyenas are large, dog-like carnivores. They live in savannas, grasslands, sub-deserts, and forests of both Africa and Asia. They live 10-12 years in the wild and up to 25 years in captivity. There are four known species of hyena: spotted, striped, brown, and aardwolf. These differ in size, behavior, and type of diet, and all of them have a bear-like stance. Spotted hyenas are skillful hunters and the largest of the four hyena species (i.e., larger than the striped and brown hyenas and the aardwolf) [7]. The spotted hyena is also known as the laughing hyena because its sounds are much like a human laugh. Its fur is reddish brown with black spots. Spotted hyenas are complicated, intelligent, and highly social animals with a really dreadful reputation. They have the ability to fight endlessly for territory and food.
In spotted hyenas, female members are dominant and live in their natal clan, whereas male members leave their clan when they become adults and join a new clan. In the new family, they are the lowest-ranking members when it comes to getting their share of a meal. A male member who has joined a clan always stays with the same members (friends) for a long time, whereas a female is always assured of a stable place. An interesting fact about spotted hyenas is that they produce sounds to communicate with each other while searching for food sources. According to Ilany et al., spotted hyenas usually rely on a network of trusted friends that can have more than 100 members [8]. They usually tie up with another spotted hyena that is a friend of a friend, or linked in some way through kinship, rather than with any unknown spotted hyena. Spotted hyenas are social animals that communicate with each other through specialized calls, postures, and signals. They use multiple sensory channels to recognize their kin and other individuals; they can also recognize third-party kin and rank the relationships between their clan mates during social decision making. The spotted hyena tracks prey by sight, hearing, and smell. Cohesive clusters enable efficient cooperation between spotted hyenas. In this work, the hunting technique and the social relations of spotted hyenas are mathematically modeled to design the multi-objective SHO algorithm [9]. Fig. 2 shows that the next position of a search agent lies between its current position and the position of the prey, which helps it move towards the estimated position of the prey.
Fig 2: Spotted Hyena Optimization Algorithm
3.1. Steps for SHO Algorithm
• Encircling behavior
• Hunting
• Attacking behavior
• Searching behavior
3.1.1. Encircling Behavior
The target prey or objective is considered the best solution, and the other search agents update their positions with respect to the best solution obtained. Spotted hyenas can sense where their prey is and surround it [10]. Because the search space is not known a priori, we take the current best candidate solution to be the spotted hyena closest to the target prey. The locations of the other search agents are updated after the best search solution is defined.
3.1.2. Hunting
The next step of the SHO algorithm is the hunting strategy, which forms a cluster of optimal solutions around the best search agent and updates the positions of the other search agents. In order to mathematically imitate the hunting behavior of spotted hyenas, we suppose that the best search agent, whichever it is, knows the location of the prey; the other search agents move towards the best search agent, constantly updating their positions until the best solutions are found, and then the best solution is saved [11].
3.1.3. Attacking Behavior
On the basis of the position of the best search agent, the spotted hyenas attack the prey, constantly updating their positions as they converge on the best solution [12].
3.1.4. Searching Behavior
The searching mechanism describes the exploration capability of an algorithm. The proposed SHO algorithm ensures this capability by using random values of the coefficient vector $\vec{E}$ that are greater than 1 or less than −1, which force the search agents to move away from the current best solution. The vector $\vec{B}$ is also responsible for the more randomized behavior of SHO and helps avoid local optima [13].
3.2. Flow Chart of SHO Algorithm
Fig 3: Flowchart of SHO Algorithm
4. Numerical Expressions of SHO Algorithm
The mathematical model of this behavior is represented by the following equations, reconstructed here in the standard form found in the SHO literature [2], [14]. Encircling the prey:

$\vec{D}_h = \lvert \vec{B} \cdot \vec{P}_p(x) - \vec{P}(x) \rvert$, $\quad \vec{P}(x+1) = \vec{P}_p(x) - \vec{E} \cdot \vec{D}_h$,

where $x$ is the current iteration, $\vec{P}_p$ is the position vector of the prey, $\vec{P}$ is the position vector of a spotted hyena, and $\vec{D}_h$ is the distance between them. The coefficient vectors are $\vec{B} = 2\,\vec{rd}_1$ and $\vec{E} = 2\vec{h} \cdot \vec{rd}_2 - \vec{h}$, where $\vec{rd}_1$, $\vec{rd}_2$ are random vectors in $[0,1]$ and $\vec{h}$ decreases linearly from 5 to 0 over the iterations. Hunting and attacking:

$\vec{D}_h = \lvert \vec{B} \cdot \vec{P}_h - \vec{P}_k \rvert$, $\quad \vec{P}_k = \vec{P}_h - \vec{E} \cdot \vec{D}_h$, $\quad \vec{C}_h = \vec{P}_k + \vec{P}_{k+1} + \dots + \vec{P}_{k+N}$, $\quad \vec{P}(x+1) = \vec{C}_h / N$,

where $\vec{P}_h$ is the position of the first best spotted hyena, $\vec{P}_k$ are the positions of the other spotted hyenas in the cluster, $N$ is the number of spotted hyenas in the cluster, and $\vec{C}_h$ is the cluster of $N$ optimal solutions.
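The following compact Python sketch implements these four steps under the equations above. It is an illustrative simplification, not the authors' reference code: the sphere benchmark, the population and iteration sizes, and the approximation of the "cluster of trusted hyenas" by the current best few agents are all our own choices.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))   # simple benchmark to minimize

def sho(objective, dim=10, n_agents=30, max_iter=200,
        lb=-10.0, ub=10.0, cluster_size=5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, size=(n_agents, dim))      # spotted hyenas
    fit = np.array([objective(p) for p in pos])
    prey = pos[int(np.argmin(fit))].copy()               # best search agent

    for it in range(max_iter):
        h = 5.0 * (1.0 - it / max_iter)                  # h decreases 5 -> 0
        cluster = pos[np.argsort(fit)[:cluster_size]]    # trusted-friends cluster
        for i in range(n_agents):
            B = 2.0 * rng.random(dim)                    # B = 2 * rd1
            E = 2.0 * h * rng.random(dim) - h            # E = 2h * rd2 - h
            if np.all(np.abs(E) < 1.0):
                # attack: move every cluster member toward the prey, average
                moved = prey - E * np.abs(B * prey - cluster)
                pos[i] = moved.mean(axis=0)              # P(x+1) = C_h / N
            else:
                # |E| >= 1: searching, i.e. explore around a random hyena
                mate = pos[rng.integers(n_agents)]
                pos[i] = mate - E * np.abs(B * mate - pos[i])
            pos[i] = np.clip(pos[i], lb, ub)
        fit = np.array([objective(p) for p in pos])
        if fit.min() < objective(prey):
            prey = pos[int(np.argmin(fit))].copy()
    return prey, objective(prey)

best, value = sho(sphere)
print(value)   # approaches 0 on the sphere function
```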
5. Applications of SHO Algorithm
• Feature selection [15]
• Economic Dispatch Problem
• Fusion reaction
• Power generation [16]
• Power Flow controller
• Machine Learning [17]
Fig 4: Applications of SHO Algorithm
6. Advantages of SHO Algorithm
• It solves the economic load power dispatch problem and converges toward the optimum with low computational effort.
• To evaluate the effectiveness of MOSHEPO, the proposed algorithm has been tested on various benchmark test systems and its performance compared with other well-known approaches [18].
• When the basic SHO is compared to other acclaimed state-of-the-art optimization algorithms, the results show that the proposed algorithm can provide better results [19].
• The LI-SHO method for image matching combines the advantages of SHO and the lateral inhibition mechanism [20].
• The effects of convergence, scalability, and control parameters have been investigated. The statistical significance of the proposed approach has also been examined through ANOVA test.
• There is a lot of interest in developing metaheuristic algorithms that are computationally inexpensive, flexible, and simple by nature [21].
[1] Kaur, A., Kaur, S. and Dhiman, G. (2018). A quantum method for dynamic nonlinear programming technique using Schrödinger equation and Monte Carlo approach. Modern Physics Letters B, 32(30), p.1850374.
[2] Dhiman, G. and Kumar, V. (2018). Multi-objective spotted hyena optimizer: A Multi-objective optimization algorithm for engineering problems. Knowledge-Based Systems, 150, pp.175-197.
[3] Najmi, A., Rashidi, T., Vaughan, J. and Miller, E. (2019). Calibration of large-scale transport planning models: a structured approach. Transportation.
[4] Dhiman, G. and Kaur, A. (2018). Optimizing the Design of Airfoil and Optical Buffer Problems Using Spotted Hyena Optimizer. Designs, 2(3), p.28.
[5] ED-SHO: A framework for solving nonlinear economic load power dispatch problem using spotted hyena optimizer. (2019). Modern Physics Letters A. [Online; accessed 5 Sep. 2019].
[7] Dhiman, G. (2019). MOSHEPO: a hybrid multi-objective approach to solve economic load dispatch and micro grid problems. Applied Intelligence.
[8] How Effective is Spotted Hyena Optimizer for Training Multilayer Perceptrons. (2019). International Journal of Recent Technology and Engineering, 8(2), pp.4915-4927.
[9] Luo, Q., Li, J. and Zhou, Y. (2019). Spotted hyena optimizer with lateral inhibition for image matching. Multimedia Tools and Applications.
[10] Kumar, V. and Kaur, A. (2019). Binary spotted hyena optimizer and its application to feature selection. Journal of Ambient Intelligence and Humanized Computing.
[11] Jia, H., Li, J., Song, W., Peng, X., Lang, C. and Li, Y. (2019). Spotted Hyena Optimization Algorithm With Simulated Annealing for Feature Selection. IEEE Access, 7, pp.71943-71962.
[12] Sahu, R., Sekhar, G. and Priyadarshani, S. (2019). Differential evolution algorithm tuned tilt integral derivative controller with filter controller for automatic generation control. Evolutionary Intelligence.
[13] Kaur, A., Jain, S. and Goel, S. (2019). SP-J48: a novel optimization and machine-learning-based approach for solving complex problems: special application in software engineering for detecting code smells. Neural Computing and Applications.
[14] Zamani, H., Nadimi-Shahraki, M. and Gandomi, A. (2019). CCSA: Conscious Neighborhood-based Crow Search Algorithm for Solving Global Optimization Problems. Applied Soft Computing, p.105583.
[15] Dhyani, A., Panda, M. and Jha, B. (2018). Moth-Flame Optimization-Based Fuzzy-PID Controller for Optimal Control of Active Magnetic Bearing System. Iranian Journal of Science and Technology, Transactions of Electrical Engineering, 42(4), pp.451-463.
[16] Ismaeel, A., Elshaarawy, I., Houssein, E., Ismail, F. and Hassanien, A. (2019). Enhanced Elephant Herding Optimization for Global Optimization. IEEE Access, 7, pp.34738-34752.
[17] Deb, S., Gao, X., Tammi, K., Kalita, K. and Mahanta, P. (2019). Recent Studies on Chicken Swarm Optimization algorithm: a review (2014–2018). Artificial Intelligence Review.
[18] Dhal, K., Ray, S., Das, A. and Das, S. (2018). A Survey on Nature-Inspired Optimization Algorithms and Their Application in Image Enhancement Domain. Archives of Computational Methods in Engineering.
[19] Ugur, L., Kanit, R., Erdal, H., Namli, E., Erdal, H., Baykan, U. and Erdal, M. (2018). Enhanced Predictive Models for Construction Costs: A Case Study of Turkish Mass Housing Sector. Computational Economics, 53(4), pp.1403-1419.
[20] Dhal, K., Das, A., Ray, S., Gálvez, J. and Das, S. (2019). Nature-Inspired Optimization Algorithms and Their Application in Multi-Thresholding Image Segmentation. Archives of Computational Methods in Engineering.
[21] Yalcin, Y. and Pekcan, O. (2018). Nuclear Fission–Nuclear Fusion algorithm for global optimization: a modified Big Bang–Big Crunch algorithm. Neural Computing and Applications.
bf6392493b993715 | Quantum trajectories: memory and continuous observation
Alberto Barchielli, Politecnico di Milano, Dipartimento di Matematica, Piazza Leonardo da Vinci 32, I-20133 Milano, Italy. Clément Pellegrini, Laboratoire de Statistique et Probabilités, Université Paul Sabatier, 118 Route de Narbonne, 31062 Toulouse Cedex 4, France. Francesco Petruccione, University of KwaZulu-Natal, School of Physics and National Institute for Theoretical Physics, Private Bag X54001, Durban 4000, South Africa.
February 17, 2021
Starting from a generalization of the quantum trajectory theory [based on the stochastic Schrödinger equation (SSE)], non-Markovian models of quantum dynamics are derived. In order to describe non-Markovian effects, the approach used in this article is based on the introduction of random coefficients in the usual linear SSE. A major interest is that this allows a consistent theory of quantum measurement in continuous time to be developed for these non-Markovian quantum trajectory models. In this context, the notions of ‘instrument’, ‘a priori’, and ‘a posteriori’ states can be introduced. The key point is that by starting from a stochastic equation on the Hilbert space of the system, we are able to respect the complete positivity of the mean dynamics for the statistical operator and the requirements of the axioms of quantum measurement theory. The flexibility of the theory is next illustrated by a concrete physical model of a noisy oscillator where non-Markovian effects come from the random environment, colored noises, randomness in the stimulating light, and delay effects. The statistics of the emitted photons and the heterodyne and homodyne spectra are studied, and we show how these quantities are sensitive to the non-Markovian features of the system dynamics, so that, in principle, the observation and analysis of the fluorescent light could reveal the presence of non-Markovian effects and allow for a measure of the spectra of the noises affecting the system dynamics.
42.50.Lc, 03.65.Ta, 03.65.Yz
Also at: Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Milano, and Istituto Nazionale di Alta Matematica (INDAM-GNAMPA).
I Introduction
A first aim of the theory of open quantum systems is the description of the time evolution of a quantum system $S$ (the open system) interacting with an environment $E$ [BreP02]. More precisely, one focuses on the reduced evolution of $S$ after tracing out the degrees of freedom of $E$. The resulting evolution is then usually described in terms of generalized master equations for the reduced density matrix $\rho(t)$.
A particularly simple and useful way to describe an open system is provided by the Markovian approximation [Carm02]. Essentially, this approach is based on the absence of memory effects in the environment. In this situation, the master equations are linear first-order differential equations with a possibly time-dependent generator. The generator takes a particular form, the well-known Lindblad form, that guarantees the complete positivity of the dynamics, as required by quantum mechanics.
Unfortunately, this approximation is no longer valid when the memory effects of the environment cannot be neglected. Several physical situations involve long-memory-time effects and lead to a non-Markovian behavior: strong coupling, correlation, and entanglement in the initial system-environment state SBPV , a system at low temperature Weiss , and a structured environment Lambro . In this context, the master equations take different forms according to the physical situation, e.g., integro-differential equations BreP02 ; Smirne , time-convolutionless equations BreP02 , and Lindblad rate equations Bu1 . The common point is that they are not in the Lindblad form, which is characteristic of Markovian evolutions.
In both situations (Markovian and non-Markovian) a useful approach to describe concrete physical evolutions is provided by the theory of the stochastic Schrödinger equation (SSE) Bel88 ; Dio88a ; BarB91 ; Carm . A SSE is a nonlinear stochastic differential equation for a wave-function process $\psi(t)$. The link with the traditional master equation is given by the average property $\rho(t) = \mathbb{E}[|\psi(t)\rangle\langle\psi(t)|]$, where $\mathbb{E}$ denotes the average over the realizations of $\psi(t)$. To find the SSE providing a given master equation by averaging is called unraveling. The idea of unraveling has been a real breakthrough for simulating master equations; it is at the root of the Monte-Carlo wave function method BreP02 ; MCwfm . Indeed, for huge systems, the description of $\psi(t)$ requires many fewer parameters than the ones needed for $\rho(t)$.
However, the construction of adequate SSEs has been essential also for a second aspect of the theory of open quantum systems: the description of the monitoring of $S$ in continuous time. In special situations, the SSE can be interpreted in terms of quantum measurements. More precisely, in these cases, the solution $\psi(t)$ is called a quantum trajectory and describes the evolution of an open system undergoing indirect continuous measurement (continuous monitoring) GarZ04 ; WisM10 ; BarG09 ; BarB91 . In particular the noises involved in the SSE, describing jump or diffusion evolutions, can be directly connected with the outputs of measurement apparatuses. Such an interpretation is crucial in the understanding of real quantum optics experiments Carm08 ; BarG09 ; BarB91 ; Bar90 ; Carm such as direct photo-detection, spectral photo-detection, homodyning, and heterodyning. Quantum trajectories are also a cornerstone of modern technologies such as feedback control WisM10 ; Bel88 ; WWM . As a consequence an active line of research consists in finding SSEs that can be physically interpreted in terms of continuous monitoring of the system.
In the Markovian case, this link is clearly established for almost all situations GarZ04 ; WisM10 ; BarG09 . Starting from a master equation in Lindblad form, it is known how to construct an appropriate unraveling in terms of a SSE. The (nonlinear) SSE is a stochastic equation for a random normalized vector $\psi(t)$. It is always possible to construct a linear SSE, driven by Poisson and Wiener noises, for a non-normalized vector $\phi(t)$, such that $\psi(t) = \phi(t)/\|\phi(t)\|$. Moreover, the linear and nonlinear versions of the SSE are related by a change of probability measure and it is this link that allows for a measurement interpretation consistent with the postulates of quantum mechanics BarG09 ; BarB91 . In mathematical terms the change of measure is a Girsanov transformation with probability density $\|\phi(t)\|^2$; the key point that allows this transformation is the fact that $\|\phi(t)\|^2$ turns out to be a martingale BarG09 . Moreover, these stochastic differential equations can be deduced from purely quantum evolution equations for the measured system coupled with a quantum environment, combined with a continuous monitoring of the environment itself quantum ; Castro .
In the non-Markovian case, to find relevant SSEs, describing both non-Markovian quantum evolutions and continuous monitoring, is a tremendous challenge. In contrast to the Markovian framework, no general theory has been developed. Essentially there exist two strategies.
The first strategy consists in considering a physical model described by a non-Markovian master equation and in finding an appropriate pure-state unraveling. This approach has been successfully applied in various situations. A major common point consists in replacing the memoryless white noises, used in Markovian SSEs, by colored Gaussian noises and in introducing some delay effects Stru1 . This allows the introduction of correlations in time that describe strong memory effects of the environment interaction. In this direction several models, such as, for example, the so-called non-Markovian quantum state diffusion, have been derived Stru3 . Other investigations involving non-Markovian jump type SSEs have been also proposed Piilo1 ; Piilo3 . While such approaches are efficient for simulating relevant non-Markovian evolutions, the measurement interpretation of the underlying SSEs is highly debated Gam4 ; Diosi3 ; Piilo3 and a complete conclusion is still lacking. A principal problem concerns the interpretation of the underlying noises as outputs of continuous-time measurements. For other models such as Lindblad rate equations, different types of jump unravelings have been proposed MoPe . Jump-diffusion generalizations with measurement applications have been derived in Ref. BarPel . In this context limitations also appear in the sense that the types of observables that can be measured must have particular and restrictive forms.
A second strategy is first to generalize directly the Markovian SSE by introducing memory effects. Then, one has to show whether this SSE provides the unraveling of some non-Markovian evolution and whether it has a physical measurement interpretation. To work at the Hilbert space level guarantees automatically the complete positivity of the evolution of the statistical operator. In this paper we propose non-Markovian SSE models with physical measurement interpretations. Our strategy consists in adapting the Markovian approach by replacing white and Poisson noises with non-Markovian noises and by allowing for random coefficients in the equation. First we start with a linear SSE driven by colored noises and involving random operator coefficients; the whole randomness is defined under a reference probability. Next we introduce the physical probability and the nonlinear SSE, which is a stochastic differential equation under the new probability. Finally it is possible to pass to the linear and nonlinear versions of the stochastic master equation (SME), and we show that they determine the dynamics and the continuous measurements without violating the axiomatic structure of quantum mechanics. The general mathematical structure was introduced in BarH95 ; BDPP11 ; in BPP10 we started to show how such a structure allows for the introduction of some colored noises, while in BarG11 we considered memory effects due to feedback with delay. The present article is devoted to the exploration and clarification of physical effects that can be treated within such a theory and to show them on a concrete physical system.
The paper is structured as follows. Section II describes the general theory of the stochastic Schrödinger equation. We present the general mathematical ingredients necessary to develop the generalization of the SSE involving colored noises and random coefficients. We consider first a linear stochastic equation for a non normalized wave function . Then, with the help of a change of measure, determined by the SSE itself, we derive the nonlinear SSE for the wave function . In Sec. III the linear SME and the nonlinear one are introduced and the measurement interpretation is justified by introducing positive operator-valued measures, instruments, a priori states (mean states), and a posteriori states (conditional states). Section IV is devoted to a concrete model, a noisy oscillator absorbing and emitting light, by which physical effects can be discussed and the possibilities of the theory can be explored. Moreover, in this section we study the behavior of the outputs of the oscillator; in particular we study the effects of the non-Markovian terms in the dynamics on the homodyne and the heterodyne spectra of the emitted light and on the statistics of the photons, now analyzed by direct detection. Conclusions are presented in Sec. V.
II The stochastic Schrödinger equation
When introducing non-Markovian evolutions for a quantum state, the first problem is to guarantee the complete positivity of the evolution of the state (statistical operator) of the reduced system. Then, if one wants to introduce measurements in continuous time, the second problem is to have equations compatible with quantum measurement theory. Starting from the linear version of the SSE allows memory to be introduced by using random coefficients and colored noises (no problem of complete positivity because we are working at Hilbert space level) and the instruments related to the continuous monitoring to be constructed (no problem with the axioms of quantum theory because we are respecting linearity) BarH95 ; BPP10 ; BDPP11 .
Let us denote by $\mathcal{H}$ the Hilbert space of the quantum system of interest, a separable complex Hilbert space, by $\mathcal{B}(\mathcal{H})$ the space of the bounded operators on $\mathcal{H}$, by $\mathcal{T}(\mathcal{H})$ the trace class and by $\mathcal{S}(\mathcal{H})$ the convex set of the statistical operators.
II.1 The linear SSE and the reference probability
The starting point of the whole construction is the linear SSE for a stochastic process $\phi(t)$ with values in $\mathcal{H}$:
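A plausible generic structure for Eq. (1), consistent with the drift, diffusion and jump terms described below (the operator symbols $K(t)$, $R_j(t)$, $J_k(t)$ are assumptions, not necessarily the authors' notation), is
$$ \mathrm{d}\phi(t) = K(t)\,\phi(t_-)\,\mathrm{d}t + \sum_j R_j(t)\,\phi(t_-)\,\mathrm{d}W_j(t) + \sum_k \big( J_k(t) - \mathbb{1} \big)\,\phi(t_-)\,\mathrm{d}N_k(t). \qquad (1) $$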
In the Markovian case the $W_j$ are Wiener processes, the $N_k$ are Poisson processes, and all these processes are independent. Moreover, the operators $K$, $R_j$, and $J_k$ are not random. The key property that allows for a measurement interpretation is that $\|\phi(t)\|^2$ is a mean-1 martingale, and this requirement imposes a link among the operators $K$, $R_j$, and $J_k$. The non-Markovian generalization is to take more general processes as driving noises and to allow for random coefficients. Now we illustrate the precise meaning of the various quantities appearing in the SSE (1).
First of all we work in a reference probability space $(\Omega, \mathcal{F}, \mathbb{Q})$; $\Omega$ is the sample space, $\mathcal{F}$ the $\sigma$-algebra of events, and $\mathbb{Q}$ a reference probability. The physical probability will appear when the measuring interpretation is constructed in Sec. II.2. Past and present up to time $t$ are represented by the events in the sub-$\sigma$-algebra $\mathcal{F}_t \subseteq \mathcal{F}$; the family $\{\mathcal{F}_t\}_{t \geq 0}$ is a filtration of $\sigma$-algebras satisfying the usual hypotheses, i.e., $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s \leq t$, $A \subseteq B \in \mathcal{F}$ with $\mathbb{Q}(B) = 0$ implies $A \in \mathcal{F}_0$, and $\mathcal{F}_t = \bigcap_{s > t} \mathcal{F}_s$. In $(\Omega, \mathcal{F}, \mathbb{Q})$ we have continuous, independent, adapted, standard Wiener processes $W_j(t)$ and adapted càdlàg counting processes $N_k(t)$ with stochastic intensities $\lambda_k(t)$, which are càglàd. The French acronym càdlàg means with trajectories continuous from the right and with limits from the left, while càglàd means continuous from the left and with limits from the right. The meaning of stochastic intensity is given by the heuristic conditional expectation $\mathbb{E}_{\mathbb{Q}}[\mathrm{d}N_k(t) \mid \mathcal{F}_t] = \lambda_k(t)\,\mathrm{d}t$;
the stochastic intensities determine the probability law of the counting processes DalVJ03 . The usual hypotheses and the assumption that the processes are càdlàg or càglàd, etc., are mathematical regularity requirements useful in a rigorous development of stochastic calculus; what is physically important is to have non-anticipating (= adapted) processes.
The continuous processes are given by
where and are complex, adapted càglàd processes such that , with probability 1, and . Some typical choices are given in Sec. IV.
The functions , and , are strongly càglàd, bounded operator-valued adapted processes; to be bounded is a sufficient condition to have a well-defined general equation BarH95 . For physical problems also the unbounded case is important and, indeed, the examples we shall give involve unbounded operators; the case involving unbounded operators, but restricted to a Markovian dynamics, is treated in Hol1 ; Castro . By allowing for random system operators and the general noises (3), it is possible to describe random external forces, random environments, colored baths, stochastic control, adaptive measurements and so on.
The SSE (1) is a linear stochastic differential equation in the Itô sense. The initial condition is taken to be
Note that, by suitably choosing the (possibly random) initial condition, any statistical operator can be represented as the mean of $|\phi(0)\rangle\langle\phi(0)|$. The solution is taken to be càdlàg and it is unique BarH95 . To write $\phi(t_-)$ means to take the value of $\phi$ just before the possible jump at time $t$ due to the counting processes.
By using the explicit expressions for the processes , the linear SSE can be rewritten as
where and
For the physical interpretation in terms of measurements, we need $\|\phi(t)\|^2$ to be a martingale: $\mathbb{E}_{\mathbb{Q}}\big[\|\phi(t)\|^2 \mid \mathcal{F}_s\big] = \|\phi(s)\|^2$ for $s \leq t$ BarG09 . We shall see that this is crucial for defining physical probabilities. To this end we have to compute $\mathrm{d}\|\phi(t)\|^2$. Here and in all the formulas involving stochastic differentials, we have to use Itô’s formula and the rules of stochastic calculus, which are summarized by Itô’s table
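For independent Wiener processes and counting processes with the intensities introduced above, the standard Itô multiplication rules, which are presumably the content of the table, read
$$ (\mathrm{d}t)^2 = 0,\quad \mathrm{d}W_i\,\mathrm{d}t = \mathrm{d}N_k\,\mathrm{d}t = 0,\quad \mathrm{d}W_i\,\mathrm{d}W_j = \delta_{ij}\,\mathrm{d}t,\quad \mathrm{d}N_k\,\mathrm{d}N_l = \delta_{kl}\,\mathrm{d}N_k,\quad \mathrm{d}W_i\,\mathrm{d}N_k = 0. $$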
Then, we get
The martingale property is ensured if we have . By (2) and , we get the restriction
with .
II.2 The nonlinear SSE and the physical probability
Let us define the quantity
and the normalized version of ,
where is a non random vector with and we denote by the generic sample point in , as usual. Moreover, we introduce the processes
By condition (8), Eq. (7) becomes
As already said, the key property of quantum trajectory theory is that $\|\phi(t)\|^2$ is a mean-1 martingale, which follows from this equation and the normalization (4) of the initial condition (BarH95, Theorem 2.4, Sec. 3.1).
The physical probability.
Now we introduce the new probability measures, whose physical meaning will be discussed in Sec. III:
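In theories of this type the physical probabilities are built from the martingale density; a standard definition, consistent with the consistency property stated next (notation assumed), is
$$ \mathbb{P}_t(\mathrm{d}\omega) = \|\phi(t,\omega)\|^2\,\mathbb{Q}(\mathrm{d}\omega) \quad \text{on } \mathcal{F}_t, \qquad t \geq 0. $$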
Owing to the martingale property of the probability density $\|\phi(t)\|^2$, the probabilities are consistent, in the sense that for $0 \leq s \leq t$, $\mathbb{P}_t\big|_{\mathcal{F}_s} = \mathbb{P}_s$.
The new probability modifies the distribution of the processes $W_j$ and $N_k$. A very important property is that a Girsanov-type theorem holds (BarH95, Proposition 2.5, Remarks 2.6 and 3.5).
Girsanov transformation.
Under , in the time interval , the processes
are independent Wiener processes, while the counting processes change their stochastic intensities, which become . The quantities and are defined in Eqs. (11) and (12).
Note that, if for a certain index we have , , then : the process remains a Wiener process also after the change of probability and it is independent from all the other components of . For instance, from Eq. (11) we have for all initial conditions when the operator is self-adjoint for all .
The nonlinear SSE.
Under , in the time interval , the random normalized vector (10) satisfies the stochastic differential equation
with , and
To get this result one needs to compute from Eq. (13) and to express this differential and in terms of the new Wiener processes; the rigorous proof is given in Ref. BarH95 .
At least in the Markov case, it is this equation that is the starting point for powerful numerical methods BreP02 ; MCwfm .
III The stochastic master equation
Now that we have presented the theory of the stochastic Schrödinger equation for pure states, we develop the analog for density matrices and we introduce the stochastic master equation.
III.1 The linear SME
As in the case of the SSE, we start with a linear equation. More precisely, from Eqs. (5) and (8) we can derive the linear SME for the process $\sigma(t) = |\phi(t)\rangle\langle\phi(t)|$, $t \geq 0$:
where is the following Liouville operator:
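A Liouville operator of Lindblad type with random, adapted coefficients has the generic shape (the symbols $H(t)$ and $L_j(t)$ are assumptions standing for the model's random Hamiltonian and dissipation operators):
$$ \mathcal{L}(t)[\rho] = -i\,[H(t),\rho] + \sum_j \left( L_j(t)\,\rho\,L_j(t)^\dagger - \frac{1}{2}\left\{ L_j(t)^\dagger L_j(t),\,\rho \right\} \right). $$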
Let us stress that this operator is random. In particular, this makes the solution non-Markovian since the randomness of the operator introduces a dependence on the past. This fact will be made explicit in the concrete model developed in Sec. IV.
Let us note that the usual master equations (without the driving noises and ), but with stochastic Liouville operators, have already been considered in the literature as models of non-Markovian evolutions. Moreover, these equations have been derived from unitary system-environment dynamics by various techniques and approximations; see, for instance, stochL .
III.2 The nonlinear SME
Note that the probability density (9) of $\mathbb{P}_t$ with respect to $\mathbb{Q}$ can be written as $\operatorname{Tr}\{\sigma(t)\}$. Then, we normalize $\sigma(t)$ by defining the state $\rho(t) = \sigma(t)/\operatorname{Tr}\{\sigma(t)\}$; when the denominator vanishes we take for $\rho(t)$ an arbitrary state. It is then possible to show that $\rho(t)$ satisfies the nonlinear SME under the new probability (BarH95, Remark 3.6):
Everything can be expressed in terms of density matrices as we can write
The nonlinear SME for $\rho(t)$ can also be directly obtained from (16) by remarking that
III.3 The a priori states and the mean evolution
The mean state, or a priori state, is defined by
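A standard definition, consistent with the normalization above (with $\eta(t)$ an assumed symbol for the mean state), is
$$ \eta(t) = \mathbb{E}_{\mathbb{P}_t}\big[\rho(t)\big] = \mathbb{E}_{\mathbb{Q}}\big[\sigma(t)\big]. $$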
By Eqs. (18) and (20) one obtains
A major difference with the usual Markovian situation is that in our case this equation is not closed. In the Markovian case one obtains an equation of the form $\dot{\eta}(t) = \mathcal{L}(t)[\eta(t)]$, but in our situation this is not possible, since the operator $\mathcal{L}(t)$ is random and contributes to the mean. Formally, a closed equation can be obtained by using projection techniques such as the Nakajima-Zwanzig method. This construction has been derived in BDPP11 , but the final equation is essentially not tractable.
It is then clear that the mean evolution is highly non-Markovian. It is important to notice that our approach ensures that this evolution stays completely positive. We then obtain a completely positive non-Markovian behavior, the memory effect being encoded into the random Liouville operator $\mathcal{L}(t)$. In particular, when $\mathcal{L}(t)$ is not random, we recover the usual Markovian framework.
III.4 Measurement interpretation
In this section, we present the essential ingredients needed in order to describe the measurement interpretation of our theory.
III.4.1 Observed outputs
Let us consider adapted and càglàd detector-response kernels and adapted càdlàg noise processes. We can then define the following processes, which represent the outputs of the continuous measurement process:
The idea underlying the construction of these processes is that the instantaneous outputs are the formal derivatives and . The measuring apparatuses have a smoothing effect on the singular instantaneous outputs and can also provide some post-measurement processing of the outputs. These effects are represented by the integrals with the detector response functions and . Moreover, it is possible that the detectors introduce some further noise, for instance of electronic origin, and this is taken into account by the additive noises and and by the fact that response functions can be random.
Let us consider now all the events that can be observed up to time $t$, that is, the events determined by the outputs up to $t$. Let us denote by $\mathcal{E}_t$ the collection of such events. In mathematical terms $\mathcal{E}_t$ is the $\sigma$-algebra generated by the observed outputs up to time $t$. Because all the processes involved in the definition of the outputs are adapted, we get $\mathcal{E}_t \subseteq \mathcal{F}_t$ for all $t$. Let us stress that in general we do not have $\mathcal{E}_t = \mathcal{F}_t$, because $\mathcal{E}_t$ contains only events that can be observed by the measuring apparatuses, while $\mathcal{F}_t$ can contain extra sources of noise, which can affect the system (a noisy environment for instance).
III.4.2 Feedback
In this formalism we can describe also measurement-based feedback: parts of the outputs are used to control some features of the dynamics or of the measuring apparatus, say through a stimulating laser or through a local oscillator in a homo- or heterodyne detector. When the feedback involves the output in the past, other memory effects are introduced. A typical measurement-based feedback is represented by a Hamiltonian term functionally dependent on some output up to the current time; while in this way it becomes a random Hamiltonian, its contribution is perfectly compatible with the whole formalism. We shall not give examples in this paper; the theory and some applications can be found in (BarH95, Sec. 4.4) and BarG11 .
III.4.3 Instruments and a posteriori state
A cornerstone of a consistent measurement interpretation of SME relies on the introduction of the so-called instruments. In order to develop this theory we need to define the propagator of Eq. (18), that is, the random linear map . An essential point is that this application is completely positive and satisfies the composition rule , .
Now, for an event $E \in \mathcal{E}_t$, we define
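The standard definition in this formalism (in the assumed notation, with $\sigma(t,\omega)$ the non-normalized conditional state introduced above) is
$$ \mathcal{I}_t(E)[\rho_0] = \int_E \sigma(t,\omega)\,\mathbb{Q}(\mathrm{d}\omega), \qquad E \in \mathcal{E}_t. $$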
For all $E$, $\mathcal{I}_t(E)$ is a completely positive linear map called an instrument. In particular this gives the probability that an event occurs. More precisely, if $\rho_0$ represents the pre-measurement state, the probability of $E$ is given by
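the standard instrument formula (again in assumed notation):
$$ \mathbb{P}_{\rho_0}(E) = \operatorname{Tr}\big\{ \mathcal{I}_t(E)[\rho_0] \big\}, \qquad E \in \mathcal{E}_t, $$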
and we recover the previous definition of the physical probability.
Then, we can define the a posteriori state by
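A standard choice, consistent with the normalization used for the nonlinear SME (assumed notation), is
$$ \rho(t,\omega) = \frac{\sigma(t,\omega)}{\operatorname{Tr}\{\sigma(t,\omega)\}}. $$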
The state corresponds to the update of the state of the system conditionally on the observation of the outputs up to time .
It is important to notice that in general we cannot derive a closed equation for the a posteriori state such as the one for $\rho(t)$. Essentially, it depends on whether or not $\mathcal{E}_t = \mathcal{F}_t$. When the two $\sigma$-algebras differ, the randomness of the operators appearing in the equation will prevent it from being closed; again some projection technique could be used to obtain a kind of closed equation, but it would be intractable for practical purposes.
In conclusion, we can see that this approach allows us to describe non-Markovian evolutions that are generalizations of the Markovian setup. As we shall see, the randomness of the operators and of the noises will be used to describe concrete non-Markovian effects such as colored environments and incoherent stimulating light.
The physical model is determined by the physical probability, the nonlinear SME, and the outputs, not by the SSE, which is not unique. Two SSEs giving solutions that differ only by a stochastic phase are physically equivalent; no physical consequence depends on a global phase in $\phi(t)$ or $\psi(t)$ (BarG09, Sec. 2.5).
IV A model: a noisy oscillator
Let us present now a mathematically treatable but sufficiently rich and physically interesting model; the aim is to understand what kind of physical phenomena and memory effects can be described by the theory we have presented. To be simple we take a linear system, but we allow for absorption, emission, colored noises acting on the system and on the detection apparatuses, and so on. The general scheme is the following:
1. The quantum system is a single oscillator; to fix the ideas we think of a mode in an optical cavity, but it could be an ion in a trap or some other system in the harmonic approximation. Let $a$ and $a^\dagger$ be the usual annihilation and creation operators of quanta in the mode; then, the free Hamiltonian of the oscillator is
2. The system emits and absorbs light; the system-electromagnetic-field interaction is treated in the usual Markov approximation.
(a) Some emitted light reaches a photocounter: direct detection. The post-processing of the output is taken into account by a detector response function.
(b) Some light reaches a homo- or heterodyne detector. The function describing the local oscillator can be random, a way to model imperfections. We shall show that this fact introduces memory in the detection process, not in the mean dynamics. Moreover, we can have also a detector response function acting as a frequency filter; see Eq. (91).
(c) We introduce a stimulating laser; the laser-wave-oscillator interaction is treated in the usual dipole and rotating-wave approximations. The laser wave can be random because the laser is noisy and/or because of feedback. This introduces memory also into the Liouvillian and in the mean dynamics, in spite of the fact that the interaction is without memory.
3. We introduce various kinds of colored environments. According to the choices of the parameters these new terms can describe incoherent light, a squeezed reservoir, a usual (or colored) thermal bath, intermediate situations, and so on.
As in the general part, also in this model the driving noises are independent standard Wiener processes under the reference probability $\mathbb{Q}$. Moreover, we shall introduce diffusive channels and a single jump channel. Finally, we shall introduce a single diffusive output and a single counting output, in the notation of Sec. III.4.1.
IV.1 Stimulating laser and emitted light detection
As already said we consider the oscillator-electromagnetic-field interaction in the dipole and rotating-wave approximations. We divide the directions of the propagating light into some “channels”. The index 1 labels the “side” channels used to describe the emitted light reaching a photo-counter (direct detection) or a homo- or heterodyne detector. Channels 2 and 3 are the “forward” channels in the direction of the stimulating laser; they describe also losses of light.
IV.1.1 Detection
Direct detection.
We consider only one counter. Under the reference probability $\mathbb{Q}$, the associated counting process is taken to be a Poisson process of constant intensity; when this intensity vanishes, the channel is not open. The associated operator is
With respect to the general case of Sec. III.4.1, let us consider only deterministic, time invariant, real and continuous detector response functions, so that we have , , , and the output current is
Homodyne or heterodyne detection.
As usual, homo- or heterodyne detection is described in the Markov approximation by a diffusive channel driven by a Wiener process (under the reference probability $\mathbb{Q}$) (BarG09, Sec. 7.2). By particularizing the quantities introduced in Sec. II.1, we have and , which means and . Then we take
is the contribution of the local oscillator, which can be random. Randomness in the local oscillator can be due to imperfections, but it could be due also to the fact that it is taken dependent on some of the observed outputs at previous times in order to describe adaptive measurements, as is done in (WisM10, Sec. 7.9.2).
We consider again a deterministic, time invariant, real and continuous response function ; in terms of the notation of Sec. III.4.1 we take , , and . Then, the output current of the homo- or heterodyne detector is
We assume the response function to be in $L^1$, so that its Fourier transform exists:
We shall see in Sec. IV.5 that has the role of a linear frequency filter on the output.
Contribution to the linear SSE.
Summarizing, the contributions to the right hand side of the linear SSE (1) or (5) of the two detection channels are
The final linear SSE is given by Eq. (59).
IV.1.2 The forward channels
Channels 2 and 3 represent the forward channel (the direction of the stimulating laser) and the lost light; we can include in these channels other Markovian dissipative contributions. There is no detector associated with these channels, and we choose to put a diffusive component (the Wiener ) in channel 2, while channel 3 is used to complete the Hamiltonian part with the contribution of the stimulating laser. With respect to the symbols used in the linear SSE (1) and in Sec. II.1 we take
with , and , , , which give , ,
Contribution to the linear SSE.
Summarizing, the contributions to the right-hand side of the linear SSE (5) of channels 2 and 3 are
where contains the interaction between the stimulating external laser and the oscillator:
Stimulating laser.
The function represents the laser wave, possibly a laser with imperfections BarG09 . In the case of closed loop control, the laser wave could depend on the observed output BarG11 , but here we disregard the possibility of feedback. Then a good model for a not perfectly coherent stimulating laser is the phase diffusion model KM77 . Let be the carrier frequency of the laser light (in this case is called the detuning) and let be its bandwidth; then
The quantity contains the amplitude and the initial phase of the laser; in principle it could be a random variable, but for simplicity here we take it to be deterministic.
To identify the bandwidth of the laser light , we consider its spectrum. Since is a complex stochastic process, its spectrum is given by the classical definition Howard
By using the autocorrelation function (105) of the process we easily get the Lorentzian spectrum
Homodyne detection.
In this case the local oscillator and the stimulating light are generated by the same laser. A choice, that takes into account the differences in the optical paths, is
we are assuming . The phase and the time shift depend on the physical implementation of the homodyne apparatus and could be random, but, for simplicity, we take both to be deterministic, and .
Heterodyne detection.
The local oscillator and the stimulating wave are produced by different laser sources and the phase difference is not stable; the carrier frequencies are generally different. In this case could depend on the output (another form of closed loop control) or could be described by a phase diffusion model (noise in the local oscillator). In this second case we can take
, , .
IV.1.3 Summary of the contributions to the linear SME
We have already explicitly given the various contributions to the SSE. To understand better the meaning of these terms it is worthwhile to write down how they contribute to the linear SME (18) and to the random Liouville operator (19). Let us consider the free Hamiltonian of the oscillator and all the other terms we have introduced up to now; let us set
and as in Sec. III.1. Then, we have
(the ellipsis stands for further contributions that we shall introduce in Sec. IV.2),
From these equations it is apparent that the electromagnetic interaction has been treated in the usual Markov approximation; we see also that the parameter appearing there is the mode width. The only possible sources of memory are the stimulating laser light and the local oscillator. So, up to now the memory is due only to the imperfections inducing randomness in the lasers involved. Remember that we have not included a conceptually very important source of memory, the possibility of feedback.
IV.2 A colored environment
Our aim here is to introduce some sources of colored noise; they could describe physically different scenarios, which we shall discuss at the end of the section.
Let us introduce a complex Gaussian process given by
Here and we set
moreover, we assume the complex functions to be integrable, i.e.
Now, we add two more diffusive channels; with the notations of Sec. II, we take and
that means ,
This gives and, for ,
The contribution of these new terms to the linear SSE (1) turns out to be
IV.2.1 The spectrum of the Gaussian noise
The dynamics of our system involves the differential of the process or, in other terms, its generalized derivative . Like the spectrum of (36), the spectrum of this classical complex process is defined by
when the limit exists.
By construction, is a Gaussian process with zero mean; its second moments, needed in (52), can be easily computed by using the properties of the stochastic integrals (the Itô isometry).
Let us introduce the Laplace transform of
which exists owing to the integrability condition (46).
The spectrum (52) is computed in Appendix A.2 and it is given by
Note that the spectrum is the sum of the spectra of the components without any interference among them. Each spectral component contains a white-noise contribution and a regular one, which interfere (they sum up inside the square modulus). Moreover, let us stress that by this construction it is possible to insert Gaussian noises with given spectra, not only in this model, but even in the general theory.
IV.2.2 Contribution to the linear SME
As was done for the electromagnetic contributions in Eq. (41), it is useful to identify the contributions to the linear SME due to the new noises:
(the ellipsis stands for the contributions already introduced), where
Let us note that the contributions of the classical processes to the dynamics (55) are very reminiscent of the contribution of classical processes in the “adjoint equation” in (GarZ04, Sec. 3.5). This shows that these contributions, or at least some of them, can come from the interaction of the system with a quantum reservoir and could be derived by using the techniques of the quantum Langevin equation and the adjoint equation (GarZ04, Secs. 3.1 and 3.5).
Now we can identify the physical meaning of various possible contributions.
Incoherent light.
Consider the index and assume . Then, and this term contributes only with a regular random Hamiltonian term . Its structure is very similar to that of , but the two random processes involved are qualitatively different. The process is the exponential of a Gaussian process and represents a quasi-monochromatic wave (laser light). The process is Gaussian and could represent, for instance, incoherent light with an arbitrary spectrum, such as thermal light with a black-body spectrum (Carm02, Eqs. (1.52) and (7.148))
Another possible choice for incoherent light is an Ornstein-Uhlenbeck process, that means taking
with , , . In this case, from (53) we get
and the contribution to the spectrum (54) is the Lorentzian term
As already seen, also the phase diffusion model (35) of the laser gives a Lorentzian spectrum (37), but, in spite of this, the two cases are completely different. The wave (35) is quasi-coherent, while the Ornstein-Uhlenbeck process represents a Gaussian incoherent wave.
Squeezed reservoir.
Consider now the indices and assume ; then, we get and the contributions of these terms are Markovian. Indeed, by defining
Orthogonal polynomials from Hermitian matrices
Odake, Satoru; Sasaki, Ryu
© 2008 American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics. The article appeared in Journal of Mathematical Physics, 49(5):053503 (2008), https://doi.org/10.1063/1.2898695
A unified theory of orthogonal polynomials of a discrete variable is presented through the eigenvalue problem of Hermitian matrices of finite or infinite dimensions. It can be considered as a matrix version of exactly solvable Schrödinger equations. The Hermitian matrices (factorizable Hamiltonians) are real symmetric tridiagonal (Jacobi) matrices corresponding to second order difference equations. By solving the eigenvalue problem in two different ways, the duality relation of the eigenpolynomials and their dual polynomials is explicitly established. Through the techniques of exact Heisenberg operator solution and shape invariance, various quantities (the two types of eigenvalues, i.e. the eigenvalues and the sinusoidal coordinates; the coefficients of the three term recurrence; the normalization measures and the normalization constants; etc.) are determined explicitly.
Stan Zurek, Electron, Encyclopedia Magnetica
Electron (e) - a fundamental sub-atomic particle which has the intrinsic property of a negative elementary electric charge.1)
An electron is a part of every atom, with the number of electrons corresponding to the number of protons (atomic number), so that their electrical charges balance out and an atom can be electrically neutral.2)
Electric charge:3) -1.602 176 634 × 10⁻¹⁹ C
Mass:4) 9.109 383 7015 × 10⁻³¹ kg
Magnetic moment:5) -9.284 764 7043 × 10⁻²⁴ J/T = -1.001 159 652 181 28 μB
Spin: ½
Antiparticle: positron
The electron's mass is only around 1/1836 of the proton's, even though the two carry equal but opposite electric charges.6) For this reason, electrons contribute less than 0.1% of the mass of atoms.
Prof. Frank Wilczek:7)
So, what is an electron? An electron is a particle and a wave; it is ideally simple and unimaginably complex; it is precisely understood and utterly mysterious; it is rigid and subject to creative disassembly. No single answer does justice to reality.
Electric and magnetic properties of electrons, as well as their electromagnetic interactions dictate many properties of matter, obviously electrical, electronic and magnetic, but also chemical properties.
The small size of electrons allows obtaining much finer resolution of an electron microscope than it is possible for an optical microscope.
The name “electron” was proposed by G.J. Stoney in 1894, and the electron was discovered by J.J. Thomson in 1897. The electron's charge (and from it the mass) was measured by R.A. Millikan and H. Fletcher in 1909.8)
Microscopic properties
Microscopic properties of an electron have been extensively studied since its discovery. However, because of its very small size there are no experimental techniques which allow direct “probing” or visualising in the same sense as it is possible to observe some small structures under an optical microscope.9)
Many properties of electrons, such as electric charge or spin, are detectable or measurable. It is possible to describe the rules that govern them, but it is not possible to explain the reason for their existence; they are therefore assumed to be “fundamental” particles, with fundamental properties.
Electron size
Karim (2020)10) ~1 × 10⁻³⁶ m
Mac Gregor (1992)11) < 1 × 10⁻¹⁸ m
Dhobi et al. (2020)12) 1 × 10⁻¹⁵ m
Coey (2010)13) 3 × 10⁻¹⁵ m
Mac Gregor (1992)14) 5 × 10⁻¹³ m
Wilczek (2013)15) 2 × 10⁻¹² m
Dhobi et al. (2020)16) 2 × 10⁻¹² m
Size of a particle has a clear meaning in classical physics. However, at very small scales the quantum effects begin to play a significant role, and it is difficult to define the meaning of “size” of the assumed spherical object. It is not straightforward to agree even on the methodology which should be used for its definition, calculation or measurement.17)18) Measurements at decreasing scales require larger energies, which can produce additional particles and thus confuse the outcome of the measurement.
Wilczek (2013):19)
Attempts to pin down an electron's position more accurately than this require, according to the uncertainty principle, injecting the electron with so much energy that extra electrons and anti-electrons are produced, confusing the identity of the original electron.
Depending on the approach there can be several radius definitions for the electron:20)
• quantum-mechanical Compton radius
• QED-corrected quantum-mechanical Compton radius
• electric charge radius
• observed QED charge distribution for a bound electron
• magnetic field radius
By using known physical constants and experimental data, the calculations based on these different approaches can give estimates which differ by several orders of magnitude.21)22)23)
Consequently, the question “What is the size of the electron?” remains one of the unanswered questions in physics.24)
Also, the internal structure of electrons is unknown. It is generally accepted that the electron is a point-like particle. However, there are many alternative theories, for example some that propose an electron composed of two massless particles orbiting each other at the speed of light.25) Some recent experiments appear to indicate a more complex structure of electrons.26) The understanding of the internal structure of protons is also evolving with new experimental data.27)
Electric charge
See also the main article: Electric charge.
Schematic representation of electrostatic field of a stationary negative charge, by using electric field lines
Scientists can describe, but still cannot explain, what exactly electric charge is. However, for such a basic property it is sufficient that it exists, has some physical meaning and is measurable within the given system of units.28)
An electron possesses an elementary amount of negative electric charge e. Its value is a physical constant, expressed in the SI system precisely (zero uncertainty) as:29) -1 e = -1.602 176 634 × 10⁻¹⁹ C, and electric charge of other bodies can be expressed as integer multiples of it.30)
Only sub-atomic particles such as quarks are thought to have electric charge in non-integer quantities, e.g. -1/3 e or +2/3 e, but they exist only in configurations which add up to integer values of charge. For example, a proton comprises three quarks (up, up, down), whose charges add up to +1 e. Therefore, in any macroscopic application the charge is always quantised by the elementary amount of 1 e.31)32)
An electron in isolation is an electric monopole - it is a source of electric field. By convention, it is assumed that imaginary electric field lines begin at positive charges and terminate at negative charges.33)
Like charges repel, opposite charges attract, causing mechanical forces which act on the charged bodies. This can be referred to as the electrostatic force, because it exists always, even if the charges remain stationary. This is different from magnetic or electromagnetic forces, which arise when the electric charges are in motion.
A neutral body can become polarised in electric field, by means of electrostatic induction, without the need for the charges to exchange between the bodies.
Opposite charges attract, like charges repel, neutral bodies generate no force (grey) but neutral bodies in the presence of other charges can become locally polarised due to electrostatic induction such that some force will occur
In an atom, the nucleus comprises protons and neutrons, held together by the strong force. Electrons are bound to the nucleus by the electrostatic force. The electrons are organised in shells, subshells, and orbitals.
Diagram of electron structure in an atom: shells, subshells and orbitals, with an example of orbital occupancy for iron
Subshells are grouped in shells, with number 1 being the innermost (or letter K, depending on the nomenclature, both naming conventions are in use). The numbering is linked to the energy levels.
In some literature the names “shell” and “subshell” are used interchangeably, or the distinction between shells and subshells is not explicitly made (but there is an implicit assumption that they exist).34)35)
An electron is excited to a higher energy state (higher subshell or shell) by absorbing a photon, and a photon is released when the electron drops to a lower energy level
Electrons can transition to a higher energy level (higher subshell or shell) by absorbing a photon (a quantum of electromagnetic radiation). Conversely, if there is an empty position on a lower energy level, an electron can jump down, by emitting a photon.36)
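The relation between transition energy and photon wavelength can be illustrated with a short calculation (a minimal sketch; the 10.2 eV value, corresponding to the hydrogen 2→1 transition, is used here only as an example):

```python
# Photon wavelength for a given transition energy: lambda = h*c/E.
h = 6.62607015e-34   # Planck constant, J/Hz (exact in SI)
c = 2.99792458e8     # speed of light, m/s (exact in SI)
e = 1.602176634e-19  # elementary charge, C (exact in SI); converts eV to J

def photon_wavelength_nm(energy_eV):
    """Wavelength (in nm) of a photon carrying the given energy (in eV)."""
    return h * c / (energy_eV * e) * 1e9

print(photon_wavelength_nm(10.2))  # ~121.6 nm (ultraviolet, Lyman-alpha)
```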
The lowest energy state (ground state) is when all the electrons are at the lowest possible orbitals. An atom will de-excite itself to the ground state if no energy is supplied to it, by emitting photons.
Heat represents energy, which excites atoms above the ground state. So any atom at a temperature higher than absolute zero (0 K) continuously gets excited and emits photons of different wavelengths, corresponding to the excitation energy and the energy of transitions between the internal energy levels. Low energy corresponds to long wavelengths (infrared), high energy to short ones (visible light, ultraviolet). Therefore, all matter emits radiation as a function of temperature, which is the basis for pyrometry.
Quantum restrictions dictate that there can be no two particles with the same set of quantum numbers in the same region of space (Pauli exclusion principle).
Orbitals overlap and penetrate each other, forming a spherical shape
Therefore, each orbital can contain at most two electrons, because they can have different spin values (-1/2 and +1/2).
The orbitals are organised in subshells denoted with letters: s, p, d, f, etc., such that a given subshell contains a full set of orbitals, as dictated by the given set of quantum numbers. Higher-order subshells can hold more electrons, such that: s = 2, p = 6, d = 10, f = 14, and so on.
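These capacities follow from the number of orbitals in a subshell, 2l+1 for azimuthal quantum number l, with two electrons per orbital; a minimal sketch:

```python
# Electron capacity of a subshell: (2l + 1) orbitals, 2 electrons per orbital.
for l, name in enumerate("spdf"):  # l = 0, 1, 2, 3
    orbitals = 2 * l + 1
    capacity = 2 * orbitals
    print(f"{name}: {orbitals} orbitals, {capacity} electrons")
# Output: s: 1 orbitals, 2 electrons ... f: 7 orbitals, 14 electrons
```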
The orbitals within a given subshell overlap and penetrate each other so that their probabilities add up to a spherical shape.
The binding energy is the strongest for the innermost subshells and shells, and it is said that these shells are filled with electrons first.
The higher subshells (and shells) are not filled in a linear order, because there are numerous interactions which take place: electrostatic repulsion, interaction of spins, spin-orbit coupling, and so on. The interactions are very complex, and it is not possible to solve the Schrödinger equation analytically for a general case. Numerical methods are employed instead.37)
For example, the 4s subshell is filled before the 3d subshell, as dictated by the energy conditions (see also Hund's rules). One of the rules is that in a given subshell each orbital is filled first with a single electron, and only when all orbitals have at least one electron is the second electron added. The opposing spins in such pairs precisely compensate each other, so they do not contribute to the magnetic moment of the whole atom. Only the unpaired electrons are significant magnetically.38)
Quantum mechanics is complex and the various quantum phenomena are usually introduced with illustration of analogies involving some classical physics, for the ease of understanding. The sequence of analogies often follows the way the understanding of the inside of the atom was developed over the years.
In a simple Bohr atom model, the negatively charged electrons are point-like particles which orbit the positively charged nucleus, in a similar way as planets orbit around the Sun. However, a charge on a circular orbit undergoes continuous acceleration and would therefore radiate electromagnetic energy; such an orbiting electron would very quickly lose all its energy and collapse onto the nucleus. This is one of the reasons why the name orbital was introduced (to distinguish it from orbit).
Therefore, all such simplified illustrations should be used only as an aid for explanation and do not represent what actually happens inside an atom.
The exact mechanics of how electrons move around the nucleus remains unknown. From experiments and calculations it is now understood that electron presence is spread over a volume of space called orbital. There is no well-defined movement involved, it is only said that at a given point in space there is certain probability of finding an electron.39)
Atom of helium: blue - spherical orbitals of electrons (size around 100 pm), red - protons, grey - neutrons (size of nucleus is around 1 fm) 40)41)
The probability distribution, for electrons in an atom, can be calculated from the Schrödinger equation $HΨ = EΨ$42), which for example for spherical coordinates can take quite a complicated form:43)
$$ \left[ -\frac{{\hbar}^2}{2m_e} \left( \frac{∂^2}{∂ r^2} + \frac{2}{r}\frac{∂}{∂r} - \frac{1}{{\hbar}^2 r^2} \boldsymbol{{\hat{l}}^2} \right) - \frac{Z e^2}{4π ε_0 r} \right] \psi_i = ε_i \psi_i $$
(where the symbols are defined as in eq. (4.4) in Coey (2010)44); this equation is shown here only as an example). Computer software can be used for calculation and visualisation of the results.45)
The low-order orbitals are spherical, but higher quantum numbers produce increasingly complex three-dimensional shapes. If the data is plotted as calculated, such probability distributions produce fuzzy images, which are difficult to visualise and interpret. The probability does not stop at a specific distance; the function extends to infinity, with the probability decreasing in a non-linear way.46)
For this reason, a number of simplifications are used in order to increase the clarity of images. For example, planes, cones or spheres can be used to indicate locations of “nodes” (places where the function reduces to zero). A cross-section view can be employed as well.
However, the simplest method appears to be to use a “hard shape” with a specific limit. For example, the volume where the probability is greater than 90% is plotted with full opacity, and everything else is shown as completely transparent. Also, there can be additional scaling factors which make it easier to indicate intricate details of a given shape.47)48)
The red and blue colours in the images denote the positive and negative phase of the function. Any other colours can be used to represent the same information.
An example of a 4d orbital. The probability distribution is smeared over space so a 2D image is fuzzy and difficult to interpret. Cones represent “nodes” with zero probability, “solid” shapes are used for better visualisation of the shapes of orbitals, but their appearance depends on scaling factors even though all represent the same input data.49)50)
Complexity of orbitals increases for higher order orbitals (just some typical examples are shown here for illustration)51)52)
Demonstration of standing waves on a vibrating circular plate, with radial nodes (circles) at lower frequencies, and angular modes (with “spokes”) at higher frequencies. CNX_Chem_06_01_Frequency by P. Flowers, W.R. Robinson, R. Langley, K. Theopold, CC-BY-4.0
The complex shape of orbitals can be explained with an analogy to vibrations of a body, with higher harmonics forming more complex shapes. An example is shown with a plate which can be made vibrating at different frequencies.
At lower frequency of vibration only concentric rings are present, representing standing waves, with clearly visible “nodes” (no displacement). But the higher the frequency the more complex shapes are created, including “spokes” forming at equally spaced angles.53)
Similar vibration patterns can be expected for a spherical body, but with the standing waves extending over three-dimensional space.
Orbital magnetic moment
Magnetic moment of an electric current flowing in a loop can be expressed as the product of the amplitude of the current and the area of the loop.
In an atom, an electron orbiting around the nucleus represents a moving electric charge which is equivalent to an electric current, but it must be remembered that, by convention, the direction of electric current (blue arrow of $I$ in the image) is opposite to the direction in which the electron moves (green arrow of $v$). For this reason the vector of the magnetic dipole moment of an electron points in the opposite direction to the angular momentum.54)
The orbit would represent a circle with some area. Therefore, from the classical physics viewpoint there would be a magnetic moment associated with the orbital motion of an electron, as in a current loop without resistance55) (inside the atom, the electron moves effectively in a vacuum, so it can move freely).
The analogy of orbital moment is an electron orbiting the nucleus on a circular orbit (left) and for spin the sphere spins around its own axis (right)
The orbital magnetic moment for the so-called first Bohr orbit can be calculated as: $μ_{orb} = \frac{e·h}{4·π·m}$ = 9.274 × 10⁻²⁴ A·m² ≡ J/T (where: e - electron charge, h - Planck constant, m - electron mass), which is exactly equal to the Bohr magneton μB.56)
Such orbital movement would have a “mechanical” angular momentum associated with it. A linear or angular momentum is a conserved quantity, and a force (or torque) must act on the system for it to change. Momentum is a product of mass and (rotational) speed, and it is related to the inertia of the body.57)
The angular momentum due to orbital movement of an electron is $\boldsymbol{l} = m · \boldsymbol{r} × \boldsymbol{v}$, where: m - electron mass, r - orbit radius, v - velocity.
There is a fixed proportionality between the electron's charge e, orbital magnetic moment $μ_{orb}$ and angular momentum $\boldsymbol{l}$, such that: $μ_{orb} = - \frac{e}{2·m} · \boldsymbol{l}$. Because only physical constants are involved in the above equation, it can be written that $μ_{orb} = -γ_{orb}·\boldsymbol{l}$, where $γ_{orb} = e/(2·m)$ is the orbital gyromagnetic ratio, equal to about 8.794 × 10¹⁰ Hz/T; the frequently quoted electron gyromagnetic ratio of 1.761 × 10¹¹ Hz/T refers to spin and includes the g-factor.58) (Gyromagnetic ratio is a quantity different from g-factor, which is unitless. For orbital motion the g-factor is exactly 1, but for spin it is greater, approximately 2.59)60))
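These numbers can be cross-checked directly from the defining constants (a minimal sketch; the values are the 2018 CODATA/SI ones, with the g-factor rounded):

```python
# Bohr magneton and gyromagnetic ratios from fundamental constants.
from math import pi

e = 1.602176634e-19     # elementary charge, C
h = 6.62607015e-34      # Planck constant, J/Hz
m_e = 9.1093837015e-31  # electron mass, kg
g_e = 2.00231930436     # electron spin g-factor (dimensionless)

mu_B = e * h / (4 * pi * m_e)     # Bohr magneton, ~9.274e-24 J/T
gamma_orb = e / (2 * m_e)         # orbital gyromagnetic ratio (g = 1), ~8.794e10 Hz/T
gamma_spin = g_e * e / (2 * m_e)  # spin gyromagnetic ratio (g ~ 2), ~1.761e11 Hz/T

print(mu_B, gamma_orb, gamma_spin)
```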
The orbital angular momentum is quantised, in units of $\boldsymbol{l}$ (for orientation) or units of $\hbar$ (for value of component along the acting magnetic field).61)
However, electrons do not follow circular orbits; they occupy orbitals, which can take quite complex 3D shapes, especially for higher orders, as described above.
In chemical compounds there is electrostatic interaction between the ions in molecules, and the contribution of orbital momentum is much smaller. This phenomenon is called quenching of the orbital angular momentum.62) Magnetic properties of matter are dictated mostly by the contribution of the spin moments.
Spin magnetic moment
Rotating sphere as an analogy of electron spin63)
An electron possesses a fundamental property called spin, and an angular momentum as well as magnetic moment associated with it. Both of these values are physical constants.64)
Spin is a quantum property and does not have a direct equivalent in classical physics. However, because of the difficulty of explaining the concept, an analogy is typically used, in which an electron is portrayed as a sphere spinning around its own axis. Such spinning movement would also have a “mechanical” angular momentum associated with it.
Spin magnetic moment is also explained conceptually by the analogy of a spinning sphere. If electric charge is distributed on the surface of the sphere, then as the sphere spins the surface charges rotate with it. This is equivalent to charge moving in a circular pattern, i.e. to an electric current in a loop, and therefore there would also be a magnetic dipole moment associated with such a structure.65)66) However, this analogy should not be used for any quantitative calculations, because the size and the charge distribution of the electron are unknown.67)
The electron's angular momentum is $L = h/(4·π)$ = 5.27 × 10⁻³⁵ J/Hz, where h is the Planck constant.68) The Planck constant is often used in its “reduced” version, represented by the “h-bar” such that $\hbar = h/(2·π)$.69)
Therefore, the angular momentum can be written as $L = \hbar/2$.70) This “divide by 2” factor denotes that the electron spin quantum number $m_s$ is only allowed to take two values, -1/2 and +1/2 (it is quantised), typically referred to as the spin pointing “down” or “up”, respectively.71)
The spin magnetic moment $μ_{spin}$ is directly related to the angular momentum $m_s$ and Bohr magneton $μ_B$ such that $μ_{spin} = -g_e · μ_B · m_s $, where $g_e$ = 2.002319 (unitless constant). Therefore, $μ_{spin} \approx μ_B$.72)
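The tabulated magnetic moment quoted at the top of this article follows from these two constants (a minimal numerical check):

```python
# Spin magnetic moment: mu_spin = -g_e * mu_B * m_s, with m_s = 1/2.
mu_B = 9.2740100783e-24  # Bohr magneton, J/T (CODATA 2018)
g_e = 2.00231930436      # electron g-factor
m_s = 0.5                # spin quantum number

mu_spin = -g_e * mu_B * m_s
print(mu_spin)           # ~ -9.2848e-24 J/T
print(mu_spin / mu_B)    # ~ -1.0011597, i.e. about -1.0012 Bohr magnetons
```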
Calculation of particle momentum (and therefore spin magnetic moment) involves mass in the denominator. Therefore, the contribution of the magnetic moments of protons and neutrons is mostly negligible, because their mass is more than 3 orders of magnitude larger than that of the electron.73)
Chemical properties
Atoms can form multi-atom molecules of chemical compounds by forming bonds. All chemical bonds are electromagnetic in nature, and they arise because of the activity of the electrons on the outermost shells.74)
Atomic subshells have a preference to be fully occupied, and an atom with fully occupied outermost subshell is inert chemically (He, Ne, Ar, etc.) On the other hand, if an atom has just a single electron in the outermost shell then it is very reactive chemically (H, Li, Na, etc.) Some atoms are reactive enough that in the absence of other types of atoms they can form bonds between themselves. For example, in common air, both oxygen and nitrogen occur predominantly in diatomic configuration: O2 and N2.75)76)
Depending on the exact energetic conditions, the bonds can be broadly classified as covalent or ionic.77)
Antimatter is a type of matter which has similar properties to normal matter. However, some of its properties are exactly opposite, for instance electric charge. Should a matter particle and its antimatter equivalent come into contact, they will annihilate completely, producing a burst of electromagnetic radiation.
An antimatter equivalent of the electron is called the anti-electron or positron. It has the same mass and size, but positive electric charge, and therefore also its magnetic moment relative to the spin direction is reversed.
Positrons are generated in some radioactive processes. For example, unstable isotopes with shortage of neutrons decay with beta decay (β+), by emitting a positron, such that a proton becomes a neutron and the atomic number changes.78)
This positron-electron annihilation process is used for example in positron emission tomography (PET) for medical diagnostic purposes. A suitable radioactive chemical (e.g. fluorine-18, half-life of just 2 h) which undergoes the β+ decay is introduced into the human body, where it can be disproportionately absorbed in some abnormal tissue. The emitted positron immediately encounters a normal electron (in the surrounding tissue) and they annihilate, producing two high-energy gamma photons travelling along trajectories at 180° to each other. These two photons are detected, and by precise timing it is possible to compute their origin, thereby allowing non-invasive 3D imaging.79)
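The energy of each annihilation photon follows from the electron rest energy (a minimal sketch):

```python
# Energy of each gamma photon from electron-positron annihilation: E = m_e*c^2.
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # J per eV

E_keV = m_e * c**2 / e / 1e3
print(E_keV)  # ~511 keV; PET scanners look for coincident 511 keV photon pairs
```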
Macroscopic phenomena
Electrons are involved in microscopic (atom-level) phenomena which control much of the macroscopic behaviour of materials, such as their electric and magnetic properties.
Periodic table of elements, with magnetic properties80) (at very low temperatures, and also at high pressure, many elements become superconducting and hence strongly diamagnetic)
Electricity and electric current
Electricity is related to the presence (electrostatics) and movement (electric current) of electric charges. The properties of electricity have been harnessed for generation, storage, transmission, and utilisation of energy, on a local and global scale.
Electric energy is based on the flow of electric charges, which is equivalent to electric current. In ordinary metal conductors the electrons are free to move, and even a small electric field applied across a conductor can result in a significant current flow.81)
In liquids and gases the electric field can separate electrons from atoms, thus forming positively charged ions (cations), which can also move. Such positive charges move in the same direction as the assumed convention of electric current (from plus to minus). However, metals in liquid form remain relatively good conductors, for example mercury, which is liquid at room temperature. In such metals the electrons are the main carriers of electric current.
Depending on the mobility of electrons and the associated resistivity, materials can be broadly classified into four groups significant from an engineering viewpoint: insulators, semiconductors, conductors and superconductors.82)
Material type | Typical resistivity range (Ω·m)83)
Insulators | 10⁹ - 10²⁴ (and higher)
Semiconductors | 10⁻⁶ - 10⁶
Conductors | 10⁻² - 10⁻⁸ (and lower)
Superconductors | zero
There are many materials and substances which can have resistivity values in-between these ranges, for various reasons (temperature, moisture content, etc.). In general, higher temperature adds energy to the system and increases the quantity or mobility of electrons in insulators and semiconductors.
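As a rough illustration (ours, not the source's), the table can be turned into a simple classifier; the boundaries below are simplifications, since the real ranges overlap and depend on conditions:

```python
# Broad classification of a material by its room-temperature resistivity.
def classify_material(resistivity_ohm_m: float) -> str:
    if resistivity_ohm_m == 0.0:
        return "superconductor"
    if resistivity_ohm_m <= 1e-2:
        return "conductor"
    if resistivity_ohm_m <= 1e6:
        return "semiconductor"
    return "insulator"

print(classify_material(1.7e-8))  # copper          -> conductor
print(classify_material(6.4e2))   # pure silicon    -> semiconductor
print(classify_material(1e15))    # typical plastic -> insulator
```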
Resistivity of materials at room temperature spans more than 30 orders of magnitude (superconductors have zero resistivity and cannot be represented on a logarithmic scale, but they would lie to the left of conductors)
Electric wire engineered to have a copper conductor inside, and an insulator outside
Electric insulators are all materials in which the electrons remain bound to the atoms, so there are few free electrons to sustain a current flow. Such materials are used to insulate a given electrical conductor, so that the current flows in the intended path and does not leak away in an uncontrolled way.
However, there are no perfect insulators, because some free electrons are always present and will move under the applied electric field. Also, when the voltage across an insulator is increased to very large values, electrons can be ripped away from the atoms (ionisation) and can gain enough energy from the electric field to cause an avalanche effect. This creates a low-resistance path and a violent discharge through the material (electric breakdown). Some of the ionised electrons return to the atoms, releasing photons, which is the reason why an electric arc emits visible light.84)
Electrical insulators typically degrade over time (their resistivity decreases through increasing mobility of electrons), especially if they remain energised. The flow of electricity through an insulator is very small, but it can be measured by very sensitive devices. For example, the state of electrical insulation can be verified by using devices such as an insulation resistance tester. A voltage is applied to the insulator, typically with a value equal to or greater than the nominal operating voltage of the system. The small resulting current is measured and the resistance is calculated. Industrial testers can measure resistance from MΩ to tens of TΩ85), and laboratory ones even higher86).
Higher energy of the system frees up more electrons, so resistivity decreases with increasing temperature. Higher temperature also reduces the insulating properties in terms of lifetime: around room temperature, an increase of 10°C reduces the insulation resistance (and the useful life of the insulation) roughly by half.87)
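Both rules of thumb (resistance from the measured leakage current, and the halving per 10°C) can be sketched as follows; this is a minimal illustration with assumed function names and test values, not a description of any particular instrument:

```python
# R = V / I from an insulation test, plus a rough temperature correction
# that halves the resistance for every 10 C above the reference temperature.
def insulation_resistance(test_voltage_v: float, leakage_current_a: float) -> float:
    return test_voltage_v / leakage_current_a

def corrected_resistance(r_ohm: float, temp_c: float, ref_temp_c: float = 20.0) -> float:
    return r_ohm * 0.5 ** ((temp_c - ref_temp_c) / 10.0)

r = insulation_resistance(5000.0, 2e-9)            # 5 kV test, 2 nA leakage
print(f"{r:.2e} ohm")                              # 2.5e12 ohm = 2.5 T-ohm
print(f"{corrected_resistance(r, 40.0):.2e} ohm")  # ~a quarter at +20 C hotter
```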
Once a breakdown of solid insulation happens, irreversible damage typically occurs, for example by creating a carbonised path, which then serves as a low-resistance path for the electrons.
Perfect vacuum itself does not conduct electricity, so in a theoretical sense it has infinite resistivity. In practice, however, vacuum must be contained in some matter, and the usual limitations apply: a sufficiently high electric field can extract electrons from atoms, and once free they will travel unimpeded through the vacuum as a space current. The insulating property of vacuum is used, for example, in vacuum relays.
Electrons travelling in vacuum were utilised extensively in the cathode-ray tube (CRT) displays of TV sets and oscilloscopes popular in the 20th century, as well as in other vacuum tubes. Electrons were emitted from a heated cathode and accelerated by the electric field due to a high voltage (typically between 10 kV and 35 kV) towards the screen, which was covered with a luminescent layer.
The position of the beam of electrons hitting the screen was controlled by deflection coils, whose magnetic field rapidly modified the trajectory of the electrons due to the Lorentz force. The luminescent layer was required to convert the (invisible) energy of the electrons into the spectrum of light visible to the human eye.
Modern electronics relies on semiconductors (image: Robivy64, public domain)
In semiconducting materials the electrons require less energy to leave their atoms, and their mobility can be controlled by various means, such as increasing the temperature, doping with other elements, applying an electric field, and many more.
Technical semiconductors are sophisticated materials whose performance is fine-tuned to a specific application. For example, pure silicon is not conductive enough to be useful in its raw form. In order to obtain the required performance it is doped with other atoms, such as phosphorus (donating one extra electron, n-type) or boron (creating a shortage of one electron, called a hole, p-type). The electric current can therefore flow as a result of the excess electrons, or of electrons jumping from hole to hole, which is equivalent to the hole moving in the opposite direction, as if a positive charge were moving instead.88)
The difference in mobility and behaviour of these electrons or electron holes is the basis for the widely useful electronics technology. The word electronics comes directly from electron.
In semiconductors the mobility of electrons typically increases with increasing temperature, and the resistivity reduces accordingly. However, there are additional effects resulting from the mobility of electrons and holes, and the interaction between them, such that highly non-linear effects can become important. For example, the combination of p-type and n-type semiconductors, the p-n junction, is the basis for a diode, which conducts current in one direction and blocks it in the other. Changes in the temperature of such a junction merely change the leakage current (in the blocking or reverse direction) or the voltage drop across the junction (in the forward or conducting direction).
The changing mobility of electrons with temperature is used in variable-resistance devices such as thermistors. Typically they exhibit an exponential relationship: a negative temperature coefficient (NTC) means that the resistance decreases with temperature, whereas for a positive one (PTC) it increases.89)
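A common way to model the exponential NTC behaviour is the beta equation; this is a hedged sketch with illustrative datasheet-style values (R25 = 10 kΩ, β = 3950 K), which the original text does not specify:

```python
# NTC beta model: R(T) = R25 * exp(beta * (1/T - 1/T25)), T in kelvin.
import math

def ntc_resistance(temp_c: float, r25_ohm: float = 10_000.0,
                   beta_k: float = 3950.0) -> float:
    t_k, t25_k = temp_c + 273.15, 25.0 + 273.15
    return r25_ohm * math.exp(beta_k * (1.0 / t_k - 1.0 / t25_k))

for t in (0.0, 25.0, 50.0, 100.0):
    print(f"{t:5.1f} C -> {ntc_resistance(t):10.1f} ohm")  # falls with temperature
```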
Magnetic semiconductors
Physicists work on combining magnetic and semiconducting effects, in which both the movement of electrons (electric current) and their magnetic spins are utilised. These phenomena give rise to new classes of materials with names such as magnetic semiconductors, diluted magnetic semiconductors and magnetic insulators. They are the subject of the branch of science called spintronics90) (by analogy to electronics).
In metals, which are good conductors, the atoms are packed so close to each other that the electrons can freely interact with other atoms. In effect, an internal electron gas91) is formed, and the application of an electric field results in a flow of electrons forming an electric current of significant magnitude even for a small applied voltage.
Inside an unenergised conductor the electrons move freely and randomly, with very large speeds of around 10⁵ m/s. Applying a voltage across a conductor makes the electrons drift towards the positive electrode, but the average drift speed is very low (e.g. 0.03 mm/s). However, due to the very large number of free electrons per volume (of the order of 10²³ per cm³), the resulting current is still relatively high, and because all electrons drift at the same average speed the current flows throughout the whole wire. The movement of electrons is driven by the electric field at the surface of the wire.92)
At increased temperature the atoms vibrate more vigorously and the movement of electrons is scattered more, so the resistivity of metals generally increases with increasing temperature.
The random motion of the electrons also generates electrical noise (Johnson's noise, shot noise, etc.), which is related to the temperature (thermal noise).
Faraday cage
An isolated object can be charged electrostatically by depositing electric charges on its surface.
The surface of insulators can be charged by rubbing other insulating materials against them, which builds up the electrostatic charges due to the triboelectric effect.
In conductors it is sufficient to touch the surface with another charged body and the charges will equalise between the two systems.93) But these surface charges (electrons) repel each other and will tend to occupy the farthest possible distance from one another. On a hollow conductor all charges will remain only on the outer surface, leaving the inside of it with zero electric field. The same applies even if the construction is made from a mesh rather than a solid surface. Such a conducting "cage" is known as a Faraday cage, and it is used widely for shielding of electric fields.
Niobium-titanium superconducting cable94) (image copyright © CERN)
The resistivity of metals decreases with lowering temperature, because the atoms vibrate less, and therefore there is less scattering of the electron movement.
H.K. Onnes carried out experiments on mercury in 1911 and discovered that its resistivity decreased at lower temperatures, but then vanished below 4.2 K: the mercury became superconducting.95) Several other materials were subsequently found to be superconducting at very low temperatures; interestingly, these materials are not the best conductors at room temperature (e.g. lead). All such superconductors operate at very low temperatures, so appropriate cooling is required (liquid nitrogen or liquid helium). In 2020 it was reported that superconductivity was attained at +15°C, but at extremely high pressure, not useful for practical applications.96)
In superconductors the electrons can move without electrical resistance, and without the energy loss associated with it. Therefore, in a closed superconducting loop, once a DC current is induced it can flow indefinitely (there is no energy loss), producing a DC magnetic field around itself. This behaviour is used in superconducting electromagnets, which can operate in persistent mode.97)
For type I superconductors (with a sharp transition at the critical field), the theoretical explanation is that the electrons form pairs (Cooper pairs) whose movement is coherently mediated by the lattice vibrations. The understanding of electron mobility in type II and high-temperature superconductors is still incomplete.98)
Magnetic field
Any moving charge creates a magnetic field around itself (a velocity field). The individual fields from each electron in a current-carrying wire overlap and create a macroscopic magnetic field around such a wire.
The wires can be wound in coils or windings to shape or direct the global field in the desired manner so that operation of many devices is possible: generator, transformer, motor, sensor, antenna, etc.
Magnetic field around a moving electron (because of the sign convention, the electron moves in the direction opposite to the electric current)99)
Electric current I generates magnetic field strength H, whose vector is always perpendicular to the direction of I, according to the right-hand rule
Magnetic field lines of a solenoid (cross-section view)
All materials respond to magnetic field to some extent, including vacuum (which is the reference point for the magnetic constant)100), but some with stronger interaction than others. The response is dictated by the configuration of the electron spin moments in the atoms.
There are three main types of magnetic responses, or types of magnetism: 101)
• Diamagnetism - all electrons are paired in all orbitals, so there is no net spin moment. Application of a magnetic field to such materials introduces changes to the shape of the orbitals, similar to a current induced in a loop, in the direction opposing the applied field. Thus diamagnets have permeability lower than that of vacuum and are repelled from a magnetic field. However, this effect is so small that in everyday applications they are simply classified as non-magnetic materials.
• Paramagnetism - some atoms have at least one unpaired electron, whose spin can respond to the applied field. The more the spins can be oriented with the field, the larger the permeability. Paramagnets are attracted to a magnetic field, but the effect is also very weak (non-magnetic).
• Ordered magnetism - the atoms have unpaired electrons and they are positioned such that they can interact with each other, which leads to spontaneous magnetisation, high permeability and strong magnetic forces (magnetic materials). Depending on the type of ordering there can be, for example, ferromagnetism, antiferromagnetism or ferrimagnetism.
All magnetic materials (exhibiting ordered magnetism) become paramagnetic at sufficiently high temperatures (above Curie temperature), because the thermal agitation of atoms can overcome magnetic ordering of electron spins. Conversely, paramagnets increase their permeability with lowering temperature, such that some become ferromagnetic.102)
Electron microscopy
Scanning electron micrograph of a single N. meningitidis cell, with resolution far exceeding the 200 nm available from optical microscopes (image by Arthur Charles-Orszag, CC-BY-SA-4.0)
The resolution of ordinary microscopes is limited by the wavelengths of visible light, so objects smaller than around 200 nm cannot be resolved.103)
However, using techniques such as scanning electron microscopy (SEM), the resolution can be improved by up to three orders of magnitude, so that features around 0.2 nm in size can be resolved.104)
In SEM, a beam of electrons is generated from a heated cathode and accelerated with high voltage towards the sample. Magnetic lenses are used to focus and direct the electron beam (in some sense similar to CRT displays).
The high-energy electrons impact the sample and cause secondary electrons to be scattered; these can be detected and translated into useful information, such as images.
However, some additional conditions have to be met: for instance, the sample must be conductive or be prepared by applying a conductive coating (e.g. gold or palladium), and the observation is carried out in vacuum.105)
|
a77ca6f1979da392 |
Evolution Equations & Control Theory
June 2018, Volume 7, Issue 2
On state-dependent sweeping process in Banach spaces
Dalila Azzam-Laouir and Fatiha Selamnia
2018, 7(2): 183-196. doi: 10.3934/eect.2018009
In this paper we prove, in a separable reflexive uniformly smooth Banach space, the existence of solutions of a perturbed first order differential inclusion governed by the proximal normal cone to a moving set depending on the time and on the state. The perturbation is assumed to be separately upper semicontinuous.
The recovery of a parabolic equation from measurements at a single point
Amin Boumenir, Vu Kim Tuan and Nguyen Hoang
2018, 7(2): 197-216. doi: 10.3934/eect.2018010
By measuring the temperature at an arbitrary single point located inside an unknown object or on its boundary, we show how we can uniquely reconstruct all the coefficients appearing in a general parabolic equation which models its cooling. We also reconstruct the shape of the object. The proof hinges on the fact that we can detect infinitely many eigenfunctions whose Wronskian does not vanish. This allows us to evaluate these coefficients by solving a simple linear algebraic system. The geometry of the domain and its boundary are found by reconstructing the first eigenfunction.
Michele Colturato
2018, 7(2): 217-245. doi: 10.3934/eect.2018011
We consider a singular phase field system located in a smooth bounded domain. In the entropy balance equation appears a logarithmic nonlinearity. The second equation of the system, deduced from a balance law for the microscopic forces that are responsible for the phase transition process, is perturbed by an additional term involving a possibly nonlocal maximal monotone operator and arising from a class of sliding mode control problems. We prove existence and uniqueness of the solution for this resulting highly nonlinear system. Moreover, under further assumptions, the longtime behavior of the solution is investigated.
Robust Stackelberg controllability for linear and semilinear heat equations
Víctor Hernández-Santamaría and Luz de Teresa
2018, 7(2): 247-273. doi: 10.3934/eect.2018012
In this paper, we present a Stackelberg strategy to control a semilinear parabolic equation. We use the concept of hierarchic control to combine the concepts of controllability with robustness. We have a control named the leader which is responsible for a controllability to trajectories objective. Additionally, we have a control named the follower, that solves a robust control problem. That means we solve for the optimal control in the presence of the worst disturbance case. In this way, the follower control is insensitive to a broad class of external disturbances.
On the lifespan of strong solutions to the periodic derivative nonlinear Schrödinger equation
Kazumasa Fujiwara and Tohru Ozawa
2018, 7(2): 275-280. doi: 10.3934/eect.2018013
An explicit lifespan estimate is presented for the derivative Schrödinger equations with periodic boundary condition.
On the viscoelastic equation with Balakrishnan-Taylor damping and acoustic boundary conditions
Tae Gab Ha
2018, 7(2): 281-291. doi: 10.3934/eect.2018014
In this paper, we consider the viscoelastic equation with Balakrishnan-Taylor damping and acoustic boundary conditions. This work is devoted to prove, under suitable conditions on the initial data, the global existence and uniform decay rate of the solutions when the relaxation function is not necessarily of exponential or polynomial type.
Asymptotic behavior of a hierarchical size-structured population model
Dongxue Yan and Xianlong Fu
2018, 7(2): 293-316. doi: 10.3934/eect.2018015
We study in this paper a hierarchical size-structured population dynamics model with environment feedback and delayed birth process. We are concerned with the asymptotic behavior, particularly the effects of hierarchical structure and time lag on the long-time dynamics of the considered system. We formally linearize the system around a steady state and study the linearized system by the $C_0$-semigroup framework and spectral analysis methods. Then we use the analytical results to establish the linearized stability, instability and asynchronous exponential growth conclusions under some conditions. Finally, some examples are presented and simulated to illustrate the obtained results.
Optimal nonlinearity control of Schrödinger equation
Kai Wang, Dun Zhao and Binhua Feng
2018, 7(2): 317-334. doi: 10.3934/eect.2018016
We study the optimal nonlinearity control problem for the nonlinear Schrödinger equation $iu_t = -\Delta u + V(x)u + h(t)|u|^{\alpha}u$, which originates from the Feshbach resonance management in Bose-Einstein condensates and the nonlinearity management in nonlinear optics. Based on the global well-posedness of the equation for $0 < \alpha < \frac{4}{N}$, we show the existence of the optimal control. The Fréchet differentiability of the objective functional is proved, and the first order optimality system for $N \leq 3$ is presented.
|
715d31deebc77238 | Hamiltonian (quantum mechanics)
In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. Its spectrum, the system's energy spectrum or its set of energy eigenvalues, is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.
The Hamiltonian is named after William Rowan Hamilton, who developed a revolutionary reformulation of Newtonian mechanics, known as Hamiltonian mechanics, which was historically important to the development of quantum physics. Similar to vector notation, it is typically denoted by $\hat{H}$, where the hat indicates that it is an operator. It can also be written as $H$ or $\check{H}$.
The Hamiltonian of a system represents the total energy of the system; that is, the sum of the kinetic and potential energies of all particles associated with the system. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as single or several particles in the system, interaction between particles, the kind of potential energy, and whether the potential is time-varying or time-independent.
Schrödinger Hamiltonian
One particle
By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form
$\hat{H} = \hat{T} + \hat{V}$,
where
$\hat{V} = V = V(\mathbf{r}, t)$
is the potential energy operator and
$\hat{T} = \frac{\hat{\mathbf{p}}\cdot\hat{\mathbf{p}}}{2m} = -\frac{\hbar^2}{2m}\nabla^2$
is the kinetic energy operator, in which $m$ is the mass of the particle, the dot denotes the dot product of vectors, and
$\hat{\mathbf{p}} = -i\hbar\nabla$
is the momentum operator, where $\nabla$ is the del operator. The dot product of $\nabla$ with itself is the Laplacian $\nabla^2$. In three dimensions using Cartesian coordinates the Laplace operator is
$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$.
Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes. Combining these yields the familiar form used in the Schrödinger equation:
$\hat{H} = \hat{T} + \hat{V} = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}, t)$,
which allows one to apply the Hamiltonian to systems described by a wave function $\Psi(\mathbf{r}, t)$. This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics.
One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields.
Many particles
The formalism can be extended to $N$ particles:
$\hat{H} = \sum_{n=1}^{N} \hat{T}_n + \hat{V}$,
where
$\hat{V} = V(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t)$
is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration), and
$\hat{T}_n = \frac{\hat{\mathbf{p}}_n \cdot \hat{\mathbf{p}}_n}{2m_n}$
is the kinetic energy operator of particle $n$, $\nabla_n$ is the gradient for particle $n$, and $\nabla_n^2$ is the Laplacian for particle $n$:
$\nabla_n^2 = \frac{\partial^2}{\partial x_n^2} + \frac{\partial^2}{\partial y_n^2} + \frac{\partial^2}{\partial z_n^2}$.
Combining these yields the Schrödinger Hamiltonian for the $N$-particle case:
$\hat{H} = -\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\nabla_n^2 + V(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t)$.
For interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle.
For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energies for each particle,[1] that is
$V = \sum_{n=1}^{N} V(\mathbf{r}_n, t)$.
The general form of the Hamiltonian in this case is:
$\hat{H} = -\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\nabla_n^2 + \sum_{n=1}^{N} V(\mathbf{r}_n, t) = \sum_{n=1}^{N} \hat{H}_n$,
where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation--in practice the particles are almost always influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below.
Schrödinger equation
The Hamiltonian generates the time evolution of quantum states. If $|\psi(t)\rangle$ is the state of the system at time $t$, then
$\hat{H}|\psi(t)\rangle = i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle$.
This equation is the Schrödinger equation. It takes the same form as the Hamilton-Jacobi equation, which is one of the reasons $\hat{H}$ is also called the Hamiltonian. Given the state at some initial time ($t = 0$), we can solve it to obtain the state at any subsequent time. In particular, if $\hat{H}$ is independent of time, then
$|\psi(t)\rangle = \exp\left(-\frac{i\hat{H}t}{\hbar}\right)|\psi(0)\rangle$.
The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in $\hat{H}$. One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient.
By the *-homomorphism property of the functional calculus, the operator
$U(t) = \exp\left(-\frac{i\hat{H}t}{\hbar}\right)$
is a unitary operator. It is the time evolution operator or propagator of a closed quantum system. If the Hamiltonian is time-independent, the operators $\{U(t)\}$ form a one-parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.
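As a hedged numerical illustration of this propagator (ours, not part of the original article), one can exponentiate a small Hermitian matrix and check unitarity; the Hamiltonian below is an arbitrary two-level example with $\hbar = 1$:

```python
# Build U(t) = exp(-i*H*t/hbar) for a made-up 2x2 Hermitian Hamiltonian
# and verify that it is unitary and preserves the norm of a state.
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])  # Hermitian (real symmetric), arbitrary values
t = 0.7

U = expm(-1j * H * t / hbar)                   # time-evolution operator
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: U is unitary

psi0 = np.array([1.0, 0.0])                    # initial state |psi(0)>
psi_t = U @ psi0                               # |psi(t)> = U(t)|psi(0)>
print(np.vdot(psi_t, psi_t).real)              # 1.0: norm is preserved
```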
Dirac formalism
However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way:
The eigenkets (eigenvectors) of $\hat{H}$, denoted $|a\rangle$, provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted $\{E_a\}$, solving the equation:
$\hat{H}|a\rangle = E_a|a\rangle$.
Since $\hat{H}$ is a Hermitian operator, the energy is always a real number.
From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.
Expressions for the Hamiltonian
Following are expressions for the Hamiltonian in a number of situations.[2] Typical ways to classify the expressions are the number of particles, the number of dimensions, and the nature of the potential energy function, importantly its space and time dependence. Masses are denoted by $m$, and charges by $q$.
General forms for one particle
Free particle
The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension:
$\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}$,
and in higher dimensions:
$\hat{H} = -\frac{\hbar^2}{2m}\nabla^2$.
Constant-potential well
For a particle in a region of constant potential $V = V_0$ (no dependence on space or time), in one dimension, the Hamiltonian is:
$\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V_0$,
and in three dimensions:
$\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V_0$.
This applies to the elementary "particle in a box" problem, and step potentials.
Simple harmonic oscillator
For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to:
$V = \frac{k}{2}x^2 = \frac{m\omega^2}{2}x^2$,
where the angular frequency $\omega$, effective spring constant $k$, and mass $m$ of the oscillator satisfy:
$\omega^2 = \frac{k}{m}$,
so the Hamiltonian is:
$\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{m\omega^2}{2}x^2$.
For three dimensions, this becomes
$\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + \frac{m\omega^2}{2}r^2$,
where the three-dimensional position vector using Cartesian coordinates is $\mathbf{r} = (x, y, z)$, and its magnitude is
$r = |\mathbf{r}| = \sqrt{x^2 + y^2 + z^2}$.
Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction:
$\hat{H} = -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right) + \frac{m\omega^2}{2}\left(x^2 + y^2 + z^2\right)$.
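As a hedged numerical aside (ours, not from the original article), the one-dimensional oscillator Hamiltonian can be discretised on a grid and diagonalised; with $\hbar = m = \omega = 1$ the lowest eigenvalues should approach $n + 1/2$. The grid size and box length below are arbitrary choices:

```python
# Discretise H = -(1/2) d^2/dx^2 + (1/2) x^2 with a three-point stencil
# and diagonalise; exact eigenvalues in these units are n + 1/2.
import numpy as np

N, L = 2000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

T = (-np.diag(np.ones(N - 1), -1) + 2 * np.eye(N)
     - np.diag(np.ones(N - 1), 1)) / (2 * dx**2)  # kinetic energy matrix
V = np.diag(0.5 * x**2)                           # potential energy matrix

E = np.linalg.eigvalsh(T + V)
print(E[:4])  # ~[0.5, 1.5, 2.5, 3.5]
```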
Rigid rotor
For a rigid rotor, i.e. a system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is:
$\hat{H} = \frac{\hat{J}_x^2}{2I_{xx}} + \frac{\hat{J}_y^2}{2I_{yy}} + \frac{\hat{J}_z^2}{2I_{zz}}$,
where $I_{xx}$, $I_{yy}$, and $I_{zz}$ are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and $\hat{J}_x$, $\hat{J}_y$ and $\hat{J}_z$ are the total angular momentum operators (components), about the $x$, $y$, and $z$ axes respectively.
Electrostatic or Coulomb potential
The Coulomb potential energy for two point charges $q_1$ and $q_2$ (i.e., those that have no spatial extent independently), in three dimensions, is (in SI units, rather than the Gaussian units frequently used in electromagnetism):
$V = \frac{q_1 q_2}{4\pi\varepsilon_0 |\mathbf{r}|}$.
However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself). For $N$ charges, the potential energy of charge $q_j$ due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges):[3]
$V_j = \frac{1}{2}q_j\phi(\mathbf{r}_j) = \frac{q_j}{8\pi\varepsilon_0}\sum_{i\neq j}\frac{q_i}{|\mathbf{r}_i - \mathbf{r}_j|}$,
where $\phi(\mathbf{r}_j)$ is the electrostatic potential of all charges other than $q_j$ at $\mathbf{r}_j$ (the factor of 1/2 avoids double counting each pair). The total potential energy of the system is then the sum over $j$:
$V = \frac{1}{8\pi\varepsilon_0}\sum_{j=1}^{N}\sum_{i\neq j}\frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}$,
so the Hamiltonian is:
$\hat{H} = -\frac{\hbar^2}{2}\sum_{j=1}^{N}\frac{1}{m_j}\nabla_j^2 + \frac{1}{8\pi\varepsilon_0}\sum_{j=1}^{N}\sum_{i\neq j}\frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}$.
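As a hedged numerical sketch (ours, not the article's), the pairwise Coulomb sum can be evaluated directly by counting each pair once; the charges and positions below are illustrative:

```python
# Total electrostatic potential energy of N point charges,
# U = sum over pairs i<j of k * q_i * q_j / |r_i - r_j|.
import itertools
import numpy as np
from scipy.constants import e, epsilon_0, pi

k = 1 / (4 * pi * epsilon_0)

def coulomb_energy(charges, positions):
    U = 0.0
    for i, j in itertools.combinations(range(len(charges)), 2):
        r = np.linalg.norm(np.asarray(positions[i]) - np.asarray(positions[j]))
        U += k * charges[i] * charges[j] / r
    return U

# Two elementary charges 1 nm apart: U ~ 2.3e-19 J (~1.44 eV)
print(coulomb_energy([e, e], [(0.0, 0.0, 0.0), (1e-9, 0.0, 0.0)]))
```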
Electric dipole in an electric field
For an electric dipole moment $\mathbf{d}$ constituting charges of magnitude $q$, in a uniform, electrostatic field (time-independent) $\mathbf{E}$, positioned in one place, the potential is:
$V = -\hat{\mathbf{d}}\cdot\mathbf{E}$;
the dipole moment itself is the operator
$\hat{\mathbf{d}} = q\hat{\mathbf{r}}$.
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
$\hat{H} = -\hat{\mathbf{d}}\cdot\mathbf{E} = -q\hat{\mathbf{r}}\cdot\mathbf{E}$.
Magnetic dipole in a magnetic field
For a magnetic dipole moment $\boldsymbol{\mu}$ in a uniform, magnetostatic field (time-independent) $\mathbf{B}$, positioned in one place, the potential is:
$V = -\boldsymbol{\mu}\cdot\mathbf{B}$.
For a spin-1/2 particle, the corresponding spin magnetic moment is:[4]
$\boldsymbol{\mu}_S = \frac{g_s e}{2m}\mathbf{S}$,
where $g_s$ is the spin gyromagnetic ratio (a.k.a. "spin g-factor"), $e$ is the electron charge, and $\mathbf{S}$ is the spin operator vector, whose components are proportional to the Pauli matrices, hence
$\hat{H} = \frac{g_s e}{2m}\mathbf{S}\cdot\mathbf{B}$.
Charged particle in an electromagnetic field
For a particle with mass $m$ and charge $q$ in an electromagnetic field, described by the scalar potential $\phi$ and vector potential $\mathbf{A}$, there are two parts to the Hamiltonian to substitute for.[1] The canonical momentum operator $\hat{\mathbf{p}}$, which includes a contribution from the $\mathbf{A}$ field and fulfils the canonical commutation relation, must be quantized:
$\hat{\mathbf{p}} = \hat{\boldsymbol{\pi}} + q\mathbf{A}$,
where $\hat{\boldsymbol{\pi}}$ is the kinetic momentum operator. The quantization prescription reads
$\hat{\mathbf{p}} = -i\hbar\nabla$,
so the corresponding kinetic energy operator is
$\hat{T} = \frac{\hat{\boldsymbol{\pi}}\cdot\hat{\boldsymbol{\pi}}}{2m} = \frac{1}{2m}\left(\hat{\mathbf{p}} - q\mathbf{A}\right)^2$,
and the potential energy, which is due to the $\phi$ field, is given by
$\hat{V} = q\phi$.
Casting all of these into the Hamiltonian gives
$\hat{H} = \frac{1}{2m}\left(-i\hbar\nabla - q\mathbf{A}\right)^2 + q\phi$.
Energy eigenket degeneracy, symmetry, and conservation laws
In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the $x$ direction is a different state from one propagating in the $y$ direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate.
It turns out that degeneracy occurs whenever a nontrivial unitary operator $U$ commutes with the Hamiltonian. To see this, suppose that $|a\rangle$ is an energy eigenket. Then $U|a\rangle$ is an energy eigenket with the same eigenvalue, since
$\hat{H}\,U|a\rangle = U\hat{H}|a\rangle = U E_a|a\rangle = E_a\,U|a\rangle$.
Since $U$ is nontrivial, at least one pair of $|a\rangle$ and $U|a\rangle$ must represent distinct states. Therefore, $\hat{H}$ has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.
The existence of a symmetry operator implies the existence of a conserved observable. Let $G$ be the Hermitian generator of $U$:
$U = I - i\varepsilon G + O(\varepsilon^2)$.
It is straightforward to show that if $U$ commutes with $\hat{H}$, then so does $G$:
$[G, \hat{H}] = 0$,
and therefore
$\frac{\partial}{\partial t}\langle\psi(t)|G|\psi(t)\rangle = \frac{1}{i\hbar}\langle\psi(t)|[G,\hat{H}]|\psi(t)\rangle = 0$.
In obtaining this result, we have used the Schrödinger equation, as well as its dual,
$\langle\psi(t)|\hat{H} = -i\hbar\frac{\partial}{\partial t}\langle\psi(t)|$.
Thus, the expected value of the observable $G$ is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.
Hamilton's equations
Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states $\{|n\rangle\}$, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,
$\langle n'|n\rangle = \delta_{nn'}$.
Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time.
The instantaneous state of the system at time $t$, $|\psi(t)\rangle$, can be expanded in terms of these basis states:
$|\psi(t)\rangle = \sum_n a_n(t)|n\rangle$,
where $a_n(t) = \langle n|\psi(t)\rangle$.
The coefficients $a_n(t)$ are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole.
The expectation value of the Hamiltonian of this state, which is also the mean energy, is
$\langle H(t)\rangle = \langle\psi(t)|\hat{H}|\psi(t)\rangle = \sum_{nn'} a_{n'}^* a_n \langle n'|\hat{H}|n\rangle$,
where the last step was obtained by expanding $|\psi(t)\rangle$ in terms of the basis states.
Each $a_n(t)$ actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use $a_n(t)$ and its complex conjugate $a_n^*(t)$. With this choice of independent variables, we can calculate the partial derivative
$\frac{\partial\langle H\rangle}{\partial a_{n'}^*} = \sum_n a_n \langle n'|\hat{H}|n\rangle = \langle n'|\hat{H}|\psi\rangle$.
By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to
$\frac{\partial\langle H\rangle}{\partial a_{n'}^*} = i\hbar\frac{\partial a_{n'}}{\partial t}$.
Similarly, one can show that
$\frac{\partial\langle H\rangle}{\partial a_n} = -i\hbar\frac{\partial a_n^*}{\partial t}$.
If we define "conjugate momentum" variables $\pi_n$ by
$\pi_n(t) = i\hbar\, a_n^*(t)$,
then the above equations become
$\frac{\partial\langle H\rangle}{\partial \pi_n} = \frac{\partial a_n}{\partial t}, \qquad \frac{\partial\langle H\rangle}{\partial a_n} = -\frac{\partial \pi_n}{\partial t}$,
which is precisely the form of Hamilton's equations, with the $a_n$s as the generalized coordinates, the $\pi_n$s as the conjugate momenta, and $\langle H\rangle$ taking the place of the classical Hamiltonian.
2. ^ Atkins, P. W. (1974). Quanta: A Handbook of Concepts. Oxford University Press. ISBN 0-19-855493-1.
3. ^ Grant, I. S.; Phillips, W. R. (2008). Electromagnetism. Manchester Physics Series (2nd ed.). ISBN 978-0-471-92712-9.
4. ^ Bransden, B. H.; Joachain, C. J. (1983). Physics of Atoms and Molecules. Longman. ISBN 0-582-44401-2.
|
5d74197cc439492d | Kalvisolai.Info: May 2012
Sunday, May 20, 2012
Direct Recruitment of Post Graduate Assistant in Govt. Higher Secondary School 2011-12 | List of Eligible Candidates Subjectwise
I. List of Eligible Candidates Subjectwise
To know your eligibility, enter your Application No. (e.g. 0135001)
(for all the candidates who have applied for Examination)
Date of Examination: 27/05/2012, Timing: 10.00 A.M. to 01.00 P.M.
Before generating the Hall Ticket, kindly make sure that both the top and bottom margins of the print area are at most 5 mm, and set the page size to 'A4', so as to generate the Hall Ticket on a single A4 sheet. This can be adjusted using the File -> Page Setup option of the browser.
Saturday, May 19, 2012
Albert Einstein
Albert Einstein (14 March 1879 – 18 April 1955) was a German theoretical physicist who developed the theory of general relativity, effecting a revolution in physics. For this achievement, Einstein is often regarded as the father of modern physics. While best known for his mass–energy equivalence formula E = mc² (which has been dubbed "the world's most famous equation"), he received the 1921 Nobel Prize in Physics "for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect". The latter was pivotal in establishing quantum theory within physics.
Einstein published more than 300 scientific papers along with over 150 non-scientific works. His great intelligence and originality have made the word "Einstein" synonymous with genius.
Early life and education
Einstein at the age of three in 1882
Albert Einstein in 1893 (age 14)
The Einsteins were non-observant Jews. Albert attended a Catholic elementary school from the age of five for three years. Later, at the age of eight, Einstein was transferred to the Luitpold Gymnasium where he received advanced primary and secondary school education until he left Germany seven years later. Although it has been thought that Einstein had early speech difficulties, this is disputed by the Albert Einstein Archives, and he excelled at the first school that he attended.
His father once showed him a pocket compass; Einstein realized that there must be something causing the needle to move, despite the apparent "empty space". As he grew, Einstein built models and mechanical devices for fun and began to show a talent for mathematics. When Einstein was ten years old Max Talmud (later changed to Max Talmey), a poor Jewish medical student from Poland, was introduced to the Einstein family by his brother, and during weekly visits over the next five years he gave the boy popular books on science, mathematical texts and philosophical writings. These included Immanuel Kant's Critique of Pure Reason and Euclid's Elements (which Einstein called the "holy little geometry book").
In late summer 1895, at the age of sixteen, Einstein sat the entrance examinations for the Swiss Federal Polytechnic in Zurich (later the Eidgenössische Polytechnische Schule). He failed to reach the required standard in several subjects, but obtained exceptional grades in physics and mathematics. On the advice of the Principal of the Polytechnic, he attended the Aargau Cantonal School in Aarau, Switzerland, in 1895-96 to complete his secondary schooling. While lodging with the family of Professor Jost Winteler, he fell in love with Winteler's daughter, Marie. (His sister Maja later married the Wintelers' son, Paul.) In January 1896, with his father's approval, he renounced his citizenship in the German Kingdom of Württemberg to avoid military service. In September 1896 he passed the Swiss Matura with mostly good grades (gaining maximum grade 6 in physics and mathematical subjects, on a scale 1-6), and though still only seventeen he enrolled in the four year mathematics and physics teaching diploma program at the Zurich Polytechnic. Marie Winteler moved to Olsberg, Switzerland for a teaching post.
Einstein's future wife, Mileva Marić, also enrolled at the Polytechnic that same year, the only woman among the six students in the mathematics and physics section of the teaching diploma course. Over the next few years, Einstein and Marić's friendship developed into romance, and they read books together on extra-curricular physics in which Einstein was taking an increasing interest. In 1900 Einstein was awarded the Zurich Polytechnic teaching diploma, but Marić failed the examination with a poor grade in the mathematics component, theory of functions. There have been claims that Marić collaborated with Einstein on his celebrated 1905 papers, but historians of physics who have studied the issue find no evidence that she made any substantive contributions.
Marriages and children
Main article: Einstein family
In early 1902, Einstein and Mileva Marić (Милева Марић) had a daughter they named Lieserl in their correspondence, who was born in Novi Sad where Marić's parents lived. Her full name is not known, and her fate is uncertain after 1903.
Einstein and Marić married in January 1903. In May 1904, the couple's first son, Hans Albert Einstein, was born in Bern, Switzerland. Their second son, Eduard, was born in Zurich in July 1910. In 1914, Einstein moved to Berlin, while his wife remained in Zurich with their sons. Marić and Einstein divorced on 14 February 1919, having lived apart for five years.
Einstein married Elsa Löwenthal (née Einstein) on 2 June 1919, after having had a relationship with her since 1912. She was his first cousin maternally and his second cousin paternally. In 1933, they emigrated permanently to the United States. In 1935, Elsa Einstein was diagnosed with heart and kidney problems and died in December 1936.
After graduating, Einstein spent almost two frustrating years searching for a teaching post, but a former classmate's father helped him secure a job in Bern, at the Federal Office for Intellectual Property, the patent office, as an assistant examiner. He evaluated patent applications for electromagnetic devices. In 1903, Einstein's position at the Swiss Patent Office became permanent, although he was passed over for promotion until he "fully mastered machine technology".
Einstein's official 1921 portrait after receiving the Nobel Prize in Physics.
During 1901, the paper "Folgerungen aus den Kapillarität Erscheinungen" ("Conclusions from the Capillarity Phenomena") was published in the prestigious Annalen der Physik. On 30 April 1905, Einstein completed his thesis, with Alfred Kleiner, Professor of Experimental Physics, serving as pro-forma advisor. Einstein was awarded a PhD by the University of Zurich. His dissertation was entitled "A New Determination of Molecular Dimensions". That same year, which has been called Einstein's annus mirabilis (miracle year), he published four groundbreaking papers, on the photoelectric effect, Brownian motion, special relativity, and the equivalence of matter and energy, which were to bring him to the notice of the academic world.
By 1908, he was recognized as a leading scientist, and he was appointed lecturer at the University of Bern. The following year, he quit the patent office and the lectureship to take the position of physics docent at the University of Zurich. He became a full professor at Karl-Ferdinand University in Prague in 1911. In 1914, he returned to Germany after being appointed director of the Kaiser Wilhelm Institute for Physics (1914–1932) and a professor at the Humboldt University of Berlin, with a special clause in his contract that freed him from most teaching obligations. He became a member of the Prussian Academy of Sciences. In 1916, Einstein was appointed president of the German Physical Society (1916–1918).
Travels abroad
I consider this the greatest day of my life. Before, I have always found something to regret in the Jewish soul, and that is the forgetfulness of its own people. Today, I have been made happy by the sight of the Jewish people learning to recognize themselves and to make themselves recognized as a force in the world.
Love of music
Einstein developed an appreciation of music at an early age. His mother played the piano reasonably well and wanted her son to learn the violin, not only to instill in him a love of music but also to help him assimilate within German culture. According to conductor Leon Botstein, Einstein is said to have begun playing when he was five, but didn't enjoy trying to learn it at that age.
When he turned thirteen, however, he discovered the violin sonatas of Mozart. "Einstein fell in love" with Mozart's music, notes Botstein, and learned to play music more willingly. According to Einstein, he taught himself to play by "ever practicing systematically," adding that "Love is a better teacher than a sense of duty." At age seventeen, he was heard by a school examiner in Aarau as he played Beethoven's violin sonatas, the examiner stating afterward that his playing was "remarkable and revealing of 'great insight.'" What struck the examiner, writes Botstein, was that Einstein "displayed a deep love of the music, a quality that was and remains in short supply. Music possessed an unusual meaning for this student."
Botstein notes that music assumed a pivotal and permanent role in Einstein's life from that period on. Although the idea of becoming a professional musician was not on his mind at any time, he did play chamber music with others, and performed for private audiences and friends. Chamber music also became a regular part of his social life while living in Bern, Zurich, and Berlin, where he played with Max Planck and his son, among others. Near the end of his life, while living in Princeton, the young Juilliard Quartet visited him and he joined them playing his violin, although they slowed the tempo to accommodate his lesser abilities. However, notes Botstein, the quartet was "impressed by Einstein's level of coordination and intonation."
In 1933, Einstein decided to emigrate to the United States due to the rise to power of the Nazis under Germany's new chancellor, Adolf Hitler. While visiting American universities in April, 1933, he learned that the new German government had passed a law barring Jews from holding any official positions, including teaching at universities. A month later, the Nazi book burnings occurred, with Einstein's works being among those burnt, and Nazi propaganda minister Joseph Goebbels proclaimed, "Jewish intellectualism is dead." Einstein also learned that his name was on a list of assassination targets, with a "$5,000 bounty on his head." One German magazine included him in a list of enemies of the German regime with the phrase, "not yet hanged".
Einstein was undertaking his third two-month visiting professorship at the California Institute of Technology when Hitler came to power in Germany. On his return to Europe in March 1933 he resided in Belgium for some months, before temporarily moving to England.
He took up a position at the Institute for Advanced Study at Princeton, New Jersey, an affiliation that lasted until his death in 1955. He was one of the four first selected (two of the others being John von Neumann and Kurt Gödel). At the institute, he soon developed a close friendship with Gödel. The two would take long walks together discussing their work. His last assistant was Bruria Kaufman, who later became a renowned physicist. During this period, Einstein tried to develop a unified field theory and to refute the accepted interpretation of quantum physics, both unsuccessfully.
World War II and the Manhattan Project
In 1939, a group of Hungarian scientists that included emigre physicist Leó Szilárd attempted to alert Washington of ongoing Nazi atomic bomb research. The group's warnings were discounted. Einstein and Szilárd, along with other refugees such as Edward Teller and Eugene Wigner, "regarded it as their responsibility to alert Americans to the possibility that German scientists might win the race to build an atomic bomb, and to warn that Hitler would be more than willing to resort to such a weapon." In the summer of 1939, a few months before the beginning of World War II in Europe, Einstein was persuaded to lend his prestige by writing a letter with Szilárd to President Franklin D. Roosevelt to alert him of the possibility. The letter also recommended that the U.S. government pay attention to and become directly involved in uranium research and associated chain reaction research.
For Einstein, "war was a disease . . . [and] he called for resistance to war." But in 1933, after Hitler assumed full power in Germany, "he renounced pacifism altogether . . . In fact, he urged the Western powers to prepare themselves against another German onslaught." In 1954, a year before his death, Einstein said to his old friend, Linus Pauling, "I made one great mistake in my life — when I signed the letter to President Roosevelt recommending that atom bombs be made; but there was some justification — the danger that the Germans would make them..."
Einstein became an American citizen in 1940. Not long after settling into his career at Princeton, he expressed his appreciation of the "meritocracy" in American culture when compared to Europe. According to Isaacson, he recognized the "right of individuals to say and think what they pleased", without social barriers, and as a result, the individual was "encouraged" to be more creative, a trait he valued from his own early education. Einstein writes:
As a member of the National Association for the Advancement of Colored People (NAACP) at Princeton who campaigned for the civil rights of African Americans, Einstein corresponded with civil rights activist W. E. B. Du Bois, and in 1946 Einstein called racism America's "worst disease". He later stated, "Race prejudice has unfortunately become an American tradition which is uncritically handed down from one generation to the next. The only remedies are enlightenment and education".
After the death of Israel's first president, Chaim Weizmann, in November 1952, Prime Minister David Ben-Gurion offered Einstein the position of President of Israel, a mostly ceremonial post. The offer was presented by Israel's ambassador in Washington, Abba Eban, who explained that the offer "embodies the deepest respect which the Jewish people can repose in any of its sons". However, Einstein declined, and wrote in his response that he was "deeply moved", and "at once saddened and ashamed" that he could not accept it:
The New York World-Telegram announces Einstein's death on 18 April 1955.
On 17 April 1955, Albert Einstein experienced internal bleeding caused by the rupture of an abdominal aortic aneurysm, which had previously been reinforced surgically by Dr. Rudolph Nissen in 1948. He took the draft of a speech he was preparing for a television appearance commemorating the State of Israel's seventh anniversary with him to the hospital, but he did not live long enough to complete it. Einstein refused surgery, saying: "I want to go when I want. It is tasteless to prolong life artificially. I have done my share, it is time to go. I will do it elegantly." He died in Princeton Hospital early the next morning at the age of 76, having continued to work until near the end.
During the autopsy, the pathologist of Princeton Hospital, Thomas Stoltz Harvey, removed Einstein's brain for preservation without the permission of his family, in the hope that the neuroscience of the future would be able to discover what made Einstein so intelligent. Einstein's remains were cremated and his ashes were scattered at an undisclosed location.
Throughout his life, Einstein published hundreds of books and articles. In addition to the work he did by himself he also collaborated with other scientists on additional projects including the Bose–Einstein statistics, the Einstein refrigerator and others.
1905 - Annus Mirabilis papers
Main articles: Annus Mirabilis papers, Photoelectric effect, Special theory of relativity, and Mass–energy equivalence
The Annus Mirabilis papers are four articles pertaining to the photoelectric effect (which gave rise to quantum theory), Brownian motion, the special theory of relativity, and E = mc² that Albert Einstein published in the Annalen der Physik scientific journal in 1905. These four works contributed substantially to the foundation of modern physics and changed views on space, time, and matter. The four papers are:
• "On a Heuristic Viewpoint Concerning the Production and Transformation of Light" (photoelectric effect; received 18 March, published 9 June) - Resolved an unsolved puzzle by suggesting that energy is exchanged only in discrete amounts (quanta). This idea was pivotal to the early development of quantum theory.
• "On the Electrodynamics of Moving Bodies" (special relativity; received 30 June, published 26 September) - Reconciled Maxwell's equations for electricity and magnetism with the laws of mechanics by introducing major changes to mechanics close to the speed of light, resulting from analysis based on empirical evidence that the speed of light is independent of the motion of the observer. Discredited the concept of a "luminiferous aether".
• "Does the Inertia of a Body Depend Upon Its Energy Content?" (matter-energy equivalence; received 27 September, published 21 November) - Established the equivalence of matter and energy, E = mc² (and by implication, the ability of gravity to "bend" light), the existence of "rest energy", and the basis of nuclear energy.
Thermodynamic fluctuations and statistical physics
Main articles: Statistical mechanics, thermal fluctuations, and statistical physics
General principles
Theory of relativity and E = mc²
Main article: History of special relativity
Consequences of this include the time-space frame of a moving body appearing to slow down and contract (in the direction of motion) when measured in the frame of the observer. This paper also argued that the idea of a luminiferous aether – one of the leading theoretical entities in physics at the time – was superfluous.
In his paper on mass–energy equivalence, Einstein produced E = mc² from his special relativity equations. Einstein's 1905 work on relativity remained controversial for many years, but was accepted by leading physicists, starting with Max Planck.
Photons and energy quanta
Main articles: Photon and Quantum
Quantized atomic vibrations
Main article: Einstein solid
Adiabatic principle and action-angle variables
Main article: Old quantum theory
Wave–particle duality
Einstein at the Solvay Conference in 1911
Main article: Wave–particle duality
Theory of critical opalescence
Main article: Critical opalescence
Zero-point energy
Main article: Zero-point energy
Einstein's physical intuition led him to note that Planck's oscillator energies had an incorrect zero point. He modified Planck's hypothesis by stating that the lowest energy state of an oscillator is equal to hf/2, half the energy spacing between levels. This argument, which was made in 1913 in collaboration with Otto Stern, was based on the thermodynamics of a diatomic molecule which can split apart into two free atoms.
General relativity and the Equivalence Principle
Main article: History of general relativity
See also: Principle of equivalence, Theory of relativity, and Einstein field equations
Eddington’s photograph of a solar eclipse.
Hole argument and Entwurf theory
Main article: Hole argument
In June 1913, the Entwurf ("draft") theory was the result of these investigations. As its name suggests, it was a sketch of a theory, with the equations of motion supplemented by additional gauge fixing conditions. It was simultaneously less elegant and more difficult than general relativity; after more than two years of intensive work, Einstein abandoned the theory in November 1915, after realizing that the hole argument was mistaken.
Main article: Cosmology
Einstein in his office at the University of Berlin.
Modern quantum theory
Main article: Schrödinger equation
Einstein was displeased with quantum theory and mechanics, despite its acceptance by other physicists, stating that God "does not play dice". Even at his death at the age of 76, he had still not accepted quantum theory. In 1917, at the height of his work on relativity, Einstein published an article in Physikalische Zeitschrift that proposed the possibility of stimulated emission, the physical process that makes possible the maser and the laser. This article showed that the statistics of absorption and emission of light would only be consistent with Planck's distribution law if the emission of light into a mode with n photons would be enhanced statistically compared to the emission of light into an empty mode. This paper was enormously influential in the later development of quantum mechanics, because it was the first paper to show that the statistics of atomic transitions had simple laws. Einstein discovered Louis de Broglie's work, and supported his ideas, which were received skeptically at first. In another major paper from this era, Einstein gave a wave equation for de Broglie waves, which Einstein suggested was the Hamilton–Jacobi equation of mechanics. This paper would inspire Schrödinger's work of 1926.
Bose–Einstein statistics
Main article: Bose–Einstein condensation
Energy momentum pseudotensor
Main article: Stress-energy-momentum pseudotensor
Unified field theory
Main article: Classical unified field theories
Wormholes
Main article: Wormhole
Einstein–Cartan theory
Main article: Einstein–Cartan theory
Equations of motion
Main article: Einstein–Infeld–Hoffmann equations
The theory of general relativity has a fundamental law: the Einstein field equations, which describe how space curves. The geodesic equation, which describes how particles move, may be derived from the Einstein equations.
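For reference, the geodesic equation referred to here takes the standard form

$$\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0,$$

where the Christoffel symbols Γ are built from the metric that solves the Einstein field equations.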
Other investigations
Main article: Einstein's unsuccessful investigations
Einstein conducted other investigations that were unsuccessful and abandoned. These pertain to force, superconductivity, gravitational waves, and other research. Please see the main article for details.
Collaboration with other scientists
In addition to long-time collaborators Leopold Infeld, Nathan Rosen, Peter Bergmann and others, Einstein also had some one-shot collaborations with various scientists.
Einstein–de Haas experiment
Main article: Einstein–de Haas effect
Schrödinger gas model
Einstein refrigerator
Main article: Einstein refrigerator
Bohr versus Einstein
Main article: Bohr–Einstein debates
Einstein and Niels Bohr, 1925
Einstein–Podolsky–Rosen paradox
Main article: EPR paradox
Political and religious views
Main articles: Albert Einstein's political views and Albert Einstein's religious views
Albert Einstein's political views emerged publicly in the middle of the 20th century due to his fame and reputation for genius. Einstein offered, and was called on to give, judgments and opinions on matters often unrelated to theoretical physics or mathematics (see main article).
Einstein's views about religious belief have been collected from interviews and original writings. These views covered Judaism, theological determinism, agnosticism, and humanism. He also wrote much about ethical culture, opting for Spinoza's god over belief in a personal god.
Non-scientific legacy
In popular culture
Main article: Albert Einstein in popular culture
Einstein has been the subject of or inspiration for many novels, films, plays, and works of music. He is a favorite model for depictions of mad scientists and absent-minded professors; his expressive face and distinctive hairstyle have been widely copied and exaggerated. Time magazine's Frederic Golden wrote that Einstein was "a cartoonist's dream come true".
Awards and honors
Main article: Einstein's awards and honors |
ad51aa96619e8cc9
A Brief History of the Electron
Image: an exciton's probability cloud, showing where the electron is most likely to be found around the hole.
By Inés Urdaneta, Physicist at Resonance Science Foundation
Whereas we have no evident direct experience of protons in everyday life, our experience with electrons is quite different. Many of us are probably familiar with the phenomenon of static electricity that bristles our skin when we rub certain materials. We are also probably used to the notion of electricity as a current or flow of electrons that can light a bulb, turn on an electrical device, or even electrocute someone if not handled properly. We are probably also aware that matter is composed of atoms, and that atoms are composed mainly of protons and electrons. Most of our daily experience is governed by electrons and their interactions with light. Electrons also govern the physico-chemical properties of atoms. Interestingly, the inference and discovery of the electron predates the discovery of the atom itself.
The first recorded observation of the phenomenon of static electricity is believed to be that of the pre-Socratic Greek philosopher Thales of Miletus, who noticed it when he rubbed materials such as wood or silk with a piece of amber. This effect became associated with the word electron, which allegedly comes from the Greek word for amber, ἤλεκτρον. While electricity and electromagnetism had been widely explored by many visionaries since the 1600s (including William Gilbert, Otto von Guericke, Robert Boyle, Alessandro Volta, Hans Christian Ørsted, André-Marie Ampère, Michael Faraday, Georg Ohm and James Clerk Maxwell, to name the most well-known), it was the Irish physicist George Stoney who first introduced the concept of a fundamental unit of electricity, coining the term electrine, an atom of electricity, in 1874. In 1891, Stoney adopted the name "electron" for this unit of charge. He made significant contributions not only to the conception and calculation of this unit, but also to cosmological and gas theory physics. His work laid the foundation for the discovery of the electron outside of matter, performed by J.J. Thomson in 1897 at the Cavendish Laboratory at Cambridge University.
George Stoney and his daughters.
The discovery and description of the electron inside the atom (also called a bound electron) involved decades of research by many physicists and chemists. But here, we'll just say that it was a combination of discoveries that brought us the current model of an atom composed of a nucleus (made of protons and neutrons) surrounded by clouds of electrons. These include: 1) the inference of the atom years before by the English chemist John Dalton in the early 1800s, through experiments establishing the law of multiple proportions; 2) the discovery of radioactivity by Becquerel in 1896, and later, the experiments of Pierre and Marie Curie (along with experiments by Rutherford and Geiger); 3) the discovery of the atomic nucleus by Rutherford through his famous gold foil experiment in 1909; and 4) the discovery of the neutron by the physicist James Chadwick in 1932, which defined isotopes as elements whose nuclei have the same number of protons but different numbers of neutrons, among many other studies.
Prior to our current atomic model, the first well-established model for the atom was Niels Bohr's, proposed in 1913; it was the first quantized depiction of atomic structure. This model evolved progressively into what quantum mechanics now describes by a wavefunction: a solution of the famous Schrödinger equation that describes the energy of fundamental subatomic particles like electrons and the evolution in time of their position and velocity.
As we have addressed in The Origin of Quantum Mechanics I and II, quantum mechanics emerged gradually from theories that tried to explain observations which could not be reconciled with classical physics.
The novel and unintuitive interpretations provided by quantum mechanics include a feature of the subatomic world called wave-particle duality, in which both light and subatomic particles are perceived as particles and/or waves, depending on the experimental setup and conditions. Quantum theory has branched into different theories, such as quantum field theory (QFT), quantum electrodynamics (QED), quantum chromodynamics (QCD) and more, which attempt to address all fundamental particles and the fields that govern the interactions between particles.
From the perspective of quantum mechanics, particles are believed to have extremely low mass, so gravity is considered negligible at this scale, and all interactions between these particles are taken to be governed either by the strong force, also called the color force (holding quarks together in a proton), the nuclear force (holding protons together in the nucleus of an atom) or the electromagnetic force (between charges). The problem of how to include gravity in quantum theory (mainly because the total energy-mass of the subatomic particle goes unacknowledged) leaves unresolved the missing link between relativity (the physics of the macroscopic and cosmological scale, in which the mass of celestial bodies, and hence their gravity, plays a crucial role) and quantum mechanics. This missing link is quantum gravity.
The origin of our current Electron model and the atomic spectra of elements
The electron is considered a fundamental particle in the sense that it has not been shown to have an inner structure. QED (quantum electrodynamics), the field of quantum mechanics describing electrons and their interactions with photons, notoriously describes the electron as a zero-dimensional point particle with no volume, so there is no definitive description of the structure of either electrons or photons. We have a very clear idea of their effects and the interactions between them, but very little is known about their nature.
One of the most precise values we have for the electron is its mass, which is determined using Penning traps. These measurements are extremely precise, with a relative uncertainty on the order of 10⁻¹¹. The standard theoretical value for the mass of the electron reproduces the measured CODATA 2018 value and rests on the following expression:
$$m_e = \frac{2Rh}{c\,\alpha^2} \qquad (1)$$

where R is the Rydberg constant, h is Planck's constant, c is the speed of light and α is the fine structure constant. For more details on this derivation, please read the RSF article by Dr. Amira Val Baker called What is an Electron.
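As a quick numerical sanity check, Eq. (1) can be evaluated with CODATA 2018 values of the constants. The following is a minimal sketch (constant values hard-coded for illustration); it should reproduce the accepted electron mass of about 9.109×10⁻³¹ kg.

```python
# Numerical check of Eq. (1): m_e = 2*R*h / (c * alpha^2),
# using CODATA 2018 values for the constants.
R     = 1.0973731568160e7   # Rydberg constant, 1/m
h     = 6.62607015e-34      # Planck constant, J*s (exact in the 2019 SI)
c     = 2.99792458e8        # speed of light, m/s (exact)
alpha = 7.2973525693e-3     # fine structure constant (dimensionless)

m_e = 2 * R * h / (c * alpha**2)
print(f"electron mass ~ {m_e:.7e} kg")  # ~9.1093837e-31 kg
```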
The definition in Eq. (1) shows the combination of fundamental constants that are used to calculate the mass of the electron, and its derivation started with the model for the hydrogen atom (H) proposed by the Danish physicist Niels Bohr in 1913. Bohr's atomic model is the result of his studies of the empirical relationships between the spectral emission lines (in other words, the light emitted at different frequencies, or colors) of the H atom, as measured by Balmer and Rydberg. He found that when he multiplied the frequencies of the lines in the hydrogen spectral series (the Balmer series, figure below) by Planck's constant h, he could calculate the gaps (the scientific term is energy levels) between the various possible energies of the hydrogen atom. In other words, Bohr found that the lines in the figure below fall at frequencies f whose energies hf match the differences between integer-indexed energy levels. The emission spectrum is the fingerprint of an atom.
The emission spectrum of an element is usually given in terms of wavelength, which is inversely proportional to the frequency (λ = c/f).
Figure: Balmer series for hydrogen, showing the emission spectrum of the H atom, which is the fingerprint of the atom. Every element in the periodic table has its own particular emission spectrum. Notice the separation between the lines: Bohr found that these lines fall at frequencies whose energies hf match the differences between integer-indexed energy levels.
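The regularity Bohr exploited is captured by the standard Rydberg formula for hydrogen, with R the Rydberg constant:

$$\frac{1}{\lambda} = R\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right), \qquad n_2 > n_1,$$

where setting n₁ = 2 yields the visible Balmer series shown in the figure.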
Based on this information, Bohr proposed an atomic model consisting of a negatively charged electron that is attracted to the positive charge of the proton in the nucleus by the electrostatic force defined by Coulomb's law. Instead of falling into the positive charge, the electron is held in orbit by the centrifugal force created by its rotation around the nucleus. The only assumption he made was that the mass of the electron is much smaller than that of the proton, and he found that the angular momentum (a measure of the electron's rotational motion around the nucleus) is also quantized in units set by Planck's constant h, producing stable electron orbits, or shells.
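In formulas, Bohr's quantization condition and the energy levels it produces can be written in modern notation as

$$L = m_e v r = n\,\frac{h}{2\pi}, \qquad E_n = -\frac{Rhc}{n^2}, \qquad n = 1, 2, 3, \ldots,$$

where r is the orbit radius; each allowed orbit corresponds to one value of the integer n.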
In Bohr's model of concentric electron shells, electrons are seen as tiny particles that jump from one orbital to another, as seen in the figure below, where an electron jumps from the third orbital (n = 3) to the second orbital (n = 2), emitting one photon (red curved arrow) with frequency ν. In this model, n is always an integer, and it identifies the numerical order of the electron shells as well as the number of photons exchanged. Hence the term quantization: the energy exchange within the atom, or between the electron and light, happens in integer multiples of hν. For example, the change in energy (written ΔE) of an electron jumping from orbital n = 3 to orbital n = 2 is ΔE = (3−2)hν. Since 3−2 = 1, only one photon is emitted; conversely, for the electron to jump from an inner shell (n = 2) to an outer shell (n = 3), it needs to absorb one photon instead of emitting it. Jumps can be nonlinear (jumps over more than one orbital, emitting or absorbing multiple photons), but this requires very intense interaction with light. Such scenarios happen in high-energy situations, in stars for instance, or during experiments using laser fields at high intensity. Nonlinear interactions are very important and difficult to describe.
Since the atom is neutral (has no net charge), the number of protons (positive charges) in the nucleus, Z, equals the number of electrons (negative charges) in the atom. The electrons are placed in orbitals, which are stable. For the H atom, Z = 1, meaning there is only one proton (positive charge +1e) and hence only one electron with charge −e (the green point). In this figure, an electron jumps from orbital n = 3 to orbital n = 2, emitting one photon (red curved arrow) with frequency ν.
Bohr's model of the atom predicted a radius for the H atom with the electron in the ground state (n = 1), and it gained credibility in 1913 with a paper predicting that some anomalous lines in stellar spectra were due to ionized helium, not hydrogen, which the astronomy spectroscopist Alfred Fowler quickly confirmed. Thanks to Dirac's, Heisenberg's, Schrödinger's and many other physicists' developments, Bohr's model has gone through substantial changes and improvements, refining this semi-classical atomic model into the one now described entirely by current quantum theory, which gives the final expression for the electron mass in Eq. (1). Nevertheless, Bohr's model remains a great illustration of the basic principles of atomic structure and its quantization. The Bohr radius (a0) is considered a physical constant, equal to the most probable distance between the nucleus and the electron in a hydrogen atom in its ground state; its value is 5.29177210903(80)×10⁻¹¹ m.
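Both numbers quoted in this section are easy to reproduce. The sketch below (hard-coded CODATA values, for illustration only) computes the Bohr radius from the constants and the wavelength of the n = 3 → n = 2 jump discussed earlier, which lands on the red Balmer line near 656 nm.

```python
import math

# CODATA values (SI units), hard-coded for illustration
hbar = 1.054571817e-34     # reduced Planck constant, J*s
m_e  = 9.1093837015e-31    # electron mass, kg
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
h    = 6.62607015e-34      # Planck constant, J*s
c    = 2.99792458e8        # speed of light, m/s

# Bohr radius: a0 = 4*pi*eps0*hbar^2 / (m_e * e^2)
a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
print(f"Bohr radius a0 ~ {a0:.4e} m")          # ~5.292e-11 m

# Hydrogen ionization energy (~13.6 eV) and the n = 3 -> n = 2 photon
E_ion = m_e * e**4 / (8 * eps0**2 * h**2)      # joules
dE = E_ion * (1/2**2 - 1/3**2)                 # energy gap of the jump
wavelength = h * c / dE
print(f"n=3 -> n=2 wavelength ~ {wavelength*1e9:.1f} nm")  # ~656 nm
```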
The Rydberg constant R was first determined empirically in 1888 by the Swedish physicist Johannes Rydberg as an appropriate parameter for the hydrogen spectral series. Later, in 1913, Niels Bohr showed that in the case of lighter atoms, the Rydberg constant value could be calculated from more fundamental first principles by utilizing his Bohr model. For this reason, his model was rapidly adopted.
On the other hand, the nature of the fine structure constant α found in Eq. (1) is a mystery. We could even say it is the most fundamental constant, as it does not depend on the units used; on the contrary, it seems that all physical properties depend on it. Its value is very close to 1/137, and it can be derived in different contexts. An interesting derivation of α in the context of unified physics is that of the electron charge e divided by the Planck charge q_P:
α = (e / q_P)²
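This relation is straightforward to check numerically; the sketch below (CODATA values hard-coded for illustration) computes the Planck charge from ε₀, ħ and c and recovers α ≈ 1/137.

```python
import math

e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c    = 2.99792458e8       # speed of light, m/s

# Planck charge: q_P = sqrt(4*pi*eps0*hbar*c)
q_P = math.sqrt(4 * math.pi * eps0 * hbar * c)
alpha = (e / q_P)**2
print(f"Planck charge q_P ~ {q_P:.4e} C")        # ~1.876e-18 C
print(f"alpha ~ {alpha:.6e} ~ 1/{1/alpha:.2f}")  # ~1/137.04
```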
Physics Nobel laureate Richard Feynman said of the fine structure constant:
"... It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it. Immediately you would like to know where this number for a coupling comes from: is it related to Pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the "hand of God" wrote that number, and "we don't know how He pushed his pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out, without putting it in secretly!" ― Richard P. Feynman, QED: The Strange Theory of Light and Matter
Richard Feynman at Caltech. Image courtesy of the American Institute of Physics
From all the above, we can appreciate how the exploration and discovery of atomic structure and its interaction with light has led the way to what is considered one of the most precise theoretical equations encountered in physics. However, Eq. (1) is not entirely derived from first principles, and thus it provides little to no insight into what the electron is.
Also, in the standard approach, quantum chromodynamics (QCD), nuclear masses like that of the proton are calculated by considering not only the quark masses but, more importantly, the dynamics of the system. These dynamics are complex to describe due to the number of different interactions involved, which results in a non-linear description of both the nuclear force and the confining or color force (the forces that hold the proton and its constituents together). For this reason, exact calculations of the properties of nucleons and of their constituent parts are extremely difficult; they rely on computational techniques in which probability amplitudes are assigned to each Feynman diagram (interaction diagram) and Monte Carlo simulations (or other, similar iterative perturbative methods) determine the best fit. Moreover, the equations of QCD and QED use at least 17 freely adjustable parameters. The absence of a complete analytical solution has required the development of sophisticated computational techniques to attempt a precise description of interactions in the nucleon. However, despite the development of ever faster supercomputers, QCD calculations have been unable to successfully predict the mass of the proton, and the Higgs mechanism can only account for some 2% of the total mass.
RSF in perspective:
Nassim Haramein's Generalized Holographic Model [1,2] proves, from first principles and without free parameters, that the remaining 98% of the mass is accounted for by the energy of quantum vacuum fluctuations. The Generalized Holographic Model is a solution to quantum gravity that is based on a fundamental surface-to-volume holographic ratio Φ that explains the origin of mass and its connection to energy and forces.
“Defining the fundamental characteristics of particles from first principles is of great importance because it provides information not only about the structure of subatomic particles but also about the source of mass and the nature of spacetime itself.” – Dr. Amira Val Baker
Starting with the premise that an electron cloud can be considered an 'electron' coherent field of information, we must look at the microstructure of the electron system from a generalized holographic approach, which, in previous work, successfully computed the mass of the proton and the precise charge radius of the proton [1,2], in agreement with the latest muonic measurements [3].
Utilizing the generalized holographic approach, Val Baker et al. demonstrate the electron mass solution in terms of the surface-to-volume entropy measured as Planck oscillator information bits. The value obtained agrees with the measured CODATA 2018 value. In this novel first-principles derivation of the mass of the electron, the mass is defined in terms of its holographic surface-to-volume ratio Φ and the relationship between the electric charge at the Planck scale and that at the electron scale.
The new derivation for the mass of the electron extends the holographic mass solution to the hydrogen Bohr atom and to all known elements. As a result, we can now see that atomic structure, mass, and charge emerge from the electromagnetic fluctuations of the Planck quantum vacuum. This new approach generates an accurate value of the mass of the electron and offers an understanding of the physical structure of spacetime at the quantum scale, yielding significant insights into the formation and source of the material world.
Very important parameters of the atom, like the fine structure constant, the Rydberg constant and the proton-to-electron mass ratio, are also obtained as outputs of the model.
[1] N. Haramein, Phys. Rev. Res. Int. 3, 270 (2013)
[2] N. Haramein, e-print (2013)
[3] A. Antognini, F. Nez, K. Schuhmann, F. D. Amaro, F. Biraben, J. M. R. Cardoso, D. S. Covita, A. Dax, S. Dhawan, M. Diepold, L. M. P. Fernandes, A. Giesen, A. L. Gouvea, T. Graf, T. W. Hänsch, P. Indelicato, L. Julien, C-Y. Kao, P. Knowles, F. Kottmann, E-O. Le Bigot, Y-.W Liu, J. A. M. Lopes, L. Ludhova, C. M. B. Monteiro, F. Mulhauser, T. Nebel, P. Rabinowitz, J. M. F. dos Santos, L. A. Schaller, C. Schwob, D. Taqqu, J. F. C. A. Veloso, J. Vogelsang, and R. Pohl, Science 339, 417 (2013)
|
19deabca18f30a6b |
Open questions on emergence in chemistry
Strong emergence is the main form of emergence that has been defended with respect to chemistry, and in particular molecular structure. Here, the author spells out this form of emergence, proposes new ways in which one can further explore the question of emergence, and explains why investigating emergence should be of interest not only to philosophers but to chemists as well.
Are chemical substances, clouds, tigers, humans, tables just the result of interactions between fundamental physical particles? Is there anything else to these things beyond the physical stuff that makes them up? In response to this conundrum, emergence is the idea that there is something more to things than what we can say about them by looking only at their constituents. Emergentism was proposed as an alternative to reductionism, namely the view that everything around us is composed of, and completely determined by, the interactions between fundamental physical particles. Reductionism became particularly popular during the highly fruitful development of physics in the 20th century. In this context, quantum mechanics and statistical mechanics became paradigmatic examples of the success of reduction, as they illustrated that it is possible to describe, explain and predict the behaviour of macroscopic matter in terms of the interactions of its constitutive physical particles.
Emergence in chemistry: the case of molecular structure
Returning to emergence, a variety of case studies from the natural sciences have been invoked in support of and against this idea, including from chemistry. In fact, chemistry is a particularly helpful case through which one can investigate and understand emergence. First, this is because chemistry has a relatively uncontested set of well-established and empirically supported theories and descriptions of phenomena. Secondly, through the formulation of quantum chemistry it has established a well-defined and explicit scientific connection to quantum mechanics, thus providing philosophers robust material that they can use in order to spell out its connection to fundamental physics. Thirdly, unlike biology, chemistry is not burdened with difficult questions around life or the nature of consciousness, thus allowing philosophers to focus on the ways in which chemical entities relate to interactions between fundamental physical particles. (Special thanks to James Ladyman for bringing this last point to my attention.)
Nevertheless, the matter of emergence in chemistry is far from settled. As philosophers of chemistry have shown, there are specific examples of chemical properties whose connection to fundamental physics cannot be straightforwardly understood in terms of either reduction or emergence. The most debated case is that of molecular structure. When one describes a molecule's structure through quantum mechanics (i.e., by solving the relevant molecular Schrödinger equation), one cannot do so unless one presupposes certain facts about the examined structure. Specifically, one needs to apply, among other things, the Born-Oppenheimer approximation, which involves the assumption that the molecule has determinate nuclear positions. While this is justified scientifically by the fact that "the ratio of electronic to nuclear mass is sufficiently small", philosophers have taken this move as an indication of a deeper problem [1]. Specifically, it has been argued that this supports the non-reducibility of chemistry: quantum mechanics, on its own and without presupposing facts about the examined system, cannot derive a description of the system's chemical properties.
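For readers unfamiliar with it, the Born-Oppenheimer approximation factorizes the molecular wavefunction into an electronic part, solved at fixed ('clamped') nuclear positions R, and a nuclear part (a schematic form):

$$\Psi(\mathbf{r}, \mathbf{R}) \approx \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})\,\chi_{\mathrm{nuc}}(\mathbf{R}),$$

and it is precisely the clamping of R that presupposes a determinate nuclear framework, i.e., a structure.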
Various views have been proposed that attempt to explain this alleged failure of reduction. The one which is of interest for the present purposes is that of emergence. For the case of molecular structure this has been most successfully spelled out by Robin Hendry, who argues that molecular structure strongly emerges [2]. On this view, the reason why structure cannot be identified from quantum mechanics (i.e., without making prior assumptions about the molecule's structure) is that structure does not exist at the scale described by quantum mechanics. While structure is partially determined by how the subatomic particles interact with each other, there is something 'over and above' those interactions.
Does this mean that emergence posits mysterious powers that determine a molecule’s structure? No. Emergence does not posit mysterious or additional forces other than the four fundamental forces postulated by physics. It does not even deny that there is a close relation between the chemical properties and the subatomic particles of a molecule (this relation is usually called supervenience). Nevertheless, strong emergence maintains that molecular structure is not fully determined by the interactions of the molecule’s physical parts. This is because structure itself partly determines how the system’s quantum mechanical entities will behave. Put differently, the way a molecule is structured is part of the cause of how its subatomic particles interact. This idea is called downward causation as it posits a causal relation between a molecule’s structure and the interactions of its subatomic particles.
It is in this sense that molecular structure is standardly understood as emergent. And indeed, the empirical evidence that has been brought forward for its support is not to be disregarded. This evidence primarily focuses on the quantum mechanical description of isomers. It has been pointed out that even if one solved the Schrödinger equation from first principles, the relevant ground state of an isomer would in that case correspond not to a specifically observed isomeric structure but to a superposition of all its possible isomeric structures. This is taken to suggest that there is an apparent mismatch between what we empirically observe and what quantum mechanics predicts.
Strong emergence explains this by the fact that a specific isomeric structure is not reduced to the sum of interactions of the molecule's subatomic particles but rather emerges at a different scale. However, Franklin and Seifert have offered an alternative explanation. They argue that this mismatch is an instance of the measurement problem in quantum mechanics, namely the problem that arises due to the incompatibility between unique determinate outcomes and the fact that quantum mechanics "tells us that physical systems are sometimes well described by superposition states in the basis corresponding to some observable quantities" [3].
Based on this, Franklin and Seifert claim that structure has to be re-examined from a completely different perspective: one that takes into account different interpretations of quantum mechanics. This is because each interpretation offers a different understanding of what superpositions are and different solutions to the measurement problem. However, this does not necessitate the rejection of emergence. There are existing views about emergence that have been formulated in relation to particular interpretations of quantum mechanics [4]. So there is still room to defend emergence, though admittedly in a way that is more closely informed by how one deals with foundational problems in quantum mechanics.
Open questions about emergence
Investigating foundational issues in quantum mechanics is not the only way to further explore chemistry's emergence. Chemistry postulates various entities, properties and processes that go beyond atoms and molecules. As such, it offers case studies which have so far been neglected and which could be used to further understand emergence. One such example is chemical reactions. In the philosophy of chemistry, chemical reactions have mostly been examined with respect to the form of explanations they offer to chemists. Interestingly, they have not been extensively examined philosophically in terms of how they relate to fundamental physics. In part this is because of the implicit assumption that, should we understand how atoms and molecules relate to their quantum mechanical constituents, there is no need to examine how chemical reactions (as processes among atoms or molecules) relate to physics. But this is not obviously so. While chemical reactions are indeed in one sense just descriptions of chemical transformations among atoms and molecules, they can also be considered representations of processes that encompass rich and diverse information from chemistry, quantum mechanics, and thermodynamics. How do all these sciences come together, and what does this mean for the relation between chemistry and physics? Raising such questions could lead to novel ways of understanding both reduction and emergence in chemistry.
Another way in which emergence in chemistry can be further explored is by investigating different and previously unexplored forms of emergence [5]. In philosophy of science, many different forms of emergence have been proposed that either have to do with how different theories or representations relate to each other, or with different ways in which entities in the world relate. Applying such understandings of emergence and investigating them from the perspective of chemistry can be beneficial for better understanding not only the relation of chemistry to physics but also the relations among the other sciences as well.
Conclusion: should chemists care about the question of emergence?
A standard challenge raised against philosophical questions such as that of emergence is that they have no bearing on the work done in chemistry. And indeed it seems that whatever philosophers say about the nature of molecular structure or chemistry's relation to quantum physics, little will change in how chemists go about their work. To some extent this is how things should be. But it is worth pointing out that there is value for chemists in thinking about emergence and other philosophical ideas around chemistry. This is for three reasons. First, philosophical investigations around science can provide novel and deeper insight into how science is done and what science tells us about the world. For example, investigating reduction and emergence in chemistry can be the means to appreciate the conceptual intricacies that are involved in correctly delineating chemistry's relation with the other sciences. Secondly, philosophical questions can be invoked to motivate or support the investigation of novel research questions in science. For example, the philosophical implications of the Born-Oppenheimer approximation (mentioned above) can be used to justify scientific studies that focus on whether and how quantum mechanics describes chemical phenomena without applying this approximation. Thirdly, what is often disregarded but is particularly important is the impact of philosophy on chemical education. It has been argued that chemical education can greatly benefit from an informed philosophical analysis of chemistry [6]. In the context of inter-theory relations this is quite evident: background assumptions regarding chemistry's relation to fundamental physics can greatly alter the manner in which one teaches the nature of chemical entities and properties.
1. IUPAC. Compendium of Chemical Terminology: Gold Book, Version 2.3.3, p.179 (2014).
2. Hendry, R. F. in Philosophical and Scientific Perspectives on Downward Causation (eds Paoletti, M. P. and Orilia, F.) 146–163 (Routledge, 2017)
3. Franklin, A., & Seifert, V. A. The problem of molecular structure just is the measurement problem. Br. J. Philos. Sci. 10.1086/715148 (2020).
4. Wallace, D. The emergent multiverse: quantum theory according to the Everett interpretation (Oxford University Press, 2012).
5. Wilson, J. Metaphysical emergence: weak and strong. Metaphysics Contemp. Phys. 251, 306 (2015).
6. Scerri, E. R. The new philosophy of chemistry and its relevance to chemical education. Chem. Educ. Res. Pract. 2, 165–170 (2001).
Acknowledgements
I am grateful to many people who have helped me understand the idea of emergence both within philosophy and with respect to chemistry. I would like to especially thank James Ladyman, Tuomas Tahko, Robin Hendry, Alexander Franklin, Toby Friend, Francesca Bellazzi, Samuel Kimpton-Nye, Nicos Stylianou and Karim Thebault.
Author information
Corresponding author
Correspondence to Vanessa A. Seifert.
Ethics declarations
Competing interests
The author declares no competing interests.
Cite this article
Seifert, V.A. Open questions on emergence in chemistry. Commun Chem 5, 49 (2022).
|
8d7baec32d89aa08 | @article{1902, abstract = {In the 1960s-1980s, determination of bacterial growth rates was an important tool in microbial genetics, biochemistry, molecular biology, and microbial physiology. The exciting technical developments of the 1990s and the 2000s eclipsed that tool; as a result, many investigators today lack experience with growth rate measurements. Recently, investigators in a number of areas have started to use measurements of bacterial growth rates for a variety of purposes. Those measurements have been greatly facilitated by the availability of microwell plate readers that permit the simultaneous measurements on up to 384 different cultures. Only the exponential (logarithmic) portions of the resulting growth curves are useful for determining growth rates, and manual determination of that portion and calculation of growth rates can be tedious for high-throughput purposes. Here, we introduce the program GrowthRates that uses plate reader output files to automatically determine the exponential portion of the curve and to automatically calculate the growth rate, the maximum culture density, and the duration of the growth lag phase. GrowthRates is freely available for Macintosh, Windows, and Linux. We discuss the effects of culture volume, the classical bacterial growth curve, and the differences between determinations in rich media and minimal (mineral salts) media. This protocol covers calibration of the plate reader, growth of culture inocula for both rich and minimal media, and experimental setup. As a guide to reliability, we report typical day-to-day variation in growth rates and variation within experiments with respect to position of wells within the plates.}, author = {Hall, Barry and Acar, Hande and Nandipati, Anna and Barlow, Miriam}, journal = {Molecular Biology and Evolution}, number = {1}, pages = {232 -- 238}, publisher = {Oxford University Press}, title = {{Growth rates made easy}}, doi = {10.1093/molbev/mst187}, volume = {31}, year = {2014}, } @inproceedings{1903, abstract = {We consider two-player zero-sum partial-observation stochastic games on graphs. Based on the information available to the players these games can be classified as follows: (a) general partial-observation (both players have partial view of the game); (b) one-sided partial-observation (one player has partial-observation and the other player has complete-observation); and (c) perfect-observation (both players have complete view of the game). The one-sided partial-observation games subsume the important special case of one-player partial-observation stochastic games (or partial-observation Markov decision processes (POMDPs)). Based on the randomization available for the strategies, (a) the players may not be allowed to use randomization (pure strategies), or (b) they may choose a probability distribution over actions but the actual random choice is external and not visible to the player (actions invisible), or (c) they may use full randomization. We consider all these classes of games with reachability and parity objectives that can express all ω-regular objectives. The analysis problems are classified into the qualitative analysis that asks for the existence of a strategy that ensures the objective with probability 1; and the quantitative analysis that asks for the existence of a strategy that ensures the objective with probability at least λ ∈ (0,1). 
In this talk we will cover a wide range of results: for perfect-observation games; for POMDPs; for one-sided partial-observation games; and for general partial-observation games.}, author = {Chatterjee, Krishnendu}, location = {Budapest, Hungary}, number = {PART 1}, pages = {1 -- 4}, publisher = {Springer}, title = {{Partial-observation stochastic reachability and parity games}}, doi = {10.1007/978-3-662-44522-8_1}, volume = {8634}, year = {2014}, } @article{1904, abstract = {We prove a Strichartz inequality for a system of orthonormal functions, with an optimal behavior of the constant in the limit of a large number of functions. The estimate generalizes the usual Strichartz inequality, in the same fashion as the Lieb-Thirring inequality generalizes the Sobolev inequality. As an application, we consider the Schrödinger equation with a time-dependent potential and we show the existence of the wave operator in Schatten spaces.}, author = {Frank, Rupert and Lewin, Mathieu and Lieb, Élliott and Seiringer, Robert}, journal = {Journal of the European Mathematical Society}, number = {7}, pages = {1507 -- 1526}, publisher = {European Mathematical Society}, title = {{Strichartz inequality for orthonormal functions}}, doi = {10.4171/JEMS/467}, volume = {16}, year = {2014}, } @article{1905, abstract = {The unprecedented polymorphism in the major histocompatibility complex (MHC) genes is thought to be maintained by balancing selection from parasites. However, do parasites also drive divergence at MHC loci between host populations, or do the effects of balancing selection maintain similarities among populations? We examined MHC variation in populations of the livebearing fish Poecilia mexicana and characterized their parasite communities. Poecilia mexicana populations in the Cueva del Azufre system are locally adapted to darkness and the presence of toxic hydrogen sulphide, representing highly divergent ecotypes or incipient species. Parasite communities differed significantly across populations, and populations with higher parasite loads had higher levels of diversity at class II MHC genes. However, despite different parasite communities, marked divergence in adaptive traits and in neutral genetic markers, we found MHC alleles to be remarkably similar among host populations. Our findings indicate that balancing selection from parasites maintains immunogenetic diversity of hosts, but this process does not promote MHC divergence in this system. On the contrary, we suggest that balancing selection on immunogenetic loci may outweigh divergent selection causing divergence, thereby hindering host divergence and speciation. Our findings support the hypothesis that balancing selection maintains MHC similarities among lineages during and after speciation (trans-species evolution).}, author = {Tobler, Michael and Plath, Martin and Riesch, Rüdiger and Schlupp, Ingo and Grasse, Anna V and Munimanda, Gopi and Setzer, C and Penn, Dustin and Moodley, Yoshan}, journal = {Journal of Evolutionary Biology}, number = {5}, pages = {960 -- 974}, publisher = {Wiley-Blackwell}, title = {{Selection from parasites favours immunogenetic diversity but not divergence among locally adapted host populations}}, doi = {10.1111/jeb.12370}, volume = {27}, year = {2014}, } @article{1906, abstract = {In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. 
Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.}, author = {Arikan, Murat and Preiner, Reinhold and Scheiblauer, Claus and Jeschke, Stefan and Wimmer, Michael}, journal = {IEEE Transactions on Visualization and Computer Graphics}, number = {9}, pages = {1280 -- 1292}, publisher = {IEEE}, title = {{Large-scale point-cloud visualization through localized textured surface reconstruction}}, doi = {10.1109/TVCG.2014.2312011}, volume = {20}, year = {2014}, } @inproceedings{1907, abstract = {Most cryptographic security proofs require showing that two systems are indistinguishable. A central tool in such proofs is that of a game, where winning the game means provoking a certain condition, and it is shown that the two systems considered cannot be distinguished unless this condition is provoked. Upper bounding the probability of winning such a game, i.e., provoking this condition, for an arbitrary strategy is usually hard, except in the special case where the best strategy for winning such a game is known to be non-adaptive. A sufficient criterion for ensuring the optimality of non-adaptive strategies is that of conditional equivalence to a system, a notion introduced in [1]. In this paper, we show that this criterion is not necessary to ensure the optimality of non-adaptive strategies by giving two results of independent interest: 1) the optimality of non-adaptive strategies is not preserved under parallel composition; 2) in contrast, conditional equivalence is preserved under parallel composition.}, author = {Demay, Grégory and Gazi, Peter and Maurer, Ueli and Tackmann, Björn}, booktitle = {IEEE International Symposium on Information Theory}, location = {Honolulu, USA}, publisher = {IEEE}, title = {{Optimality of non-adaptive strategies: The case of parallel games}}, doi = {10.1109/ISIT.2014.6875125}, year = {2014}, } @article{1908, abstract = {In large populations, multiple beneficial mutations may be simultaneously spreading. In asexual populations, these mutations must either arise on the same background or compete against each other. In sexual populations, recombination can bring together beneficial alleles from different backgrounds, but tightly linked alleles may still greatly interfere with each other. We show for well-mixed populations that when this interference is strong, the genome can be seen as consisting of many effectively asexual stretches linked together. The rate at which beneficial alleles fix is thus roughly proportional to the rate of recombination and depends only logarithmically on the mutation supply and the strength of selection. 
Our scaling arguments also allow us to predict, with reasonable accuracy, the fitness distribution of fixed mutations when the mutational effect sizes are broad. We focus on the regime in which crossovers occur more frequently than beneficial mutations, as is likely to be the case for many natural populations.}, author = {Weissman, Daniel and Hallatschek, Oskar}, journal = {Genetics}, number = {4}, pages = {1167 -- 1183}, publisher = {Genetics Society of America}, title = {{The rate of adaptation in large sexual populations with linear chromosomes}}, doi = {10.1534/genetics.113.160705}, volume = {196}, year = {2014}, } @article{1909, abstract = {Summary: Phenotypes are often environmentally dependent, which requires organisms to track environmental change. The challenge for organisms is to construct phenotypes using the most accurate environmental cue. Here, we use a quantitative genetic model of adaptation by additive genetic variance, within- and transgenerational plasticity via linear reaction norms and indirect genetic effects respectively. We show how the relative influence on the eventual phenotype of these components depends on the predictability of environmental change (fast or slow, sinusoidal or stochastic) and the developmental lag τ between when the environment is perceived and when selection acts. We then decompose expected mean fitness into three components (variance load, adaptation and fluctuation load) to study the fitness costs of within- and transgenerational plasticity. A strongly negative maternal effect coefficient m minimizes the variance load, but a strongly positive m minimises the fluctuation load. The adaptation term is maximized closer to zero, with positive or negative m preferred under different environmental scenarios. Phenotypic plasticity is higher when τ is shorter and when the environment changes frequently between seasonal extremes. Expected mean population fitness is highest away from highest observed levels of phenotypic plasticity. Within- and transgenerational plasticity act in concert to deliver well-adapted phenotypes, which emphasizes the need to study both simultaneously when investigating phenotypic evolution.}, author = {Ezard, Thomas and Prizak, Roshan and Hoyle, Rebecca}, journal = {Functional Ecology}, number = {3}, pages = {693 -- 701}, publisher = {Wiley-Blackwell}, title = {{The fitness costs of adaptation via phenotypic plasticity and maternal effects}}, doi = {10.1111/1365-2435.12207}, volume = {28}, year = {2014}, } @article{1910, abstract = {Langerhans cells (LCs) are a unique subset of dendritic cells (DCs) that express epithelial adhesion molecules, allowing them to form contacts with epithelial cells and reside in epidermal/epithelial tissues. The dynamic regulation of epithelial adhesion plays a decisive role in the life cycle of LCs. It controls whether LCs remain immature and sessile within the epidermis or mature and egress to initiate immune responses. So far, the molecular machinery regulating epithelial adhesion molecules during LC maturation remains elusive. Here, we generated pure populations of immature human LCs in vitro to systematically probe for gene-expression changes during LC maturation. LCs down-regulate a set of epithelial genes including E-cadherin, while they upregulate the mesenchymal marker N-cadherin known to facilitate cell migration. In addition, N-cadherin is constitutively expressed by monocyte-derived DCs known to exhibit characteristics of both inflammatory-type and interstitial/dermal DCs. 
Moreover, the transcription factors ZEB1 and ZEB2 (ZEB is zinc-finger E-box-binding homeobox) are upregulated in migratory LCs. ZEB1 and ZEB2 have been shown to induce epithelial-to-mesenchymal transition (EMT) and invasive behavior in cancer cells undergoing metastasis. Our results provide the first hint that the molecular EMT machinery might facilitate LC mobilization. Moreover, our study suggests that N-cadherin plays a role during DC migration.}, author = {Konradi, Sabine and Yasmin, Nighat and Haslwanter, Denise and Weber, Michele and Gesslbauer, Bernd and Sixt, Michael K and Strobl, Herbert}, journal = {European Journal of Immunology}, number = {2}, pages = {553 -- 560}, publisher = {Wiley-Blackwell}, title = {{Langerhans cell maturation is accompanied by induction of N-cadherin and the transcriptional regulators of epithelial-mesenchymal transition ZEB1/2}}, doi = {10.1002/eji.201343681}, volume = {44}, year = {2014}, } @article{1911, abstract = {The topological Tverberg theorem has been generalized in several directions by setting extra restrictions on the Tverberg partitions. Restricted Tverberg partitions, defined by the idea that certain points cannot be in the same part, are encoded with graphs. When two points are adjacent in the graph, they are not in the same part. If the restrictions are too harsh, then the topological Tverberg theorem fails. The colored Tverberg theorem corresponds to graphs constructed as disjoint unions of small complete graphs. Hell studied the case of paths and cycles. In graph theory these partitions are usually viewed as graph colorings. As explored by Aharoni, Haxell, Meshulam and others there are fundamental connections between several notions of graph colorings and topological combinatorics. For ordinary graph colorings it is enough to require that the number of colors q satisfy q>Δ, where Δ is the maximal degree of the graph. It was proven by the first author using equivariant topology that if q>Δ² then the topological Tverberg theorem still works. It is conjectured that q>KΔ is also enough for some constant K, and in this paper we prove a fixed-parameter version of that conjecture. The required topological connectivity results are proven with shellability, which also strengthens some previous partial results where the topological connectivity was proven with the nerve lemma.}, author = {Engström, Alexander and Noren, Patrik}, journal = {Discrete & Computational Geometry}, number = {1}, pages = {207 -- 220}, publisher = {Springer}, title = {{Tverberg's Theorem and Graph Coloring}}, doi = {10.1007/s00454-013-9556-3}, volume = {51}, year = {2014}, } @article{1912, abstract = {Kupffer's vesicle (KV) is the zebrafish organ of laterality, patterning the embryo along its left-right (LR) axis. Regional differences in cell shape within the lumen-lining KV epithelium are essential for its LR patterning function. However, the processes by which KV cells acquire their characteristic shapes are largely unknown. Here, we show that the notochord induces regional differences in cell shape within KV by triggering extracellular matrix (ECM) accumulation adjacent to anterior-dorsal (AD) regions of KV. This localized ECM deposition restricts apical expansion of lumen-lining epithelial cells in AD regions of KV during lumen growth. 
Our study provides mechanistic insight into the processes by which KV translates global embryonic patterning into regional cell shape differences required for its LR symmetry-breaking function.}, author = {Compagnon, Julien and Barone, Vanessa and Rajshekar, Srivarsha and Kottmeier, Rita and Pranjic-Ferscha, Kornelija and Behrndt, Martin and Heisenberg, Carl-Philipp J}, journal = {Developmental Cell}, number = {6}, pages = {774 -- 783}, publisher = {Cell Press}, title = {{The notochord breaks bilateral symmetry by controlling cell shapes in the Zebrafish laterality organ}}, doi = {10.1016/j.devcel.2014.11.003}, volume = {31}, year = {2014}, } @article{1913, abstract = {Deposits of phosphorylated tau protein and convergence of pathology in the hippocampus are the hallmarks of neurodegenerative tauopathies. Thus we aimed to evaluate whether regional and cellular vulnerability patterns in the hippocampus distinguish tauopathies or are influenced by their concomitant presence. Methods: We created a heat map of phospho-tau (AT8) immunoreactivity patterns in 24 hippocampal subregions/layers in individuals with Alzheimer's disease (AD)-related neurofibrillary degeneration (n = 40), Pick's disease (n = 8), progressive supranuclear palsy (n = 7), corticobasal degeneration (n = 6), argyrophilic grain disease (AGD, n = 18), globular glial tauopathy (n = 5), and tau-astrogliopathy of the elderly (n = 10). AT8 immunoreactivity patterns were compared by mathematical analysis. Results: Our study reveals disease-specific hot spots and regional selective vulnerability for these disorders. The pattern of hippocampal AD-related tau pathology is strongly influenced by concomitant AGD. Mathematical analysis reveals that hippocampal involvement in primary tauopathies is distinguishable from early-stage AD-related neurofibrillary degeneration. Conclusion: Our data demonstrate disease-specific AT8 immunoreactivity patterns and hot spots in the hippocampus even in tauopathies, which primarily do not affect the hippocampus. These hot spots can be shifted to other regions by the co-occurrence of tauopathies like AGD. Our observations support the notion that globular glial tauopathies and tau-astrogliopathy of the elderly are distinct entities.}, author = {Milenković, Ivan and Petrov, Tatjana and Kovács, Gábor}, journal = {Dementia and Geriatric Cognitive Disorders}, number = {5-6}, pages = {375 -- 388}, publisher = {Karger}, title = {{Patterns of hippocampal tau pathology differentiate neurodegenerative dementias}}, doi = {10.1159/000365548}, volume = {38}, year = {2014}, } @article{1914, abstract = {Targeting membrane proteins for degradation requires the sequential action of ESCRT sub-complexes ESCRT-0 to ESCRT-III. Although this machinery is generally conserved among kingdoms, plants lack the essential ESCRT-0 components. A new report closes this gap by identifying a novel protein family that substitutes for ESCRT-0 function in plants.}, author = {Sauer, Michael and Friml, Jirí}, journal = {Current Biology}, number = {1}, pages = {R27 -- R29}, publisher = {Cell Press}, title = {{Plant biology: Gatekeepers of the road to protein perdition}}, doi = {10.1016/j.cub.2013.11.019}, volume = {24}, year = {2014}, } @article{1915, abstract = {ROPs (Rho of plants) belong to a large family of plant-specific Rho-like small GTPases that function as essential molecular switches to control diverse cellular processes including cytoskeleton organization, cell polarization, cytokinesis, cell differentiation and vesicle trafficking. 
Although the machineries of vesicle trafficking and cell polarity in plants have been individually well addressed, how ROPs co-ordinate those processes is still largely unclear. Recent progress has been made towards an understanding of the coordination of ROP signalling and trafficking of PIN (PINFORMED) transporters for the plant hormone auxin in both root and leaf pavement cells. PIN transporters constantly shuttle between the endosomal compartments and the polar plasma membrane domains, therefore the modulation of PIN-dependent auxin transport between cells is a main developmental output of ROP-regulated vesicle trafficking. The present review focuses on these cellular mechanisms, especially the integration of ROP-based vesicle trafficking and plant cell polarity.}, author = {Chen, Xu and Friml, Jirí}, journal = {Biochemical Society Transactions}, number = {1}, pages = {212 -- 218}, publisher = {Portland Press}, title = {{Rho-GTPase-regulated vesicle trafficking in plant cell polarity}}, doi = {10.1042/BST20130269}, volume = {42}, year = {2014}, } @article{1916, abstract = {Hereditary spastic paraplegias (HSPs) are neurodegenerative motor neuron diseases characterized by progressive age-dependent loss of corticospinal motor tract function. Although the genetic basis is partly understood, only a fraction of cases can receive a genetic diagnosis, and a global view of HSP is lacking. By using whole-exome sequencing in combination with network analysis, we identified 18 previously unknown putative HSP genes and validated nearly all of these genes functionally or genetically. The pathways highlighted by these mutations link HSP to cellular transport, nucleotide metabolism, and synapse and axon development. Network analysis revealed a host of further candidate genes, of which three were mutated in our cohort. Our analysis links HSP to other neurodegenerative disorders and can facilitate gene discovery and mechanistic understanding of disease.}, author = {Novarino, Gaia and Fenstermaker, Ali and Zaki, Maha and Hofree, Matan and Silhavy, Jennifer and Heiberg, Andrew and Abdellateef, Mostafa and Rosti, Başak and Scott, Eric and Mansour, Lobna and Masri, Amira and Kayserili, Hülya and Al Aama, Jumana and Abdel Salam, Ghada and Karminejad, Ariana and Kara, Majdi and Kara, Bülent and Bozorgmehri, Bita and Ben Omran, Tawfeg and Mojahedi, Faezeh and Mahmoud, Iman and Bouslam, Naïma and Bouhouche, Ahmed and Benomar, Ali and Hanein, Sylvain and Raymond, Laure and Forlani, Sylvie and Mascaro, Massimo and Selim, Laila and Shehata, Nabil and Al Allawi, Nasir and Bindu, Parayil and Azam, Matloob and Günel, Murat and Caglayan, Ahmet and Bilgüvar, Kaya and Tolun, Aslihan and Issa, Mahmoud and Schroth, Jana and Spencer, Emily and Rosti, Rasim and Akizu, Naiara and Vaux, Keith and Johansen, Anide and Koh, Alice and Megahed, Hisham and Dürr, Alexandra and Brice, Alexis and Stévanin, Giovanni and Gabriel, Stacy and Ideker, Trey and Gleeson, Joseph}, journal = {Science}, number = {6170}, pages = {506 -- 511}, publisher = {American Association for the Advancement of Science}, title = {{Exome sequencing links corticospinal motor neuron disease to common neurodegenerative disorders}}, doi = {10.1126/science.1247363}, volume = {343}, year = {2014}, } @article{1917, abstract = {Auxin-binding protein 1 (ABP1) was discovered nearly 40 years ago and was shown to be essential for plant development and morphogenesis, but its mode of action remains unclear. 
Here, we report that the plasma membrane-localized transmembrane kinase (TMK) receptor-like kinases interact with ABP1 and transduce the auxin signal to activate plasma membrane-associated ROPs [Rho-like guanosine triphosphatases (GTPases) from plants], leading to changes in the cytoskeleton and the shape of leaf pavement cells in Arabidopsis. The interaction between ABP1 and TMK at the cell surface is induced by auxin and requires ABP1 sensing of auxin. These findings show that TMK proteins and ABP1 form a cell surface auxin perception complex that activates ROP signaling pathways, regulating nontranscriptional cytoplasmic responses and associated fundamental processes.}, author = {Xu, Tongda and Dai, Ning and Chen, Jisheng and Nagawa, Shingo and Cao, Min and Li, Hongjiang and Zhou, Zimin and Chen, Xu and De Rycke, Riet and Rakusová, Hana and Wang, Wen and Jones, Alan and Friml, Jirí and Patterson, Sara and Bleecker, Anthony and Yang, Zhenbiao}, journal = {Science}, number = {6174}, pages = {1025 -- 1028}, publisher = {American Association for the Advancement of Science}, title = {{Cell surface ABP1-TMK auxin sensing complex activates ROP GTPase signaling}}, doi = {10.1126/science.1245125}, volume = {343}, year = {2014}, } @article{1918, abstract = {As the nuclear charge Z is continuously decreased, an N-electron atom undergoes a binding-unbinding transition. We investigate whether the electrons remain bound and whether the radius of the system stays finite as the critical value Zc is approached. Existence of a ground state at Zc is shown under the condition Zc < N-K, where K is the maximal number of electrons that can be removed at Zc without changing the energy.}, author = {Bellazzini, Jacopo and Frank, Rupert and Lieb, Elliott and Seiringer, Robert}, journal = {Reviews in Mathematical Physics}, number = {1}, publisher = {World Scientific Publishing}, title = {{Existence of ground states for negative ions at the binding threshold}}, doi = {10.1142/S0129055X13500219}, volume = {26}, year = {2014}, } @article{1919, abstract = {Long-lasting memories are formed when the stimulus is temporally distributed (spacing effect). However, the synaptic mechanisms underlying this robust phenomenon and the precise time course of the synaptic modifications that occur during learning remain unclear. Here we examined the adaptation of horizontal optokinetic response in mice that underwent 1 h of massed and spaced training at varying intervals. Despite similar acquisition by all training protocols, 1 h of spacing produced the highest memory retention at 24 h, which lasted for 1 mo. The distinct kinetics of memory are strongly correlated with the reduction of floccular parallel fiber-Purkinje cell synapses but not with AMPA receptor (AMPAR) number and synapse size. After the spaced training, we observed 25%, 23%, and 12% reduction in AMPAR density, synapse size, and synapse number, respectively. Four hours after the spaced training, half of the synapses and Purkinje cell spines had been eliminated, whereas AMPAR density and synapse size were recovered in remaining synapses. Surprisingly, massed training also produced long-term memory and halving of synapses; however, this occurred slowly over days, and the memory lasted for only 1 wk.
This distinct kinetics of structural plasticity may serve as a basis for unique temporal profiles in the formation and decay of memory with or without intervals.}, author = {Aziz, Wajeeha and Wang, Wen and Kesaf, Sebnem and Mohamed, Alsayed and Fukazawa, Yugo and Shigemoto, Ryuichi}, journal = {PNAS}, number = {1}, pages = {E194 -- E202}, publisher = {National Academy of Sciences}, title = {{Distinct kinetics of synaptic structural plasticity, memory formation, and memory decay in massed and spaced learning}}, doi = {10.1073/pnas.1303317110}, volume = {111}, year = {2014}, } @article{1920, abstract = {Cerebellar motor learning is suggested to be caused by long-term plasticity of excitatory parallel fiber-Purkinje cell (PF-PC) synapses associated with changes in the number of synaptic AMPA-type glutamate receptors (AMPARs). However, whether the AMPARs decrease or increase in individual PF-PC synapses occurs in physiological motor learning and accounts for memory that lasts over days remains elusive. We combined quantitative SDS-digested freeze-fracture replica labeling for AMPAR and physical dissector electron microscopy with a simple model of cerebellar motor learning, adaptation of horizontal optokinetic response (HOKR) in mouse. After 1-h training of HOKR, short-term adaptation (STA) was accompanied by a transient decrease in AMPARs by 28% in target PF-PC synapses. STA was well correlated with AMPAR decrease in individual animals and both STA and AMPAR decrease recovered to basal levels within 24 h. Surprisingly, long-term adaptation (LTA) after five consecutive daily trainings of 1-h HOKR did not alter the number of AMPARs in PF-PC synapses but caused gradual and persistent synapse elimination by 45%, with corresponding PC spine loss by the fifth training day. Furthermore, recovery of LTA after 2 wk was well correlated with an increase of PF-PC synapses to the control level. Our findings indicate that the AMPARs decrease in PF-PC synapses and the elimination of these synapses are in vivo engrams in short- and long-term motor learning, respectively, showing a unique type of synaptic plasticity that may contribute to memory consolidation.}, author = {Wang, Wen and Nakadate, Kazuhiko and Masugi Tokita, Miwako and Shutoh, Fumihiro and Aziz, Wajeeha and Tarusawa, Etsuko and Lörincz, Andrea and Molnár, Elek and Kesaf, Sebnem and Li, Yunqing and Fukazawa, Yugo and Nagao, Soichi and Shigemoto, Ryuichi}, journal = {PNAS}, number = {1}, pages = {E188 -- E193}, publisher = {National Academy of Sciences}, title = {{Distinct cerebellar engrams in short-term and long-term motor learning}}, doi = {10.1073/pnas.1315541111}, volume = {111}, year = {2014}, } @article{1921, abstract = {Cell polarity manifested by asymmetric distribution of cargoes, such as receptors and transporters, within the plasma membrane (PM) is crucial for essential functions in multicellular organisms. In plants, cell polarity (re)establishment is intimately linked to patterning processes. Despite the importance of cell polarity, its underlying mechanisms are still largely unknown, including the definition and distinctiveness of the polar domains within the PM. Here, we show in Arabidopsis thaliana that the signaling membrane components, the phosphoinositides phosphatidylinositol 4-phosphate (PtdIns4P) and phosphatidylinositol 4,5-bisphosphate [PtdIns(4,5)P2] as well as PtdIns4P 5-kinases mediating their interconversion, are specifically enriched at apical and basal polar plasma membrane domains.
The PtdIns4P 5-kinases PIP5K1 and PIP5K2 are redundantly required for polar localization of specifically apical and basal cargoes, such as PIN-FORMED transporters for the plant hormone auxin. As a consequence of the polarity defects, instructive auxin gradients as well as embryonic and postembryonic patterning are severely compromised. Furthermore, auxin itself regulates PIP5K transcription and PtdIns4P and PtdIns(4,5)P2 levels, in particular their association with polar PM domains. Our results provide insight into the polar domain-delineating mechanisms in plant cells that depend on apical and basal distribution of membrane lipids and are essential for embryonic and postembryonic patterning.}, author = {Tejos, Ricardo and Sauer, Michael and Vanneste, Steffen and Palacios-Gomez, Miriam and Li, Hongjiang and Heilmann, Mareike and Van Wijk, Ringo and Vermeer, Joop and Heilmann, Ingo and Munnik, Teun and Friml, Jirí}, journal = {Plant Cell}, number = {5}, pages = {2114 -- 2128}, publisher = {American Society of Plant Biologists}, title = {{Bipolar plasma membrane distribution of phosphoinositides and their requirement for auxin-mediated cell polarity and patterning in Arabidopsis}}, doi = {10.1105/tpc.114.126185}, volume = {26}, year = {2014}, } @article{1922, abstract = {Germination of Arabidopsis seeds in darkness induces apical hook development, based on a tightly regulated differential growth coordinated by a multiple hormone cross-talk. Here, we endeavoured to clarify the function of brassinosteroids (BRs) and their cross-talk with ethylene in hook development. An automated infrared imaging system was developed to study the kinetics of hook development in etiolated Arabidopsis seedlings. To ascertain the photomorphogenic control of hook opening, the system was equipped with an automatic light dimmer. We demonstrate that ethylene and BRs are indispensable for hook formation and maintenance. Ethylene regulation of hook formation functions partly through BRs, with BR feedback inhibition of ethylene action. Conversely, BR-mediated extension of hook maintenance functions partly through ethylene. Furthermore, we revealed that a short light pulse is sufficient to induce rapid hook opening. Our dynamic infrared imaging system allows high-resolution, kinetic imaging of up to 112 seedlings in a single experimental run. At this high throughput, it is ideally suited to rapidly gain insight into pathway networks. We demonstrate that BRs and ethylene cooperatively regulate apical hook development in a phase-dependent manner. Furthermore, we show that light is a predominant regulator of hook opening, inhibiting ethylene- and BR-mediated postponement of hook opening.}, author = {Smet, Dajo and Žádníková, Petra and Vandenbussche, Filip and Benková, Eva and Van Der Straeten, Dominique}, journal = {New Phytologist}, number = {4}, pages = {1398 -- 1411}, publisher = {Wiley-Blackwell}, title = {{Dynamic infrared imaging analysis of apical hook development in Arabidopsis: The case of brassinosteroids}}, doi = {10.1111/nph.12751}, volume = {202}, year = {2014}, } @article{1923, abstract = {We derive the equations for a thin, axisymmetric elastic shell subjected to an internal active stress giving rise to active tension and moments within the shell. We discuss the stability of a cylindrical elastic shell and its response to a localized change in internal active stress.
This description is relevant for the cellular actomyosin cortex, a thin shell at the cell surface that behaves elastically on short timescales and is subjected to active internal forces arising from myosin molecular motor activity. We show that the recent observations of cell deformation following detachment of adherent cells (Maître J-L et al 2012 Science 338 253-6) are well accounted for by this mechanical description. The elastic and bending moduli of the actin cortex can be obtained from a quantitative analysis of cell shapes observed in these experiments. Our approach thus provides a non-invasive, imaging-based method for the extraction of cellular physical parameters.}, author = {Berthoumieux, Hélène and Maître, Jean-Léon and Heisenberg, Carl-Philipp J and Paluch, Ewa and Julicher, Frank and Salbreux, Guillaume}, journal = {New Journal of Physics}, publisher = {IOP Publishing Ltd.}, title = {{Active elastic thin shell theory for cellular deformations}}, doi = {10.1088/1367-2630/16/6/065005}, volume = {16}, year = {2014}, } @article{1924, abstract = {Stomata are two-celled valves that control epidermal pores whose spacing optimizes shoot-atmosphere gas exchange. They develop from protodermal cells after unequal divisions followed by an equal division and differentiation. The concentration of the hormone auxin, a master plant developmental regulator, is tightly controlled in time and space, but its role, if any, in stomatal formation is obscure. Here dynamic changes of auxin activity during stomatal development are monitored using auxin input (DII-VENUS) and output (DR5:VENUS) markers by time-lapse imaging. A decrease in auxin levels in the smaller daughter cell after unequal division presages the acquisition of a guard mother cell fate whose equal division produces the two guard cells. Thus, stomatal patterning requires auxin pathway control of stem cell compartment size, as well as auxin depletion that triggers a developmental switch from unequal to equal division.}, author = {Le, Jie and Liu, Xuguang and Yang, Kezhen and Chen, Xiaolan and Zhu, Lingling and Wang, Hongzhe and Wang, Ming and Vanneste, Steffen and Morita, Miyo and Tasaka, Masao and Ding, Zhaojun and Friml, Jirí and Beeckman, Tom and Sack, Fred}, journal = {Nature Communications}, publisher = {Nature Publishing Group}, title = {{Auxin transport and activity regulate stomatal patterning and development}}, doi = {10.1038/ncomms4090}, volume = {5}, year = {2014}, } @article{1925, abstract = {In the past decade, carbon nanotubes (CNTs) have been widely studied as a potential drug-delivery system, especially with functionality for cellular targeting. Yet, little is known about the actual process of docking to cell receptors and transport dynamics after internalization. Here we performed single-particle studies of folic acid (FA)-mediated CNT binding to human carcinoma cells and their transport inside the cytosol. In particular, we employed molecular recognition force spectroscopy, an atomic force microscopy based method, to visualize and quantify docking of FA functionalized CNTs to FA binding receptors in terms of binding probability and binding force. We then traced individual fluorescently labeled, FA functionalized CNTs after specific uptake, and created a dynamic 'roadmap' that clearly showed trajectories of directed diffusion and areas of nanotube confinement in the cytosol.
Our results demonstrate the potential of a single-molecule approach for investigation of drug-delivery vehicles and their targeting capacity.}, author = {Lamprecht, Constanze and Plochberger, Birgit and Ruprecht, Verena and Wieser, Stefan and Rankl, Christian and Heister, Elena and Unterauer, Barbara and Brameshuber, Mario and Danzberger, Jürgen and Lukanov, Petar and Flahaut, Emmanuel and Schütz, Gerhard and Hinterdorfer, Peter and Ebner, Andreas}, journal = {Nanotechnology}, number = {12}, publisher = {IOP Publishing}, title = {{A single-molecule approach to explore binding uptake and transport of cancer cell targeting nanotubes}}, doi = {10.1088/0957-4484/25/12/125704}, volume = {25}, year = {2014}, } @article{1926, abstract = {We consider cross products of finite graphs with a class of trees that have arbitrarily but finitely long line segments, such as the Fibonacci tree. Such cross products are called tree-strips. We prove that for small disorder random Schrödinger operators on such tree-strips have purely absolutely continuous spectrum in a certain set.}, author = {Sadel, Christian}, journal = {Mathematical Physics, Analysis and Geometry}, number = {3-4}, pages = {409 -- 440}, publisher = {Springer}, title = {{Absolutely continuous spectrum for random Schrödinger operators on the Fibonacci and similar Tree-strips}}, doi = {10.1007/s11040-014-9163-4}, volume = {17}, year = {2014}, } @article{1928, abstract = {In infectious disease epidemiology the basic reproductive ratio, R0, is defined as the average number of new infections caused by a single infected individual in a fully susceptible population. Many models describing competition for hosts between non-interacting pathogen strains in an infinite population lead to the conclusion that selection favors invasion of new strains if and only if they have higher R0 values than the resident. Here we demonstrate that this picture fails in finite populations. Using a simple stochastic SIS model, we show that in general there is no analogous optimization principle. We find that successive invasions may in some cases lead to strains that infect a smaller fraction of the host population, and that mutually invasible pathogen strains exist. In the limit of weak selection we demonstrate that an optimization principle does exist, although it differs from R0 maximization. For strains with very large R0, we derive an expression for this local fitness function and use it to establish a lower bound for the error caused by neglecting stochastic effects. 
Furthermore, we apply this weak selection limit to investigate the selection dynamics in the presence of a trade-off between the virulence and the transmission rate of a pathogen.}, author = {Humplik, Jan and Hill, Alison and Nowak, Martin}, journal = {Journal of Theoretical Biology}, pages = {149 -- 162}, publisher = {Elsevier}, title = {{Evolutionary dynamics of infectious diseases in finite populations}}, doi = {10.1016/j.jtbi.2014.06.039}, volume = {360}, year = {2014}, } @article{1929, abstract = {We propose an algorithm for the generalization of cartographic objects that can be used to represent maps on different scales.}, author = {Alexeev, V V and Bogaevskaya, V G and Preobrazhenskaya, M M and Ukhalov, A Y and Edelsbrunner, Herbert and Yakimova, Olga}, journal = {Journal of Mathematical Sciences (United States)}, number = {6}, pages = {754 -- 760}, publisher = {Springer}, title = {{An algorithm for cartographic generalization that preserves global topology}}, doi = {10.1007/s10958-014-2165-8}, volume = {203}, year = {2014}, } @article{1930, abstract = {Data acquisition, numerical inaccuracies, and sampling often introduce noise in measurements and simulations. Removing this noise is often necessary for efficient analysis and visualization of this data, yet many denoising techniques change the minima and maxima of a scalar field. For example, the extrema can appear or disappear, spatially move, and change their value. This can lead to wrong interpretations of the data, e.g., when the maximum temperature over an area is falsely reported being a few degrees cooler because the denoising method is unaware of these features. Recently, a topological denoising technique based on a global energy optimization was proposed, which allows the topology-controlled denoising of 2D scalar fields. While this method preserves the minima and maxima, it is constrained by the size of the data. We extend this work to large 2D data and medium-sized 3D data by introducing a novel domain decomposition approach. It allows processing small patches of the domain independently while still avoiding the introduction of new critical points. Furthermore, we propose an iterative refinement of the solution, which decreases the optimization energy compared to the previous approach and therefore gives smoother results that are closer to the input. We illustrate our technique on synthetic and real-world 2D and 3D data sets that highlight potential applications.}, author = {Günther, David and Jacobson, Alec and Reininghaus, Jan and Seidel, Hans and Sorkine Hornung, Olga and Weinkauf, Tino}, journal = {IEEE Transactions on Visualization and Computer Graphics}, number = {12}, pages = {2585 -- 2594}, publisher = {IEEE}, title = {{Fast and memory-efficient topological denoising of 2D and 3D scalar fields}}, doi = {10.1109/TVCG.2014.2346432}, volume = {20}, year = {2014}, } @article{1931, abstract = {A wealth of experimental evidence suggests that working memory circuits preferentially represent information that is behaviorally relevant. Still, we are missing a mechanistic account of how these representations come about. Here we provide a simple explanation for a range of experimental findings, in light of prefrontal circuits adapting to task constraints by reward-dependent learning. In particular, we model a neural network shaped by reward-modulated spike-timing dependent plasticity (r-STDP) and homeostatic plasticity (intrinsic excitability and synaptic scaling).
We show that the experimentally-observed neural representations naturally emerge in an initially unstructured circuit as it learns to solve several working memory tasks. These results point to a critical, and previously unappreciated, role for reward-dependent learning in shaping prefrontal cortex activity.}, author = {Savin, Cristina and Triesch, Jochen}, journal = {Frontiers in Computational Neuroscience}, number = {MAY}, publisher = {Frontiers Research Foundation}, title = {{Emergence of task-dependent representations in working memory circuits}}, doi = {10.3389/fncom.2014.00057}, volume = {8}, year = {2014}, } @article{1932, abstract = {The existence of complex (multiple-step) genetic adaptations that are "irreducible" (i.e., all partial combinations are less fit than the original genotype) is one of the longest standing problems in evolutionary biology. In standard genetics parlance, these adaptations require the crossing of a wide adaptive valley of deleterious intermediate stages. Here, we demonstrate, using a simple model, that evolution can cross wide valleys to produce "irreducibly complex" adaptations by making use of previously cryptic mutations. When revealed by an evolutionary capacitor, previously cryptic mutants have higher initial frequencies than do new mutations, bringing them closer to a valley-crossing saddle in allele frequency space. Moreover, simple combinatorics implies an enormous number of candidate combinations exist within available cryptic genetic variation. We model the dynamics of crossing of a wide adaptive valley after a capacitance event using both numerical simulations and analytical approximations. Although individual valley crossing events become less likely as valleys widen, by taking the combinatorics of genotype space into account, we see that revealing cryptic variation can cause the frequent evolution of complex adaptations.}, author = {Trotter, Meredith and Weissman, Daniel and Peterson, Grant and Peck, Kayla and Masel, Joanna}, journal = {Evolution}, number = {12}, pages = {3357 -- 3367}, publisher = {Wiley-Blackwell}, title = {{Cryptic genetic variation can make "irreducible complexity" a common mode of adaptation in sexual populations}}, doi = {10.1111/evo.12517}, volume = {68}, year = {2014}, } @article{1933, abstract = {The development of the vertebrate brain requires an exquisite balance between proliferation and differentiation of neural progenitors. Notch signaling plays a pivotal role in regulating this balance, yet the interaction between signaling and receiving cells remains poorly understood. We have found that numerous nascent neurons and/or intermediate neurogenic progenitors expressing the ligand of Notch retain apical endfeet transiently at the ventricular lumen that form adherens junctions (AJs) with the endfeet of progenitors. Forced detachment of the apical endfeet of those differentiating cells by disrupting AJs resulted in precocious neurogenesis that was preceded by the downregulation of Notch signaling. Both Notch1 and its ligand Dll1 are distributed around AJs in the apical endfeet, and these proteins physically interact with ZO-1, a constituent of the AJ. Furthermore, live imaging of a fluorescently tagged Notch1 demonstrated its trafficking from the apical endfoot to the nucleus upon cleavage. 
Our results identified the apical endfoot as the central site of active Notch signaling to securely prohibit inappropriate differentiation of neural progenitors.}, author = {Hatakeyama, Jun and Wakamatsu, Yoshio and Nagafuchi, Akira and Kageyama, Ryoichiro and Shigemoto, Ryuichi and Shimamura, Kenji}, journal = {Development}, number = {8}, pages = {1671 -- 1682}, publisher = {Company of Biologists}, title = {{Cadherin-based adhesions in the apical endfoot are required for active Notch signaling to control neurogenesis in vertebrates}}, doi = {10.1242/dev.102988}, volume = {141}, year = {2014}, } @article{1934, abstract = {The plant hormones auxin and cytokinin mutually coordinate their activities to control various aspects of development [1-9], and their crosstalk occurs at multiple levels [10, 11]. Cytokinin-mediated modulation of auxin transport provides an efficient means to regulate auxin distribution in plant organs. Here, we demonstrate that cytokinin does not merely control the overall auxin flow capacity, but might also act as a polarizing cue and control the auxin stream directionality during plant organogenesis. Cytokinin enhances the PIN-FORMED1 (PIN1) auxin transporter depletion at specific polar domains, thus rearranging the cellular PIN polarities and directly regulating the auxin flow direction. This selective cytokinin sensitivity correlates with the PIN protein phosphorylation degree. PIN1 phosphomimicking mutations, as well as enhanced phosphorylation in plants with modulated activities of PIN-specific kinases and phosphatases, desensitize PIN1 to cytokinin. Our results reveal a conceptually novel, cytokinin-driven polarization mechanism that operates in developmental processes involving rapid auxin stream redirection, such as lateral root organogenesis, in which a gradual PIN polarity switch defines the growth axis of the newly formed organ.}, author = {Marhavy, Peter and Duclercq, Jérôme and Weller, Benjamin and Feraru, Elena and Bielach, Agnieszka and Offringa, Remko and Friml, Jirí and Schwechheimer, Claus and Murphy, Angus and Benková, Eva}, journal = {Current Biology}, number = {9}, pages = {1031 -- 1037}, publisher = {Cell Press}, title = {{Cytokinin controls polarity of PIN1-dependent Auxin transport during lateral root organogenesis}}, doi = {10.1016/j.cub.2014.04.002}, volume = {24}, year = {2014}, } @article{1935, abstract = {We consider Ising models in d = 2 and d = 3 dimensions with nearest neighbor ferromagnetic and long-range antiferromagnetic interactions, the latter decaying as (distance)^(-p), p > 2d, at large distances. If the strength J of the ferromagnetic interaction is larger than a critical value Jc, then the ground state is homogeneous. It has been conjectured that when J is smaller than but close to Jc, the ground state is periodic and striped, with stripes of constant width h = h(J), and h → ∞ as J → Jc−. (In d = 3 stripes mean slabs, not columns.) Here we rigorously prove that, if we normalize the energy in such a way that the energy of the homogeneous state is zero, then the ratio e0(J)/eS(J) tends to 1 as J → Jc−, with eS(J) being the energy per site of the optimal periodic striped/slabbed state and e0(J) the actual ground state energy per site of the system.
Our proof comes with explicit bounds on the difference e0(J) − eS(J) at small but positive Jc − J, and also shows that in this parameter range the ground state is striped/slabbed in a certain sense: namely, if one looks at a randomly chosen window, of suitable size ℓ (very large compared to the optimal stripe size h(J)), one finds a striped/slabbed state with high probability.}, author = {Giuliani, Alessandro and Lieb, Elliott and Seiringer, Robert}, journal = {Communications in Mathematical Physics}, number = {1}, pages = {333 -- 350}, publisher = {Springer}, title = {{Formation of stripes and slabs near the ferromagnetic transition}}, doi = {10.1007/s00220-014-1923-2}, volume = {331}, year = {2014}, } @article{1936, abstract = {The social intelligence hypothesis states that the need to cope with complexities of social life has driven the evolution of advanced cognitive abilities. It is usually invoked in the context of challenges arising from complex intragroup structures, hierarchies, and alliances. However, a fundamental aspect of group living remains largely unexplored as a driving force in cognitive evolution: the competition between individuals searching for resources (producers) and conspecifics that parasitize their findings (scroungers). In populations of social foragers, abilities that enable scroungers to steal by outsmarting producers, and those allowing producers to prevent theft by outsmarting scroungers, are likely to be beneficial and may fuel a cognitive arms race. Using analytical theory and agent-based simulations, we present a general model for such a race that is driven by the producer-scrounger game and show that the race's plausibility is dramatically affected by the nature of the evolving abilities. If scrounging and scrounging avoidance rely on separate, strategy-specific cognitive abilities, arms races are short-lived and have a limited effect on cognition. However, general cognitive abilities that facilitate both scrounging and scrounging avoidance undergo stable, long-lasting arms races. Thus, ubiquitous foraging interactions may lead to the evolution of general cognitive abilities in social animals, without the requirement of complex intragroup structures.}, author = {Arbilly, Michal and Weissman, Daniel and Feldman, Marcus and Grodzinski, Uri}, journal = {Behavioral Ecology}, number = {3}, pages = {487 -- 495}, publisher = {Oxford University Press}, title = {{An arms race between producers and scroungers can drive the evolution of social cognition}}, doi = {10.1093/beheco/aru002}, volume = {25}, year = {2014}, } @article{1937, abstract = {We prove the edge universality of the beta ensembles for any β ≥ 1, provided that the limiting spectrum is supported on a single interval, and the external potential is C^4 and regular. We also prove that the edge universality holds for generalized Wigner matrices for all symmetry classes. Moreover, our results allow us to extend bulk universality for beta ensembles from analytic potentials to potentials in class C^4.}, author = {Bourgade, Paul and Erdös, László and Yau, Horngtzer}, journal = {Communications in Mathematical Physics}, number = {1}, pages = {261 -- 353}, publisher = {Springer}, title = {{Edge universality of beta ensembles}}, doi = {10.1007/s00220-014-2120-z}, volume = {332}, year = {2014}, } @inbook{6178, abstract = {Mechanically coupled cells can generate forces driving cell and tissue morphogenesis during development.
Visualizing and measuring these forces is of major importance for better understanding the complexity of the biomechanical processes that shape cells and tissues. Here, we describe how UV laser ablation can be utilized to quantitatively assess mechanical tension in different tissues of the developing zebrafish and in cultures of primary germ layer progenitor cells ex vivo.}, author = {Smutny, Michael and Behrndt, Martin and Campinho, Pedro and Ruprecht, Verena and Heisenberg, Carl-Philipp J}, booktitle = {Tissue Morphogenesis}, editor = {Nelson, Celeste}, isbn = {9781493911639}, issn = {1064-3745}, pages = {219--235}, publisher = {Springer}, title = {{UV laser ablation to measure cell and tissue-generated forces in the zebrafish embryo in vivo and ex vivo}}, doi = {10.1007/978-1-4939-1164-6_15}, volume = {1189}, year = {2014}, } @book{6853, abstract = {This monograph presents a short course in computational geometry and topology. In the first part the book covers Voronoi diagrams and Delaunay triangulations, then it presents the theory of alpha complexes, which play a crucial role in biology. The central part of the book is homology theory and its computation, including the theory of persistence, which is indispensable for applications, e.g. shape reconstruction. The target audience comprises researchers and practitioners in mathematics, biology, neuroscience and computer science, but the book may also be beneficial to graduate students of these fields.}, author = {Edelsbrunner, Herbert}, isbn = {9-783-3190-5956-3}, issn = {2191-5318}, pages = {IX, 110}, publisher = {Springer Nature}, title = {{A Short Course in Computational Geometry and Topology}}, doi = {10.1007/978-3-319-05957-0}, year = {2014}, } @article{1375, abstract = {We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First, we show that the algorithmic question is reducible to the problem of a logarithmic number of min-plus matrix multiplications of n×n-matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1+ε)-approximation algorithm for the problem; the running time of our algorithm is Õ(n^ω log^3(nW/ε)/ε), where O(n^ω) is the time required for the classic n×n-matrix multiplication and W is the maximum value of the weights. With an additional O(log(nW/ε)) factor in space, a cycle with approximately optimal weight can be computed within the same time bound.}, author = {Chatterjee, Krishnendu and Henzinger, Monika and Krinninger, Sebastian and Loitzenbauer, Veronika and Raskin, Michael}, journal = {Theoretical Computer Science}, number = {C}, pages = {104 -- 116}, publisher = {Elsevier}, title = {{Approximating the minimum cycle mean}}, doi = {10.1016/j.tcs.2014.06.031}, volume = {547}, year = {2014}, } @inproceedings{1392, abstract = {Fault-tolerant distributed algorithms play an important role in ensuring the reliability of many software applications. In this paper we consider distributed algorithms whose computations are organized in rounds. To verify the correctness of such algorithms, we reason about (i) properties (such as invariants) of the state, (ii) the transitions controlled by the algorithm, and (iii) the communication graph.
We introduce a logic that addresses these points, and contains set comprehensions with cardinality constraints, function symbols to describe the local states of each process, and a limited form of quantifier alternation to express the verification conditions. We show its use in automating the verification of consensus algorithms. In particular, we give a semi-decision procedure for the unsatisfiability problem of the logic and identify a decidable fragment. We successfully applied our framework to verify the correctness of a variety of consensus algorithms tolerant to both benign faults (message loss, process crashes) and value faults (message corruption).}, author = {Dragoi, Cezara and Henzinger, Thomas A and Veith, Helmut and Widder, Josef and Zufferey, Damien}, location = {San Diego, USA}, pages = {161 -- 181}, publisher = {Springer}, title = {{A logic-based framework for verifying consensus algorithms}}, doi = {10.1007/978-3-642-54013-4_10}, volume = {8318}, year = {2014}, } @inproceedings{1393, abstract = {Probabilistic programs are usual functional or imperative programs with two added constructs: (1) the ability to draw values at random from distributions, and (2) the ability to condition values of variables in a program via observations. Models from diverse application areas such as computer vision, coding theory, cryptographic protocols, biology and reliability analysis can be written as probabilistic programs. Probabilistic inference is the problem of computing an explicit representation of the probability distribution implicitly specified by a probabilistic program. Depending on the application, the desired output from inference may vary: we may want to estimate the expected value of some function f with respect to the distribution, or the mode of the distribution, or simply a set of samples drawn from the distribution. In this paper, we describe connections this research area, called "Probabilistic Programming", has with programming languages and software engineering, including language design and the static and dynamic analysis of programs. We survey the current state of the art and speculate on promising directions for future research.}, author = {Gordon, Andrew and Henzinger, Thomas A and Nori, Aditya and Rajamani, Sriram}, booktitle = {Proceedings of the on Future of Software Engineering}, location = {Hyderabad, India}, pages = {167 -- 181}, publisher = {ACM}, title = {{Probabilistic programming}}, doi = {10.1145/2593882.2593900}, year = {2014}, } @phdthesis{1395, abstract = {In this thesis I studied various individual and social immune defences employed by the invasive garden ant Lasius neglectus, mostly against entomopathogenic fungi. The first two chapters of this thesis address the phenomenon of 'social immunisation'. Social immunisation, that is, the immunological protection of group members due to social contact with a pathogen-exposed nestmate, has been described in various social insect species against different types of pathogens. However, in the case of entomopathogenic fungi it has, so far, only been demonstrated that social immunisation exists at all. Its underlying mechanisms and other properties were, however, unknown. In the first chapter of this thesis I identified the mechanistic basis of social immunisation in L. neglectus against the entomopathogenic fungus Metarhizium. I could show that nestmates of a pathogen-exposed individual contract low-level infections due to social interactions.
These low-level infections are, however, non-lethal and cause an active stimulation of the immune system, which protects the nestmates upon subsequent pathogen encounters. In the second chapter of this thesis I investigated the specificity and colony level effects of social immunisation. I demonstrated that the protection conferred by social immunisation is highly specific, protecting ants only against the same pathogen strain. In addition, depending on the respective context, social immunisation may even cause fitness costs. I further showed that social immunisation crucially affects sanitary behaviour and disease dynamics within ant groups. In the third chapter of this thesis I studied the effects of the ectosymbiotic fungus Laboulbenia formicarum on its host L. neglectus. Although Laboulbeniales are the largest order of insect-parasitic fungi, research concerning host fitness consequences is sparse. I showed that highly Laboulbenia-infected ants sustain fitness costs under resource limitation but gain fitness benefits when exposed to an entomopathogenic fungus. These effects are probably caused by a prophylactic upregulation of behavioural as well as physiological immune defences in highly infected ants.}, author = {Konrad, Matthias}, pages = {131}, publisher = {IST Austria}, title = {{Immune defences in ants: Effects of social immunisation and a fungal ectosymbiont in the ant Lasius neglectus}}, year = {2014}, } @phdthesis{1402, abstract = {Phosphatidylinositol (PtdIns) is a structural phospholipid that can be phosphorylated into various lipid signaling molecules, designated polyphosphoinositides (PPIs). The reversible phosphorylation of PPIs on the 3, 4, or 5 position of inositol is performed by a set of organelle-specific kinases and phosphatases, and the characteristic head groups make these molecules ideal for regulating biological processes in time and space. In yeast and mammals, PtdIns3P and PtdIns(3,5)P2 play crucial roles in trafficking toward the lytic compartments, whereas their role in plants is not yet fully understood. Here we identified the role of a land plant-specific subgroup of PPI phosphatases, the suppressor of actin 2 (SAC2) to SAC5, during vacuolar trafficking and morphogenesis in Arabidopsis thaliana. SAC2-SAC5 localize to the tonoplast along with PtdIns3P, the presumable product of their activity. In SAC gain- and loss-of-function mutants, the levels of PtdIns monophosphates and bisphosphates were changed, with opposite effects on the morphology of storage and lytic vacuoles, and the trafficking toward the vacuoles was defective. Moreover, multiple sac knockout mutants had an increased number of smaller storage and lytic vacuoles, whereas extralarge vacuoles were observed in the overexpression lines, correlating with various growth and developmental defects. The fragmented vacuolar phenotype of sac mutants could be mimicked by treating wild-type seedlings with PtdIns(3,5)P2, corroborating that this PPI is important for vacuole morphology. Taken together, these results provide evidence that PPIs, together with their metabolic enzymes SAC2-SAC5, are crucial for vacuolar trafficking and for vacuolar morphology and function in plants.}, author = {Marhavá, Petra}, pages = {90}, publisher = {IST Austria}, title = {{Molecular mechanisms of patterning and subcellular trafficking in Arabidopsis thaliana}}, year = {2014}, } @phdthesis{1403, abstract = {A variety of developmental and disease-related processes depend on epithelial cell sheet spreading.
In order to gain insight into the biophysical mechanism(s) underlying this tissue morphogenesis, we studied the spreading of an epithelium during the early development of the zebrafish embryo. In zebrafish epiboly, the enveloping cell layer (EVL), a simple squamous epithelium, spreads over the yolk cell to completely engulf it at the end of gastrulation. Previous studies have proposed that an actomyosin ring forming within the yolk syncytial layer (YSL) acts as a purse string that, through constriction along its circumference, pulls on the margin of the EVL. Direct biophysical evidence for this hypothesis has, however, been missing. The aim of the thesis was to understand how the actomyosin ring may generate pulling forces onto the EVL and what cellular mechanism(s) may facilitate the spreading of the epithelium. Using laser ablation to measure cortical tension within the actomyosin ring, we found an anisotropic tension distribution, which was highest along the circumference of the ring. However, the low degree of anisotropy was incompatible with the actomyosin ring functioning as a purse string only. Additionally, we observed retrograde cortical flow from vegetal parts of the ring into the EVL margin. Interpreting the experimental data using a theoretical description that models the tissues as active viscous gels led us to propose that the actomyosin ring has a twofold contribution to EVL epiboly. It not only acts as a purse string through constriction along its circumference, but, in addition, constriction along the width of the ring generates pulling forces through friction-resisted cortical flow. Moreover, when rendering the purse string mechanism unproductive, EVL epiboly proceeded normally, indicating that the flow-friction mechanism is sufficient to drive the process. Aiming to understand what cellular mechanism(s) may facilitate the spreading of the epithelium, we found that tension-oriented EVL cell divisions limit tissue anisotropy by releasing tension along the division axis and promote epithelial spreading. Notably, EVL cells undergo ectopic cell fusion in conditions in which oriented cell division is impaired or the epithelium is mechanically challenged. Taken together, our study of EVL epiboly suggests a novel mechanism of force generation for actomyosin rings through friction-resisted cortical flow and highlights the importance of tension-oriented cell divisions in epithelial morphogenesis.}, author = {Behrndt, Martin}, pages = {91}, publisher = {IST Austria}, title = {{Forces driving epithelial spreading in zebrafish epiboly}}, year = {2014}, } @phdthesis{1404, abstract = {The co-evolution of hosts and pathogens is characterized by continuous adaptations of both parties. Pathogens of social insects need to adapt towards disease defences at two levels: 1) individual immunity of each colony member consisting of behavioural defence strategies as well as humoral and cellular immune responses and 2) social immunity that is collectively performed by all group members comprising behavioural, physiological and organisational defence strategies. To disentangle the selection pressure on pathogens by the collective versus individual level of disease defence in social insects, we performed an evolution experiment using the Argentine Ant, Linepithema humile, as a host and a mixture of the general insect pathogenic fungus Metarhizium spp. (6 strains) as a pathogen.
We allowed pathogen evolution over 10 serial host passages under two different host treatments: (1) only individual host immunity in a single host treatment, and (2) simultaneously acting individual and social immunity in a social host treatment, in which an exposed ant was accompanied by two untreated nestmates. Before starting the pathogen evolution experiment, the 6 Metarhizium spp. strains were characterised concerning conidiospore size, killing rates in singly and socially reared ants, their competitiveness under coinfecting conditions and their influence on ant behaviour. We analysed how the ancestral strain mixture changed in conidiospore size, killing rate and strain composition depending on host treatment (single or social hosts) during 10 passages and found that the killing rate and conidiospore size of the pathogen increased under both evolution regimes, but differently depending on host treatment. Testing the strain mixtures that evolved under either the single or the social host treatment, under both single and social rearing conditions in a full factorial design experiment, revealed that the additional collective defences in insect societies add new selection pressure on their coevolving pathogens that compromises their ability to adapt to their host at the group level. To our knowledge, this is the first study directly measuring the influence of social immunity on pathogen evolution.}, author = {Stock, Miriam}, pages = {101}, publisher = {IST Austria}, title = {{Evolution of a fungal pathogen towards individual versus social immunity in ants}}, year = {2014}, } @inproceedings{1507, abstract = {The Wigner-Dyson-Gaudin-Mehta conjecture asserts that the local eigenvalue statistics of large real and complex Hermitian matrices with independent, identically distributed entries are universal in the sense that they depend only on the symmetry class of the matrix and otherwise are independent of the details of the distribution. We present the recent solution to this half-century-old conjecture. We explain how stochastic tools, such as the Dyson Brownian motion, and PDE ideas, such as De Giorgi-Nash-Moser regularity theory, were combined in the solution. We also show related results for log-gases that represent a universal model for strongly correlated systems. Finally, in the spirit of Wigner’s original vision, we discuss the extensions of these universality results to more realistic physical systems such as random band matrices.}, author = {Erdös, László}, location = {Seoul, Korea}, pages = {214 -- 236}, publisher = {Kyung Moon SA Co. Ltd.}, title = {{Random matrices, log-gases and Hölder regularity}}, volume = {3}, year = {2014}, } @inproceedings{1516, abstract = {We present a rigorous derivation of the BCS gap equation for superfluid fermionic gases with point interactions. Our starting point is the BCS energy functional, whose minimizer we investigate in the limit when the range of the interaction potential goes to zero.}, author = {Bräunlich, Gerhard and Hainzl, Christian and Seiringer, Robert}, booktitle = {Proceedings of the QMath12 Conference}, location = {Berlin, Germany}, pages = {127 -- 137}, publisher = {World Scientific Publishing}, title = {{On the BCS gap equation for superfluid fermionic gases}}, doi = {10.1142/9789814618144_0007}, year = {2014}, } @article{1532, abstract = {Ammonium is the major nitrogen source in some plant ecosystems but is toxic at high concentrations, especially when available as the exclusive nitrogen source.
Ammonium stress rapidly leads to various metabolic and hormonal imbalances that ultimately inhibit root and shoot growth in many plant species, including Arabidopsis thaliana (L.) Heynh. To identify molecular and genetic factors involved in seedling survival with prolonged exclusive NH4+ nutrition, a transcriptomic analysis with microarrays was used. Substantial transcriptional differences were most pronounced in (NH4)2SO4-grown seedlings, compared with plants grown on KNO3 or NH4NO3. Consistent with previous physiological analyses, major differences in the expression modules of photosynthesis-related genes, an altered mitochondrial metabolism, differential expression of the primary NH4+ assimilation, alteration of transporter gene expression and crucial changes in cell wall biosynthesis were found. A major difference in plant hormone responses, particularly of auxin but not cytokinin, was striking. The activity of the DR5::GUS reporter revealed a dramatically decreased auxin response in (NH4)2SO4-grown primary roots. The impaired root growth on (NH4)2SO4 was partially rescued by exogenous auxin or in specific mutants in the auxin pathway. The data suggest that NH4+-induced nutritional and metabolic imbalances can be partially overcome by elevated auxin levels.}, author = {Yang, Huaiyu and Von Der Fecht Bartenbach, Jenny and Friml, Jirí and Lohmann, Jan and Neuhäuser, Benjamin and Ludewig, Uwe}, journal = {Functional Plant Biology}, number = {3}, pages = {239 -- 251}, publisher = {CSIRO}, title = {{Auxin-modulated root growth inhibition in Arabidopsis thaliana seedlings with ammonium as the sole nitrogen source}}, doi = {10.1071/FP14171}, volume = {42}, year = {2014}, } @article{1994, abstract = {The emergence and radiation of multicellular land plants was driven by crucial innovations to their body plans [1]. The directional transport of the phytohormone auxin represents a key, plant-specific mechanism for polarization and patterning in complex seed plants [2-5]. Here, we show that already in the early diverging land plant lineage, as exemplified by the moss Physcomitrella patens, auxin transport by PIN transporters is operational and diversified into ER-localized and plasma membrane-localized PIN proteins. Gain-of-function and loss-of-function analyses revealed that PIN-dependent intercellular auxin transport in Physcomitrella mediates crucial developmental transitions in tip-growing filaments and waves of polarization and differentiation in leaf-like structures. Plasma membrane PIN proteins localize in a polar manner to the tips of moss filaments, revealing an unexpected relation between polarization mechanisms in moss tip-growing cells and multicellular tissues of seed plants. 
Our results trace the origins of polarization and auxin-mediated patterning mechanisms and highlight the crucial role of polarized auxin transport during the evolution of multicellular land plants.}, author = {Viaene, Tom and Landberg, Katarina and Thelander, Mattias and Medvecka, Eva and Pederson, Eric and Feraru, Elena and Cooper, Endymion and Karimi, Mansour and Delwiche, Charles and Ljung, Karin and Geisler, Markus and Sundberg, Eva and Friml, Jirí}, journal = {Current Biology}, number = {23}, pages = {2786 -- 2791}, publisher = {Cell Press}, title = {{Directional auxin transport mechanisms in early diverging land plants}}, doi = {10.1016/j.cub.2014.09.056}, volume = {24}, year = {2014}, } @article{1995, abstract = {Optical transport represents a natural route towards fast communications, and it is currently used in large scale data transfer. The progressive miniaturization of devices for information processing calls for the microscopic tailoring of light transport and confinement at length scales appropriate for upcoming technologies. With this goal in mind, we present a theoretical analysis of a one-dimensional Fabry-Perot interferometer built with two highly saturable nonlinear mirrors: a pair of two-level systems. Our approach captures nonlinear and nonreciprocal effects of light transport that were not reported previously. Remarkably, we show that such an elementary device can operate as a microscopic integrated optical rectifier.}, author = {Fratini, Filippo and Mascarenhas, Eduardo and Safari, Laleh and Poizat, Jean and Valente, Daniel and Auffèves, Alexia and Gerace, Dario and Santos, Marcelo}, journal = {Physical Review Letters}, number = {24}, publisher = {American Physical Society}, title = {{Fabry-Perot interferometer with quantum mirrors: Nonlinear light transport and rectification}}, doi = {10.1103/PhysRevLett.113.243601}, volume = {113}, year = {2014}, } @article{1996, abstract = {Auxin polar transport, local maxima, and gradients have become an important model system for studying self-organization. Auxin distribution is regulated by auxin-dependent positive feedback loops that are not well understood at the molecular level. Previously, we showed the involvement of the RHO of Plants (ROP) effector INTERACTOR of CONSTITUTIVELY active ROP 1 (ICR1) in regulation of auxin transport and that ICR1 levels are posttranscriptionally repressed at the site of maximum auxin accumulation at the root tip. Here, we show that bimodal regulation of ICR1 levels by auxin is essential for regulating formation of auxin local maxima and gradients. ICR1 levels increase concomitant with increase in auxin response in lateral root primordia, cotyledon tips, and provascular tissues. However, in the embryo hypophysis and root meristem, when auxin exceeds critical levels, ICR1 is rapidly destabilized by an SCF(TIR1/AFB) [SKP, Cullin, F-box (transport inhibitor response 1/auxin signaling F-box protein)]-dependent auxin signaling mechanism. Furthermore, ectopic expression of ICR1 in the embryo hypophysis resulted in reduction of auxin accumulation and concomitant root growth arrest. ICR1 disappeared during root regeneration and lateral root initiation concomitantly with the formation of a local auxin maximum in response to external auxin treatments and transiently after gravitropic stimulation. Destabilization of ICR1 was impaired after inhibition of auxin transport and signaling, proteasome function, and protein synthesis.
A mathematical model based on these findings shows that an in vivo-like auxin distribution, rootward auxin flux, and shootward reflux can be simulated without assuming preexisting tissue polarity. Our experimental results and mathematical modeling indicate that regulation of auxin distribution is tightly associated with auxin-dependent ICR1 levels.}, author = {Hazak, Ora and Obolski, Uri and Prat, Tomas and Friml, Jiří and Hadany, Lilach and Yalovsky, Shaul}, journal = {PNAS}, number = {50}, pages = {E5471 -- E5479}, publisher = {National Academy of Sciences}, title = {{Bimodal regulation of ICR1 levels generates self-organizing auxin distribution}}, doi = {10.1073/pnas.1413918111}, volume = {111}, year = {2014}, } @article{1998, abstract = {Immune systems are able to protect the body against secondary infection with the same parasite. In insect colonies, this protection is not restricted to the level of the individual organism, but also occurs at the societal level. Here, we review recent evidence for and insights into the mechanisms underlying individual and social immunisation in insects. We disentangle general immune-protective effects from specific immune memory (priming), and examine immunisation in the context of the lifetime of an individual and that of a colony, and of transgenerational immunisation that benefits offspring. When appropriate, we discuss parallels with disease defence strategies in human societies. We propose that recurrent parasitic threats have shaped the evolution of both the individual immune systems and colony-level social immunity in insects.}, author = {El Masri, Leila and Cremer, Sylvia}, journal = {Trends in Immunology}, number = {10}, pages = {471 -- 482}, publisher = {Elsevier}, title = {{Individual and social immunisation in insects}}, doi = {10.1016/j.it.2014.08.005}, volume = {35}, year = {2014}, } @article{2001, abstract = {Antibiotics affect bacterial cell physiology at many levels. Rather than just compensating for the direct cellular defects caused by the drug, bacteria respond to antibiotics by changing their morphology, macromolecular composition, metabolism, gene expression and possibly even their mutation rate. Inevitably, these processes affect each other, resulting in a complex response with changes in the expression of numerous genes. Genome‐wide approaches can thus help in gaining a comprehensive understanding of bacterial responses to antibiotics. In addition, a combination of experimental and theoretical approaches is needed for identifying general principles that underlie these responses. Here, we review recent progress in our understanding of bacterial responses to antibiotics and their combinations, focusing on effects at the levels of growth rate and gene expression. We concentrate on studies performed in controlled laboratory conditions, which combine promising experimental techniques with quantitative data analysis and mathematical modeling. 
While these basic research approaches are not immediately applicable in the clinic, uncovering the principles and mechanisms underlying bacterial responses to antibiotics may, in the long term, contribute to the development of new treatment strategies to cope with and prevent the rise of resistant pathogenic bacteria.}, author = {Mitosch, Karin and Bollenbach, Tobias}, journal = {Environmental Microbiology Reports}, number = {6}, pages = {545 -- 557}, publisher = {Wiley}, title = {{Bacterial responses to antibiotics and their combinations}}, doi = {10.1111/1758-2229.12190}, volume = {6}, year = {2014}, } @article{2002, abstract = {Oriens-lacunosum moleculare (O-LM) interneurons in the CA1 region of the hippocampus play a key role in feedback inhibition and in the control of network activity. However, how these cells are efficiently activated in the network remains unclear. To address this question, I performed recordings from CA1 pyramidal neuron axons, the presynaptic fibers that provide feedback innervation of these interneurons. Two forms of axonal action potential (AP) modulation were identified. First, repetitive stimulation resulted in activity-dependent AP broadening. Broadening showed fast onset, with marked changes in AP shape following a single AP. Second, tonic depolarization in CA1 pyramidal neuron somata induced AP broadening in the axon, and depolarization-induced broadening summated with activity-dependent broadening. Outside-out patch recordings from CA1 pyramidal neuron axons revealed a high density of α-dendrotoxin (α-DTX)-sensitive, inactivating K+ channels, suggesting that K+ channel inactivation mechanistically contributes to AP broadening. To examine the functional consequences of axonal AP modulation for synaptic transmission, I performed paired recordings between synaptically connected CA1 pyramidal neurons and O-LM interneurons. CA1 pyramidal neuron-O-LM interneuron excitatory postsynaptic currents (EPSCs) showed facilitation during both repetitive stimulation and tonic depolarization of the presynaptic neuron. Both effects were mimicked and occluded by α-DTX, suggesting that they were mediated by K+ channel inactivation. Therefore, axonal AP modulation can greatly facilitate the activation of O-LM interneurons. In conclusion, modulation of AP shape in CA1 pyramidal neuron axons substantially enhances the efficacy of principal neuron-interneuron synapses, promoting the activation of O-LM interneurons in recurrent inhibitory microcircuits.}, author = {Kim, Sooyun}, journal = {PLoS One}, number = {11}, publisher = {Public Library of Science}, title = {{Action potential modulation in CA1 pyramidal neuron axons facilitates OLM interneuron activation in recurrent inhibitory microcircuits of rat hippocampus}}, doi = {10.1371/journal.pone.0113124}, volume = {9}, year = {2014}, } @article{2003, abstract = {Learning can be facilitated by previous knowledge when it is organized into relational representations forming schemas. In this issue of Neuron, McKenzie et al.
(2014) demonstrate that the hippocampus rapidly forms interrelated, hierarchical memory representations to support schema-based learning.}, author = {O'Neill, Joseph and Csicsvari, Jozsef L}, journal = {Neuron}, number = {1}, pages = {8 -- 10}, publisher = {Elsevier}, title = {{Learning by example in the hippocampus}}, doi = {10.1016/j.neuron.2014.06.013}, volume = {83}, year = {2014}, } @article{2004, abstract = {We have assembled a network of cell-fate determining transcription factors that play a key role in the specification of the ventral neuronal subtypes of the spinal cord on the basis of published transcriptional interactions. Asynchronous Boolean modelling of the network was used to compare simulation results with reported experimental observations. Such comparison highlighted the need to include additional regulatory connections in order to obtain the fixed point attractors of the model associated with the five known progenitor cell types located in the ventral spinal cord. The revised gene regulatory network reproduced cell state switches between progenitor cells previously observed in knock-out animal models or in experiments where the transcription factors were overexpressed. Furthermore, the network predicted the inhibition of Irx3 by Nkx2.2, and this prediction was tested experimentally. Our results provide evidence for the existence of an as yet undescribed inhibitory connection which could potentially have significance beyond the ventral spinal cord. The work presented in this paper demonstrates the strength of Boolean modelling for identifying gene regulatory networks.}, author = {Lovrics, Anna and Gao, Yu and Juhász, Bianka and Bock, István and Byrne, Helen and Dinnyés, András and Kovács, Krisztián}, journal = {PLoS One}, number = {11}, publisher = {Public Library of Science}, title = {{Boolean modelling reveals new regulatory connections between transcription factors orchestrating the development of the ventral spinal cord}}, doi = {10.1371/journal.pone.0111430}, volume = {9}, year = {2014}, } @article{2005, abstract = {By eliciting a natural exploratory behavior in rats, head scanning, a study reveals that hippocampal place cells form new, stable firing fields in those locations where the behavior has just occurred.}, author = {Dupret, David and Csicsvari, Jozsef L}, journal = {Nature Neuroscience}, number = {5}, pages = {643 -- 644}, publisher = {Nature Publishing Group}, title = {{Turning heads to remember places}}, doi = {10.1038/nn.3700}, volume = {17}, year = {2014}, } @misc{2007, author = {Klimova, Anna and Rudas, Tamás}, publisher = {The Comprehensive R Archive Network}, title = {{gIPFrm: Generalized iterative proportional fitting for relational models}}, year = {2014}, } @article{2011, abstract = {The protection of privacy of individual-level information in genome-wide association study (GWAS) databases has been a major concern of researchers following the publication of “an attack” on GWAS data by Homer et al. (2008). Traditional statistical methods for confidentiality and privacy protection of statistical databases do not scale well to deal with GWAS data, especially in terms of guarantees regarding protection from linkage to external information. 
The more recent concept of differential privacy, introduced by the cryptographic community, is an approach that provides a rigorous definition of privacy with meaningful privacy guarantees in the presence of arbitrary external information, although the guarantees may come at a serious price in terms of data utility. Building on such notions, Uhler et al. (2013) proposed new methods to release aggregate GWAS data without compromising an individual’s privacy. We extend the methods developed in Uhler et al. (2013) for releasing differentially-private χ2-statistics by allowing for an arbitrary number of cases and controls, and for releasing differentially-private allelic test statistics. We also provide a new interpretation by assuming the controls’ data are known, which is a realistic assumption because some GWAS use publicly available data as controls. We assess the performance of the proposed methods through a risk-utility analysis on a real data set consisting of DNA samples collected by the Wellcome Trust Case Control Consortium and compare the methods with the differentially-private release mechanism proposed by Johnson and Shmatikov (2013).}, author = {Yu, Fei and Fienberg, Stephen and Slavković, Aleksandra and Uhler, Caroline}, journal = {Journal of Biomedical Informatics}, pages = {133 -- 141}, publisher = {Elsevier}, title = {{Scalable privacy-preserving data sharing methodology for genome-wide association studies}}, doi = {10.1016/j.jbi.2014.01.008}, volume = {50}, year = {2014}, } @inproceedings{2012, abstract = {The classical sphere packing problem asks for the best (infinite) arrangement of non-overlapping unit balls which cover as much space as possible. We define a generalized version of the problem, where we allow each ball a limited amount of overlap with other balls. We study two natural choices of overlap measures and obtain the optimal lattice packings in a parameterized family of lattices which contains the FCC, BCC, and integer lattice.}, author = {Iglesias Ham, Mabel and Kerber, Michael and Uhler, Caroline}, location = {Halifax, Canada}, pages = {155 -- 161}, publisher = {Unknown}, title = {{Sphere packing with limited overlap}}, year = {2014}, } @article{2013, abstract = {An asymptotic theory is developed for computing volumes of regions in the parameter space of a directed Gaussian graphical model that are obtained by bounding partial correlations. We study these volumes using the method of real log canonical thresholds from algebraic geometry. Our analysis involves the computation of the singular loci of correlation hypersurfaces. Statistical applications include the strong-faithfulness assumption for the PC algorithm and the quantification of confounder bias in causal inference. A detailed analysis is presented for trees, bow ties, tripartite graphs, and complete graphs.}, author = {Lin, Shaowei and Uhler, Caroline and Sturmfels, Bernd and Bühlmann, Peter}, journal = {Foundations of Computational Mathematics}, number = {5}, pages = {1079 -- 1116}, publisher = {Springer}, title = {{Hypersurfaces and their singularities in partial correlation testing}}, doi = {10.1007/s10208-014-9205-0}, volume = {14}, year = {2014}, } @article{2018, abstract = {Synaptic cell adhesion molecules are increasingly gaining attention for conferring specific properties to individual synapses. 
Netrin-G1 and netrin-G2 are trans-synaptic adhesion molecules that distribute on distinct axons, and their presence restricts the expression of their cognate receptors, NGL1 and NGL2, respectively, to specific subdendritic segments of target neurons. However, the neural circuits and functional roles of netrin-G isoform complexes remain unclear. Here, we use netrin-G-KO and NGL-KO mice to reveal that netrin-G1/NGL1 and netrin-G2/NGL2 interactions specify excitatory synapses in independent hippocampal pathways. In the hippocampal CA1 area, netrin-G1/NGL1 and netrin-G2/NGL2 were expressed in the temporoammonic and Schaffer collateral pathways, respectively. The lack of presynaptic netrin-Gs led to the dispersion of NGLs from postsynaptic membranes. In accord, netrin-G mutant synapses displayed opposing phenotypes in long-term and short-term plasticity through discrete biochemical pathways. The plasticity phenotypes in netrin-G-KOs were phenocopied in NGL-KOs, with a corresponding loss of netrin-Gs from presynaptic membranes. Our findings show that netrin-G/NGL interactions differentially control synaptic plasticity in distinct circuits via retrograde signaling mechanisms and explain how synaptic inputs are diversified to control neuronal activity.}, author = {Matsukawa, Hiroshi and Akiyoshi Nishimura, Sachiko and Zhang, Qi and Luján, Rafael and Yamaguchi, Kazuhiko and Goto, Hiromichi and Yaguchi, Kunio and Hashikawa, Tsutomu and Sano, Chie and Shigemoto, Ryuichi and Nakashiba, Toshiaki and Itohara, Shigeyoshi}, journal = {Journal of Neuroscience}, number = {47}, pages = {15779 -- 15792}, publisher = {Society for Neuroscience}, title = {{Netrin-G/NGL complexes encode functional synaptic diversification}}, doi = {10.1523/JNEUROSCI.1141-14.2014}, volume = {34}, year = {2014}, } @article{2019, abstract = {We prove that the empirical density of states of quantum spin glasses on arbitrary graphs converges to a normal distribution as long as the maximal degree is negligible compared with the total number of edges. This extends the recent results of Keating et al. (2014) that were proved for graphs with bounded chromatic number and with symmetric coupling distribution. Furthermore, we generalise the result to arbitrary hypergraphs. We test the optimality of our condition on the maximal degree for p-uniform hypergraphs that correspond to p-spin glass Hamiltonians acting on n distinguishable spin-1/2 particles. At the critical threshold p = n^(1/2) we find a sharp classical-quantum phase transition between the normal distribution and the Wigner semicircle law. The former is characteristic of classical systems with commuting variables, while the latter is a signature of noncommutative random matrix theory.}, author = {Erdős, László and Schröder, Dominik J}, journal = {Mathematical Physics, Analysis and Geometry}, number = {3-4}, pages = {441 -- 464}, publisher = {Springer}, title = {{Phase transition in the density of states of quantum spin glasses}}, doi = {10.1007/s11040-014-9164-3}, volume = {17}, year = {2014}, } @article{2020, abstract = {The mammalian heart has long been considered a postmitotic organ, implying that the total number of cardiomyocytes is set at birth. Analysis of cell division in the mammalian heart is complicated by cardiomyocyte binucleation shortly after birth, which makes it challenging to interpret traditional assays of cell turnover [Laflamme MA, Murray CE (2011) Nature 473(7347):326–335; Bergmann O, et al. (2009) Science 324(5923):98–102]. 
An elegant multi-isotope imaging-mass spectrometry technique recently calculated the low, discrete rate of cardiomyocyte generation in mice [Senyo SE, et al. (2013) Nature 493(7432):433–436], yet our cellular-level understanding of postnatal cardiomyogenesis remains limited. Herein, we provide a new line of evidence for the differentiated α-myosin heavy chain-expressing cardiomyocyte as the cell of origin of postnatal cardiomyogenesis using the “mosaic analysis with double markers” mouse model. We show limited, life-long, symmetric division of cardiomyocytes as a rare event that is evident in utero but significantly diminishes after the first month of life in mice; to our knowledge, this study is the first to demonstrate that daughter cardiomyocytes divide very seldom. Furthermore, ligation of the left anterior descending coronary artery, which causes a myocardial infarction in the mosaic analysis with double-marker mice, did not increase the rate of cardiomyocyte division above the basal level for up to 4 wk after the injury. The clonal analysis described here provides direct evidence of postnatal mammalian cardiomyogenesis.}, author = {Ali, Shah and Hippenmeyer, Simon and Saadat, Lily and Luo, Liqun and Weissman, Irving and Ardehali, Reza}, journal = {PNAS}, number = {24}, pages = {8850 -- 8855}, publisher = {National Academy of Sciences}, title = {{Existing cardiomyocytes generate cardiomyocytes at a low rate after birth in mice}}, doi = {10.1073/pnas.1408233111}, volume = {111}, year = {2014}, } @article{2021, abstract = {Neurotrophins regulate diverse aspects of neuronal development and plasticity, but their precise in vivo functions during neural circuit assembly in the central brain remain unclear. We show that the neurotrophin receptor tropomyosin-related kinase C (TrkC) is required for dendritic growth and branching of mouse cerebellar Purkinje cells. Sparse TrkC knockout reduced dendrite complexity, but global Purkinje cell knockout had no effect. Removal of the TrkC ligand neurotrophin-3 (NT-3) from cerebellar granule cells, which provide major afferent input to developing Purkinje cell dendrites, rescued the dendrite defects caused by sparse TrkC disruption in Purkinje cells. Our data demonstrate that NT-3 from presynaptic neurons (granule cells) is required for TrkC-dependent competitive dendrite morphogenesis in postsynaptic neurons (Purkinje cells)—a previously unknown mechanism of neural circuit development.}, author = {Joo, William and Hippenmeyer, Simon and Luo, Liqun}, journal = {Science}, number = {6209}, pages = {626 -- 629}, publisher = {American Association for the Advancement of Science}, title = {{Dendrite morphogenesis depends on relative levels of NT-3/TrkC signaling}}, doi = {10.1126/science.1258996}, volume = {346}, year = {2014}, } @article{2022, abstract = {Radial glial progenitors (RGPs) are responsible for producing nearly all neocortical neurons. To gain insight into the patterns of RGP division and neuron production, we quantitatively analyzed excitatory neuron genesis in the mouse neocortex using Mosaic Analysis with Double Markers, which provides single-cell resolution of progenitor division patterns and potential in vivo. We found that RGPs progress through a coherent program in which their proliferative potential diminishes in a predictable manner. Upon entry into the neurogenic phase, individual RGPs produce ∼8–9 neurons distributed in both deep and superficial layers, indicating a unitary output in neuronal production. 
Removal of OTX1, a transcription factor transiently expressed in RGPs, results in both deep- and superficial-layer neuron loss and a reduction in neuronal unit size. Moreover, ∼1/6 of neurogenic RGPs proceed to produce glia. These results suggest that progenitor behavior and histogenesis in the mammalian neocortex conform to a remarkably orderly and deterministic program.}, author = {Gao, Peng and Postiglione, Maria P and Krieger, Teresa and Hernandez, Luisirene and Wang, Chao and Han, Zhi and Streicher, Carmen and Papusheva, Ekaterina and Insolera, Ryan and Chugh, Kritika and Kodish, Oren and Huang, Kun and Simons, Benjamin and Luo, Liqun and Hippenmeyer, Simon and Shi, Song}, journal = {Cell}, number = {4}, pages = {775 -- 788}, publisher = {Cell Press}, title = {{Deterministic progenitor behavior and unitary production of neurons in the neocortex}}, doi = {10.1016/j.cell.2014.10.027}, volume = {159}, year = {2014}, } @article{2023, abstract = {Understanding the evolution of dispersal is essential for understanding and predicting the dynamics of natural populations. Two main factors are known to influence dispersal evolution: spatio-temporal variation in the environment and relatedness between individuals. However, the relation between these factors is still poorly understood, and they are usually treated separately. In this article, I present a theoretical framework that contains and connects effects of both environmental variation and relatedness, and reproduces and extends their known features. Spatial habitat variation selects for balanced dispersal strategies, whereby the population is kept at an ideal free distribution. Within this class of dispersal strategies, I explain how increased dispersal is promoted by perturbations to the dispersal type frequencies. An explicit formula shows the magnitude of the selective advantage of increased dispersal in terms of the spatial variability in the frequencies of the different dispersal strategies present. These variances are capable of capturing various sources of stochasticity and hence establish a common scale for their effects on the evolution of dispersal. The results furthermore indicate an alternative approach to identifying effects of relatedness on dispersal evolution.}, author = {Novak, Sebastian}, journal = {Ecology and Evolution}, number = {24}, pages = {4589 -- 4597}, publisher = {Wiley-Blackwell}, title = {{Habitat heterogeneities versus spatial type frequency variances as driving forces of dispersal evolution}}, doi = {10.1002/ece3.1289}, volume = {4}, year = {2014}, } @article{2024, abstract = {The yeast Rab5 homologue, Vps21p, is known to be involved both in the vacuolar protein sorting (VPS) pathway from the trans-Golgi network to the vacuole, and in the endocytic pathway from the plasma membrane to the vacuole. However, the intracellular location at which these two pathways converge remains unclear. In addition, the endocytic pathway is not completely blocked in yeast cells lacking all Rab5 genes, suggesting the existence of an unidentified route that bypasses the Rab5-dependent endocytic pathway. Here we show that convergence of the endocytic and VPS pathways occurs upstream of the requirement for Vps21p in these pathways. We also identify a previously unrecognized endocytic pathway mediated by the AP-3 complex. Importantly, the AP-3-mediated pathway appears mostly intact in Rab5-disrupted cells, and thus works as an alternative route to the vacuole/lysosome. 
We propose that the endocytic traffic branches into two routes to reach the vacuole: a Rab5-dependent VPS pathway and a Rab5-independent AP-3-mediated pathway.}, author = {Toshima, Junko and Nishinoaki, Show and Sato, Yoshifumi and Yamamoto, Wataru and Furukawa, Daiki and Siekhaus, Daria E and Sawaguchi, Akira and Toshima, Jiro}, journal = {Nature Communications}, publisher = {Nature Publishing Group}, title = {{Bifurcation of the endocytic pathway into Rab5-dependent and -independent transport to the vacuole}}, doi = {10.1038/ncomms4498}, volume = {5}, year = {2014}, } @inproceedings{2026, abstract = {We present a tool for translating LTL formulae into deterministic ω-automata. It is the first tool covering the whole of LTL that does not use Safra’s determinization or any of its variants. This leads to smaller automata. There are several outputs of the tool: firstly, deterministic Rabin automata, which are the standard input for probabilistic model checking, e.g. for the probabilistic model-checker PRISM; secondly, deterministic generalized Rabin automata, which can also be used for probabilistic model checking and are sometimes by orders of magnitude smaller. We also link our tool to PRISM and show that this leads to a significant speed-up of probabilistic LTL model checking, especially with the generalized Rabin automata.}, author = {Komárková, Zuzana and Kretinsky, Jan}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, editor = {Cassez, Franck and Raskin, Jean-François}, location = {Sydney, Australia}, pages = {235 -- 241}, publisher = {Springer}, title = {{Rabinizer 3: Safraless translation of LTL to small deterministic automata}}, doi = {10.1007/978-3-319-11936-6_17}, volume = {8837}, year = {2014}, } @inproceedings{2027, abstract = {We present a general framework for applying machine-learning algorithms to the verification of Markov decision processes (MDPs). The primary goal of these techniques is to improve performance by avoiding an exhaustive exploration of the state space. Our framework focuses on probabilistic reachability, which is a core property for verification, and is illustrated through two distinct instantiations. The first assumes that full knowledge of the MDP is available, and performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP, and yields probabilistic guarantees, again in terms of both the lower and upper bounds, which provides efficient stopping criteria for the approximation. The latter is the first extension of statistical model checking for unbounded properties in MDPs. In contrast with other related techniques, our approach is not restricted to time-bounded (finite-horizon) or discounted properties, nor does it assume any particular properties of the MDP. We also show how our methods extend to LTL objectives. 
We present experimental results showing the performance of our framework on several examples.}, author = {Brázdil, Tomáš and Chatterjee, Krishnendu and Chmelik, Martin and Forejt, Vojtěch and Kretinsky, Jan and Kwiatkowska, Marta and Parker, David and Ujma, Mateusz}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, editor = {Cassez, Franck and Raskin, Jean-François}, location = {Sydney, Australia}, pages = {98 -- 114}, publisher = {Springer}, title = {{Verification of Markov decision processes using learning algorithms}}, doi = {10.1007/978-3-319-11936-6_8}, volume = {8837}, year = {2014}, } @article{2029, abstract = {Spin-wave theory is a key ingredient in our comprehension of quantum spin systems, and is used successfully for understanding a wide range of magnetic phenomena, including magnon condensation and stability of patterns in dipolar systems. Nevertheless, several decades of research failed to establish the validity of spin-wave theory rigorously, even for the simplest models of quantum spins. A rigorous justification of the method for the three-dimensional quantum Heisenberg ferromagnet at low temperatures is presented here. We derive sharp bounds on its free energy by combining a bosonic formulation of the model introduced by Holstein and Primakoff with probabilistic estimates and operator inequalities.}, author = {Correggi, Michele and Giuliani, Alessandro and Seiringer, Robert}, journal = {EPL}, number = {2}, publisher = {IOP Publishing Ltd.}, title = {{Validity of spin-wave theory for the quantum Heisenberg model}}, doi = {10.1209/0295-5075/108/20003}, volume = {108}, year = {2014}, } @article{2031, abstract = {A puzzling property of synaptic transmission, originally established at the neuromuscular junction, is that the time course of transmitter release is independent of the extracellular Ca2+ concentration ([Ca2+]o), whereas the rate of release is highly [Ca2+]o-dependent. Here, we examine the time course of release at inhibitory basket cell-Purkinje cell synapses and show that it is independent of [Ca2+]o. Modeling of Ca2+-dependent transmitter release suggests that the invariant time course of release critically depends on tight coupling between Ca2+ channels and release sensors. Experiments with exogenous Ca2+ chelators reveal that channel-sensor coupling at basket cell-Purkinje cell synapses is very tight, with a mean distance of 10–20 nm. Thus, tight channel-sensor coupling provides a mechanistic explanation for the apparent [Ca2+]o independence of the time course of release.}, author = {Arai, Itaru and Jonas, Peter M}, journal = {eLife}, publisher = {eLife Sciences Publications}, title = {{Nanodomain coupling explains Ca^2+ independence of transmitter release time course at a fast central synapse}}, doi = {10.7554/eLife.04057}, volume = {3}, year = {2014}, } @article{2032, abstract = {As light-based control of fundamental signaling pathways is becoming a reality, the field of optogenetics is rapidly moving beyond neuroscience. 
We have recently developed receptor tyrosine kinases that are activated by light and control cell proliferation, epithelial–mesenchymal transition, and angiogenic sprouting—cell behaviors central to cancer progression.}, author = {Inglés Prieto, Álvaro and Gschaider-Reichhart, Eva and Schelch, Karin and Janovjak, Harald L and Grusch, Michael}, journal = {Molecular and Cellular Oncology}, number = {4}, publisher = {Taylor & Francis}, title = {{The optogenetic promise for oncology: Episode I}}, doi = {10.4161/23723548.2014.964045}, volume = {1}, year = {2014}, } @inproceedings{2033, abstract = {The learning with privileged information setting has recently attracted a lot of attention within the machine learning community, as it allows the integration of additional knowledge into the training process of a classifier, even when this comes in the form of a data modality that is not available at test time. Here, we show that privileged information can naturally be treated as noise in the latent function of a Gaussian process classifier (GPC). That is, in contrast to the standard GPC setting, the latent function is not just a nuisance but a feature: it becomes a natural measure of confidence about the training data by modulating the slope of the GPC probit likelihood function. Extensive experiments on public datasets show that the proposed GPC method using privileged noise, called GPC+, improves over a standard GPC without privileged knowledge, and also over the current state-of-the-art SVM-based method, SVM+. Moreover, we show that advanced neural networks and deep learning methods can be compressed as privileged information.}, author = {Hernandez Lobato, Daniel and Sharmanska, Viktoriia and Kersting, Kristian and Lampert, Christoph and Quadrianto, Novi}, booktitle = {Advances in Neural Information Processing Systems}, location = {Montreal, Canada}, number = {January}, pages = {837--845}, publisher = {Neural Information Processing Systems}, title = {{Mind the nuisance: Gaussian process classification using privileged noise}}, volume = {1}, year = {2014}, } @article{2036, abstract = { In rapidly changing environments, selection history may impact the dynamics of adaptation. Mutations selected in one environment may result in pleiotropic fitness trade-offs in subsequent novel environments, slowing the rates of adaptation. Epistatic interactions between mutations selected in sequential stressful environments may slow or accelerate subsequent rates of adaptation, depending on the nature of that interaction. We explored the dynamics of adaptation during sequential exposure to herbicides with different modes of action in Chlamydomonas reinhardtii. Evolution of resistance to two of the herbicides was largely independent of selection history. For carbetamide, previous adaptation to other herbicide modes of action positively impacted the likelihood of adaptation to this herbicide. Furthermore, while adaptation to all individual herbicides was associated with pleiotropic fitness costs in stress-free environments, we observed that accumulation of resistance mechanisms was accompanied by a reduction in overall fitness costs. We suggest that antagonistic epistasis may be a driving mechanism that enables populations to more readily adapt in novel environments. 
These findings highlight the potential for sequences of xenobiotics to facilitate the rapid evolution of multiple-drug and -pesticide resistance, as well as the potential for epistatic interactions between adaptive mutations to facilitate evolutionary rescue in rapidly changing environments.}, author = {Lagator, Mato and Colegrave, Nick and Neve, Paul}, journal = {Proceedings of the Royal Society of London Series B Biological Sciences}, number = {1794}, publisher = {Royal Society, The}, title = {{Selection history and epistatic interactions impact dynamics of adaptation to novel environmental stresses}}, doi = {10.1098/rspb.2014.1679}, volume = {281}, year = {2014}, } @article{2038, abstract = {Recently, there has been an effort to add quantitative objectives to formal verification and synthesis. We introduce and investigate the extension of temporal logics with quantitative atomic assertions. At the heart of quantitative objectives lies the accumulation of values along a computation. It is often the accumulated sum, as with energy objectives, or the accumulated average, as with mean-payoff objectives. We investigate the extension of temporal logics with the prefix-accumulation assertions Sum(v) ≥ c and Avg(v) ≥ c, where v is a numeric (or Boolean) variable of the system, c is a constant rational number, and Sum(v) and Avg(v) denote the accumulated sum and average of the values of v from the beginning of the computation up to the current point in time. We also allow the path-accumulation assertions LimInfAvg(v) ≥ c and LimSupAvg(v) ≥ c, referring to the average value along an entire infinite computation. We study the border of decidability for such quantitative extensions of various temporal logics. In particular, we show that extending the fragment of CTL that has only the EX, EF, AX, and AG temporal modalities with both prefix-accumulation assertions, or extending LTL with both path-accumulation assertions, results in temporal logics whose model-checking problem is decidable. Moreover, the prefix-accumulation assertions may be generalized with "controlled accumulation," allowing, for example, to specify constraints on the average waiting time between a request and a grant. On the negative side, we show that this branching-time logic is, in a sense, the maximal logic with one or both of the prefix-accumulation assertions that permits a decidable model-checking procedure. Extending a temporal logic that has the EG or EU modalities, such as CTL or LTL, makes the problem undecidable.}, author = {Boker, Udi and Chatterjee, Krishnendu and Henzinger, Thomas A and Kupferman, Orna}, journal = {ACM Transactions on Computational Logic (TOCL)}, number = {4}, publisher = {ACM}, title = {{Temporal specifications with accumulative values}}, doi = {10.1145/2629686}, volume = {15}, year = {2014}, } @article{2039, abstract = {A fundamental question in biology is the following: what is the time scale that is needed for evolutionary innovations? There are many results that characterize single steps in terms of the fixation time of new mutants arising in populations of certain size and structure. But here we ask a different question, which is concerned with the much longer time scale of evolutionary trajectories: how long does it take for a population exploring a fitness landscape to find target sequences that encode new biological functions? Our key variable is the length, L, of the genetic sequence that undergoes adaptation. 
In computer science there is a crucial distinction between problems that require algorithms which take polynomial or exponential time. The latter are considered to be intractable. Here we develop a theoretical approach that allows us to estimate the time of evolution as a function of L. We show that adaptation on many fitness landscapes takes time that is exponential in L, even if there are broad selection gradients and many targets uniformly distributed in sequence space. These negative results lead us to search for specific mechanisms that allow evolution to work on polynomial time scales. We study a regeneration process and show that it enables evolution to work in polynomial time.}, author = {Chatterjee, Krishnendu and Pavlogiannis, Andreas and Adlam, Ben and Nowak, Martin}, journal = {PLoS Computational Biology}, number = {9}, publisher = {Public Library of Science}, title = {{The time scale of evolutionary innovation}}, doi = {10.1371/journal.pcbi.1003818}, volume = {10}, year = {2014}, } @article{2040, abstract = {Development requires tissue growth as well as cell diversification. To address how these processes are coordinated, we analyzed the development of molecularly distinct domains of neural progenitors in the mouse and chick neural tube. We show that during development, these domains undergo changes in size that do not scale with changes in overall tissue size. Our data show that domain proportions are first established by opposing morphogen gradients and subsequently controlled by domain-specific regulation of differentiation rate but not differences in proliferation rate. Regulation of differentiation rate is key to maintaining domain proportions while accommodating both intra- and interspecies variations in size. Thus, the sequential control of progenitor specification and differentiation elaborates pattern without requiring that signaling gradients grow as tissues expand.}, author = {Kicheva, Anna and Bollenbach, Mark Tobias and Ribeiro, Ana and Pérez Valle, Helena and Lovell Badge, Robin and Episkopou, Vasso and Briscoe, James}, journal = {Science}, number = {6204}, publisher = {American Association for the Advancement of Science}, title = {{Coordination of progenitor specification and growth in mouse and chick spinal cord}}, doi = {10.1126/science.1254927}, volume = {345}, year = {2014}, } @article{2041, abstract = {The hippocampus mediates several higher brain functions, such as learning, memory, and spatial coding. The input region of the hippocampus, the dentate gyrus, plays a critical role in these processes. Several lines of evidence suggest that the dentate gyrus acts as a preprocessor of incoming information, preparing it for subsequent processing in CA3. For example, the dentate gyrus converts input from the entorhinal cortex, where cells have multiple spatial fields, into the spatially more specific place cell activity characteristic of the CA3 region. Furthermore, the dentate gyrus is involved in pattern separation, transforming relatively similar input patterns into substantially different output patterns. 
Finally, the dentate gyrus produces a very sparse coding scheme in which only a very small fraction of neurons are active at any one time.}, author = {Jonas, Peter M and Lisman, John}, journal = {Frontiers in Neural Circuits}, publisher = {Frontiers Research Foundation}, title = {{Structure, function and plasticity of hippocampal dentate gyrus microcircuits}}, doi = {10.3389/fncir.2014.00107}, volume = {8}, year = {2014}, } @article{2042, abstract = {Background: CRISPR is a microbial immune system likely to be involved in host-parasite coevolution. It functions using target sequences encoded by the bacterial genome, which interfere with invading nucleic acids using a homology-dependent system. The system also requires protospacer associated motifs (PAMs), short motifs close to the target sequence that are required for interference in CRISPR types I and II. Here, we investigate whether PAMs are depleted in phage genomes due to selection pressure to escape recognition. Results: To this end, we analyzed two data sets. Phages infecting all bacterial hosts were analyzed first, followed by a detailed analysis of phages infecting the genus Streptococcus, where PAMs are best understood. We use two different measures of motif underrepresentation that control for codon bias and the frequency of submotifs. We compare phages infecting species with a particular CRISPR type to those infecting species without that type. Since only known PAMs were investigated, the analysis is restricted to CRISPR types I-C and I-E and in Streptococcus to types I-C and II. We found evidence for PAM depletion in Streptococcus phages infecting hosts with CRISPR type I-C, in Vibrio phages infecting hosts with CRISPR type I-E and in Streptococcus thermophilus phages infecting hosts with type II-A, known as CRISPR3. Conclusions: The observed motif depletion in phages with hosts having CRISPR can be attributed to selection rather than to mutational bias, as mutational bias should affect the phages of all hosts. This observation implies that the CRISPR system has been efficient in the groups discussed here.}, author = {Kupczok, Anne and Bollback, Jonathan P}, journal = {BMC Genomics}, number = {1}, publisher = {BioMed Central}, title = {{Motif depletion in bacteriophages infecting hosts with CRISPR systems}}, doi = {10.1186/1471-2164-15-663}, volume = {15}, year = {2014}, } @inproceedings{2043, abstract = {Persistent homology is a popular and powerful tool for capturing topological features of data. Advances in algorithms for computing persistent homology have reduced the computation time drastically – as long as the algorithm does not exhaust the available memory. Following up on a recently presented parallel method for persistence computation on shared memory systems [1], we demonstrate that a simple adaptation of the standard reduction algorithm leads to a variant for distributed systems. Our algorithmic design ensures that the data is distributed over the nodes without redundancy; this permits the computation of much larger instances than on a single machine. Moreover, we observe that the parallelism at least compensates for the overhead caused by communication between nodes, and often even speeds up the computation compared to sequential and even parallel shared memory algorithms. 
In our experiments, we were able to compute the persistent homology of filtrations with more than a billion (10^9) elements within seconds on a cluster with 32 nodes using less than 6 GB of memory per node.}, author = {Bauer, Ulrich and Kerber, Michael and Reininghaus, Jan}, booktitle = {Proceedings of the Workshop on Algorithm Engineering and Experiments}, editor = {McGeoch, Catherine and Meyer, Ulrich}, location = {Portland, USA}, pages = {31 -- 38}, publisher = {Society of Industrial and Applied Mathematics}, title = {{Distributed computation of persistent homology}}, doi = {10.1137/1.9781611973198.4}, year = {2014}, } @inbook{2044, abstract = {We present a parallel algorithm for computing the persistent homology of a filtered chain complex. Our approach differs from the commonly used reduction algorithm by first computing persistence pairs within local chunks, then simplifying the unpaired columns, and finally applying standard reduction on the simplified matrix. The approach generalizes a technique by Günther et al., which uses discrete Morse Theory to compute persistence; we derive the same worst-case complexity bound in a more general context. The algorithm employs several practical optimization techniques, which are of independent interest. Our sequential implementation of the algorithm is competitive with state-of-the-art methods, and we further improve the performance through parallel computation.}, author = {Bauer, Ulrich and Kerber, Michael and Reininghaus, Jan}, booktitle = {Topological Methods in Data Analysis and Visualization III}, editor = {Bremer, Peer-Timo and Hotz, Ingrid and Pascucci, Valerio and Peikert, Ronald}, pages = {103 -- 117}, publisher = {Springer}, title = {{Clear and Compress: Computing Persistent Homology in Chunks}}, doi = {10.1007/978-3-319-04099-8_7}, year = {2014}, } @inproceedings{2045, abstract = {We introduce and study a new notion of enhanced chosen-ciphertext security (ECCA) for public-key encryption. Loosely speaking, in the ECCA security experiment, the decryption oracle provided to the adversary is augmented to return not only the output of the decryption algorithm on a queried ciphertext but also of a randomness-recovery algorithm associated to the scheme. Our results mainly concern the case where the randomness-recovery algorithm is efficient. We provide constructions of ECCA-secure encryption from adaptive trapdoor functions as defined by Kiltz et al. (EUROCRYPT 2010), resulting in ECCA encryption from standard number-theoretic assumptions. We then give two applications of ECCA-secure encryption: (1) We use it as a unifying concept in showing equivalence of adaptive trapdoor functions and tag-based adaptive trapdoor functions, resolving an open question of Kiltz et al. (2) We show that ECCA-secure encryption can be used to securely realize an approach to public-key encryption with non-interactive opening (PKENO) originally suggested by Damgård and Thorbek (EUROCRYPT 2007), resulting in new and practical PKENO schemes quite different from those in prior work. 
Our results demonstrate that ECCA security is of both practical and theoretical interest.}, author = {Dachman Soled, Dana and Fuchsbauer, Georg and Mohassel, Payman and O’Neill, Adam}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, editor = {Krawczyk, Hugo}, location = {Buenos Aires, Argentina}, pages = {329 -- 344}, publisher = {Springer}, title = {{Enhanced chosen-ciphertext security and applications}}, doi = {10.1007/978-3-642-54631-0_19}, volume = {8383}, year = {2014}, } @inproceedings{2046, abstract = {We introduce policy-based signatures (PBS), where a signer can only sign messages conforming to some authority-specified policy. The main requirements are unforgeability and privacy, the latter meaning that signatures do not reveal the policy. PBS offers value along two fronts: (1) On the practical side, they allow a corporation to control what messages its employees can sign under the corporate key. (2) On the theoretical side, they unify existing work, capturing other forms of signatures as special cases or allowing them to be easily built. Our work focuses on definitions of PBS, proofs that this challenging primitive is realizable for arbitrary policies, efficient constructions for specific policies, and a few representative applications.}, author = {Bellare, Mihir and Fuchsbauer, Georg}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, editor = {Krawczyk, Hugo}, location = {Buenos Aires, Argentina}, pages = {520 -- 537}, publisher = {Springer}, title = {{Policy-based signatures}}, doi = {10.1007/978-3-642-54631-0_30}, volume = {8383}, year = {2014}, } @inproceedings{2047, abstract = {Following the publication of an attack on genome-wide association studies (GWAS) data proposed by Homer et al., considerable attention has been given to developing methods for releasing GWAS data in a privacy-preserving way. Here, we develop an end-to-end differentially private method for solving regression problems with convex penalty functions and selecting the penalty parameters by cross-validation. In particular, we focus on penalized logistic regression with elastic-net regularization, a method widely used in GWAS analyses to identify disease-causing genes. We show how a differentially private procedure for penalized logistic regression with elastic-net regularization can be applied to the analysis of GWAS data and evaluate our method’s performance.}, author = {Yu, Fei and Rybar, Michal and Uhler, Caroline and Fienberg, Stephen}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, editor = {Domingo Ferrer, Josep}, location = {Ibiza, Spain}, pages = {170 -- 184}, publisher = {Springer}, title = {{Differentially-private logistic regression for detecting multiple-SNP association in GWAS databases}}, doi = {10.1007/978-3-319-11257-2_14}, volume = {8744}, year = {2014}, } @article{2050, abstract = {The flow instability and further transition to turbulence in a toroidal pipe (torus) with curvature ratio (tube-to-coiling diameter) 0.049 is investigated experimentally. The flow inside the toroidal pipe is driven by a steel sphere fitted to the inner pipe diameter. The sphere is moved with constant azimuthal velocity from outside the torus by a moving magnet. 
The experiment is designed to investigate curved pipe flow by optical measurement techniques. Using stereoscopic particle image velocimetry, laser Doppler velocimetry and pressure drop measurements, the flow is measured for Reynolds numbers ranging from 1000 to 15 000. Time- and space-resolved velocity fields are obtained and analysed. The steady axisymmetric basic flow is strongly influenced by centrifugal effects. On an increase of the Reynolds number we find a sequence of bifurcations. For Re=4075±2% a supercritical bifurcation to an oscillatory flow is found in which waves travel in the streamwise direction with a phase velocity slightly faster than the mean flow. The oscillatory flow is superseded by a presumably quasi-periodic flow at a further increase of the Reynolds number before turbulence sets in. The results are found to be compatible, in general, with earlier experimental and numerical investigations on transition to turbulence in helical and curved pipes. However, important aspects of the bifurcation scenario differ considerably.}, author = {Kühnen, Jakob and Holzner, Markus and Hof, Björn and Kuhlmann, Hendrik}, journal = {Journal of Fluid Mechanics}, pages = {463 -- 491}, publisher = {Cambridge University Press}, title = {{Experimental investigation of transitional flow in a toroidal pipe}}, doi = {10.1017/jfm.2013.603}, volume = {738}, year = {2014}, } @inproceedings{2052, abstract = {A standard technique for solving the parameterized model checking problem is to reduce it to the classic model checking problem of finitely many finite-state systems. This work considers some of the theoretical power and limitations of this technique. We focus on concurrent systems in which processes communicate via pairwise rendezvous, as well as the special cases of disjunctive guards and token passing; specifications are expressed in indexed temporal logic without the next operator; and the underlying network topologies are generated by suitable Monadic Second Order Logic formulas and graph operations. First, we settle the exact computational complexity of the parameterized model checking problem for some of our concurrent systems, and establish new decidability results for others. Second, we consider the cases where model checking the parameterized system can be reduced to model checking some fixed number of processes; this number is known as a cutoff. We provide many cases for when such cutoffs can be computed, establish lower bounds on the size of such cutoffs, and identify cases where no cutoff exists. Third, we consider cases for which the parameterized system is equivalent to a single finite-state system (more precisely a Büchi word automaton), and establish tight bounds on the sizes of such automata.}, author = {Aminof, Benjamin and Kotek, Tomer and Rubin, Sacha and Spegni, Francesco and Veith, Helmut}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, editor = {Baldan, Paolo and Gorla, Daniele}, location = {Rome, Italy}, pages = {109 -- 124}, publisher = {Springer}, title = {{Parameterized model checking of rendezvous systems}}, doi = {10.1007/978-3-662-44584-6_9}, volume = {8704}, year = {2014}, } @inproceedings{2053, abstract = {In contrast to the usual understanding of probabilistic systems as stochastic processes, recently these systems have also been regarded as transformers of probabilities. 
In this paper, we give a natural definition of strong bisimulation for probabilistic systems corresponding to this view that treats probability distributions as first-class citizens. Our definition applies in the same way to discrete systems as well as to systems with uncountable state and action spaces. Several examples demonstrate that our definition refines the understanding of behavioural equivalences of probabilistic systems. In particular, it solves a longstanding open problem concerning the representation of memoryless continuous time by memoryfull continuous time. Finally, we give algorithms for computing this bisimulation not only for finite but also for classes of uncountably infinite systems.}, author = {Hermanns, Holger and Krčál, Jan and Kretinsky, Jan}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, editor = {Baldan, Paolo and Gorla, Daniele}, location = {Rome, Italy}, pages = {249 -- 265}, publisher = {Springer}, title = {{Probabilistic bisimulation: Naturally on distributions}}, doi = {10.1007/978-3-662-44584-6_18}, volume = {8704}, year = {2014}, } @inproceedings{2054, abstract = {We study two-player concurrent games on finite-state graphs played for an infinite number of rounds, where in each round, the two players (player 1 and player 2) choose their moves independently and simultaneously; the current state and the two moves determine the successor state. The objectives are ω-regular winning conditions specified as parity objectives. We consider the qualitative analysis problems: the computation of the almost-sure and limit-sure winning set of states, where player 1 can ensure to win with probability 1 and with probability arbitrarily close to 1, respectively. In general the almost-sure and limit-sure winning strategies require both infinite-memory as well as infinite-precision (to describe probabilities). While the qualitative analysis problem for concurrent parity games with infinite-memory, infinite-precision randomized strategies was studied before, we study the bounded-rationality problem for qualitative analysis of concurrent parity games, where the strategy set for player 1 is restricted to bounded-resource strategies. In terms of precision, strategies can be deterministic, uniform, finite-precision, or infinite-precision; and in terms of memory, strategies can be memoryless, finite-memory, or infinite-memory. We present a precise and complete characterization of the qualitative winning sets for all combinations of classes of strategies. In particular, we show that uniform memoryless strategies are as powerful as finite-precision infinite-memory strategies, and infinite-precision memoryless strategies are as powerful as infinite-precision finite-memory strategies. We show that the winning sets can be computed in O(n^(2d+3)) time, where n is the size of the game structure and 2d is the number of priorities (or colors), and our algorithms are symbolic. The membership problem of whether a state belongs to a winning set can be decided in NP ∩ coNP. 
Our symbolic algorithms are based on a characterization of the winning sets as μ-calculus formulas, however, our μ-calculus formulas are crucially different from the ones for concurrent parity games (without bounded rationality); and our memoryless witness strategy constructions are significantly different from the infinite-memory witness strategy constructions for concurrent parity games.}, author = {Chatterjee, Krishnendu}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, editor = {Baldan, Paolo and Gorla, Daniele}, location = {Rome, Italy}, pages = {544 -- 559}, publisher = {Springer}, title = {{Qualitative concurrent parity games: Bounded rationality}}, doi = {10.1007/978-3-662-44584-6_37}, volume = {8704}, year = {2014}, } @article{2056, abstract = {We consider a continuous-time Markov chain (CTMC) whose state space is partitioned into aggregates, and each aggregate is assigned a probability measure. A sufficient condition for defining a CTMC over the aggregates is presented as a variant of weak lumpability, which also characterizes that the measure over the original process can be recovered from that of the aggregated one. We show how the applicability of de-aggregation depends on the initial distribution. The application section is devoted to illustrating how the developed theory aids in reducing CTMC models of biochemical systems particularly in connection to protein-protein interactions. We assume that the model is written by a biologist in the form of site-graph-rewrite rules. Site-graph-rewrite rules compactly express that, often, only a local context of a protein (instead of a full molecular species) needs to be in a certain configuration in order to trigger a reaction event. This observation leads to suitable aggregate Markov chains with smaller state spaces, thereby providing sufficient reduction in computational complexity. This is further exemplified in two case studies: simple unbounded polymerization and early EGFR/insulin crosstalk.}, author = {Ganguly, Arnab and Petrov, Tatjana and Koeppl, Heinz}, journal = {Journal of Mathematical Biology}, number = {3}, pages = {767 -- 797}, publisher = {Springer}, title = {{Markov chain aggregation and its applications to combinatorial reaction networks}}, doi = {10.1007/s00285-013-0738-7}, volume = {69}, year = {2014}, } @inproceedings{2057, abstract = {In the past few years, a lot of attention has been devoted to multimedia indexing by fusing multimodal information. Two kinds of fusion schemes are generally considered: early fusion and late fusion. We focus on late classifier fusion, where one combines the scores of each modality at the decision level. To tackle this problem, we investigate a recent and elegant well-founded quadratic program named MinCq coming from the machine learning PAC-Bayesian theory. MinCq looks for the weighted combination, over a set of real-valued functions seen as voters, leading to the lowest misclassification rate, while maximizing the voters’ diversity. We propose an extension of MinCq tailored to multimedia indexing. Our method is based on an order-preserving pairwise loss adapted to ranking that allows us to improve the Mean Average Precision measure while taking into account the diversity of the voters that we want to fuse. 
We provide evidence that this method is naturally adapted to late fusion procedures and confirm the good behavior of our approach on the challenging PASCAL VOC’07 benchmark.}, author = {Morvant, Emilie and Habrard, Amaury and Ayache, Stéphane}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, location = {Joensuu, Finland}, pages = {153 -- 162}, publisher = {Springer}, title = {{Majority vote of diverse classifiers for late fusion}}, doi = {10.1007/978-3-662-44415-3_16}, volume = {8621}, year = {2014}, } @article{2059, abstract = {Plant embryogenesis is regulated by differential distribution of the plant hormone auxin. However, the cells establishing these gradients during microspore embryogenesis remain to be identified. For the first time, we describe, using the DR5 or DR5rev reporter gene systems, the GFP- and GUS-based auxin biosensors to monitor auxin during Brassica napus androgenesis at cellular resolution in the initial stages. Our study provides evidence that the distribution of auxin changes during embryo development and depends on the temperature-inducible in vitro culture conditions. For this, microspores (mcs) were induced to embryogenesis by heat treatment and then subjected to genetic modification via Agrobacterium tumefaciens. The duration of high temperature treatment had a significant influence on auxin distribution in isolated and in vitro-cultured microspores and on microspore-derived embryo development. In the “mild” heat-treated (1 day at 32 °C) mcs, auxin localized in a polar way already at the uni-nucleate microspore, which was critical for the initiation of embryos with suspensor-like structure. Assuming a mean mcs radius of 20 μm, endogenous auxin content in a single cell corresponded to a concentration of 1.01 μM. In mcs subjected to a prolonged heat (5 days at 32 °C), although auxin concentration increased a dozen times, auxin polarization was set up at a few-celled pro-embryos without suspensor. Those embryos were enclosed in the outer wall called the exine. The exine rupture was accompanied by the auxin gradient polarization. Relative quantitative estimation of auxin, using time-lapse imaging, revealed that primordia possess up to 1.3-fold higher amounts than those found in the root apices of transgenic MDEs in the presence of exogenous auxin. Our results show, for the first time, which concentration of endogenous auxin coincides with the first cell division and how high temperature interplays with auxin, thereby delaying the early establishment of microspore polarity. Moreover, we present how the local auxin accumulation demonstrates the apical–basal axis formation of the androgenic embryo and directs the axiality of the adult haploid plant.}, author = {Dubas, Ewa and Moravčíková, Jana and Libantová, Jana and Matušíková, Ildikó and Benková, Eva and Zur, Iwona and Krzewska, Monika}, journal = {Protoplasma}, number = {5}, pages = {1077 -- 1087}, publisher = {Springer}, title = {{The influence of heat stress on auxin distribution in transgenic B. napus microspores and microspore-derived embryos}}, doi = {10.1007/s00709-014-0616-1}, volume = {251}, year = {2014}, } @article{2061, abstract = {Development of cambium and its activity is important for our knowledge of the mechanism of secondary growth. Arabidopsis thaliana emerges as a good model plant for such a kind of study. 
Thus, this paper reports on cellular events taking place in the interfascicular regions of inflorescence stems of A. thaliana, leading to the development of interfascicular cambium from differentiated interfascicular parenchyma cells (IPC). These events are as follows: appearance of auxin accumulation, PIN1 gene expression, polar PIN1 protein localization in the basal plasma membrane and periclinal divisions. Distribution of auxin was observed to be higher in parenchyma cells differentiating into cambium compared to cells within the pith and cortex. Expression of PIN1 in IPC was always preceded by auxin accumulation. Basal localization of PIN1 was already established in the cells prior to their periclinal division. These cellular events initiated within parenchyma cells adjacent to the vascular bundles and successively extended from that point towards the middle region of the interfascicular area, located between neighboring vascular bundles, the final consequence of which was the closure of the cambial ring within the stem. Changes in the chemical composition of IPC walls were also detected and included changes of pectic epitopes, xyloglucans (XG) and extensins rich in hydroxyproline (HRGPs). In summary, results presented in this paper describe interfascicular cambium ontogenesis in terms of successive cellular events in the interfascicular regions of inflorescence stems of Arabidopsis.}, author = {Mazur, Ewa and Kurczyńska, Ewa and Friml, Jiří}, journal = {Protoplasma}, number = {5}, pages = {1125 -- 1139}, publisher = {Springer}, title = {{Cellular events during interfascicular cambium ontogenesis in inflorescence stems of Arabidopsis}}, doi = {10.1007/s00709-014-0620-5}, volume = {251}, year = {2014}, } @article{2062, abstract = {The success story of fast-spiking, parvalbumin-positive (PV+) GABAergic interneurons (GABA, γ-aminobutyric acid) in the mammalian central nervous system is noteworthy. In 1995, the properties of these interneurons were completely unknown. Twenty years later, thanks to the massive use of subcellular patch-clamp techniques, simultaneous multiple-cell recording, optogenetics, in vivo measurements, and computational approaches, our knowledge about PV+ interneurons became more extensive than for several types of pyramidal neurons. These findings have implications beyond the “small world” of basic research on GABAergic cells. For example, the results provide a first proof of principle that neuroscientists might be able to close the gaps between the molecular, cellular, network, and behavioral levels, representing one of the main challenges at the present time. Furthermore, the results may form the basis for PV+ interneurons as therapeutic targets for brain disease in the future. 
However, much needs to be learned about the basic function of these interneurons before clinical neuroscientists will be able to use PV+ interneurons for therapeutic purposes.}, author = {Hu, Hua and Gan, Jian and Jonas, Peter M}, journal = {Science}, number = {6196}, publisher = {American Association for the Advancement of Science}, title = {{Fast-spiking parvalbumin^+ GABAergic interneurons: From cellular design to microcircuit function}}, doi = {10.1126/science.1255263}, volume = {345}, year = {2014}, } @inproceedings{2063, abstract = {We consider Markov decision processes (MDPs) which are a standard model for probabilistic systems.We focus on qualitative properties forMDPs that can express that desired behaviors of the system arise almost-surely (with probability 1) or with positive probability. We introduce a new simulation relation to capture the refinement relation ofMDPs with respect to qualitative properties, and present discrete graph theoretic algorithms with quadratic complexity to compute the simulation relation.We present an automated technique for assume-guarantee style reasoning for compositional analysis ofMDPs with qualitative properties by giving a counterexample guided abstraction-refinement approach to compute our new simulation relation. We have implemented our algorithms and show that the compositional analysis leads to significant improvements.}, author = {Chatterjee, Krishnendu and Chmelik, Martin and Daca, Przemyslaw}, location = {Vienna, Austria}, pages = {473 -- 490}, publisher = {Springer}, title = {{CEGAR for qualitative analysis of probabilistic systems}}, doi = {10.1007/978-3-319-08867-9_31}, volume = {8559}, year = {2014}, } @article{2064, abstract = {We examined the synaptic structure, quantity, and distribution of α-amino-3-hydroxy-5-methylisoxazole-4-propionic acid (AMPA)- and N-methyl-D-aspartate (NMDA)-type glutamate receptors (AMPARs and NMDARs, respectively) in rat cochlear nuclei by a highly sensitive freeze-fracture replica labeling technique. Four excitatory synapses formed by two distinct inputs, auditory nerve (AN) and parallel fibers (PF), on different cell types were analyzed. These excitatory synapse types included AN synapses on bushy cells (AN-BC synapses) and fusiform cells (AN-FC synapses) and PF synapses on FC (PF-FC synapses) and cartwheel cell spines (PF-CwC synapses). Immunogold labeling revealed differences in synaptic structure as well as AMPAR and NMDAR number and/or density in both AN and PF synapses, indicating a target-dependent organization. The immunogold receptor labeling also identified differences in the synaptic organization of FCs based on AN or PF connections, indicating an input-dependent organization in FCs. Among the four excitatory synapse types, the AN-BC synapses were the smallest and had the most densely packed intramembrane particles (IMPs), whereas the PF-CwC synapses were the largest and had sparsely packed IMPs. All four synapse types showed positive correlations between the IMP-cluster area and the AMPAR number, indicating a common intrasynapse-type relationship for glutamatergic synapses. Immunogold particles for AMPARs were distributed over the entire area of individual AN synapses; PF synapses often showed synaptic areas devoid of labeling. The gold-labeling for NMDARs occurred in a mosaic fashion, with less positive correlations between the IMP-cluster area and the NMDAR number. 
Our observations reveal target- and input-dependent features in the structure, number, and organization of AMPARs and NMDARs in AN and PF synapses.}, author = {Rubio, Maía and Fukazawa, Yugo and Kamasawa, Naomi and Clarkson, Cheryl and Molnár, Elek and Shigemoto, Ryuichi}, journal = {Journal of Comparative Neurology}, number = {18}, pages = {4023 -- 4042}, publisher = {Wiley-Blackwell}, title = {{Target- and input-dependent organization of AMPA and NMDA receptors in synaptic connections of the cochlear nucleus}}, doi = {10.1002/cne.23654}, volume = {522}, year = {2014}, } @inproceedings{2082, abstract = {NMAC is a mode of operation which turns a fixed input-length keyed hash function f into a variable input-length function. A practical single-key variant of NMAC called HMAC is a very popular and widely deployed message authentication code (MAC). Security proofs and attacks for NMAC can typically be lifted to HMAC. NMAC was introduced by Bellare, Canetti and Krawczyk [Crypto'96], who proved it to be a secure pseudorandom function (PRF), and thus also a MAC, assuming that (1) f is a PRF and (2) the function we get when cascading f is weakly collision-resistant. Unfortunately, HMAC is typically instantiated with cryptographic hash functions like MD5 or SHA-1 for which (2) has been found to be wrong. To restore the provable guarantees for NMAC, Bellare [Crypto'06] showed its security based solely on the assumption that f is a PRF, albeit via a non-uniform reduction. - Our first contribution is a simpler and uniform proof for this fact: If f is an ε-secure PRF (against q queries) and a δ-non-adaptively secure PRF (against q queries), then NMAC^f is an (ε+ℓqδ)-secure PRF against q queries of length at most ℓ blocks each. - We then show that this ε+ℓqδ bound is basically tight. For the most interesting case where ℓqδ ≥ ε we prove this by constructing an f for which an attack with advantage ℓqδ exists. This also violates the bound O(ℓε) on the PRF-security of NMAC recently claimed by Koblitz and Menezes. - Finally, we analyze the PRF-security of a modification of NMAC called NI [An and Bellare, Crypto'99] that differs mainly by using a compression function with an additional keying input. This avoids the constant rekeying on multi-block messages in NMAC and allows for a security proof starting by the standard switch from a PRF to a random function, followed by an information-theoretic analysis. We carry out such an analysis, obtaining a tight ℓq²/2^c bound for this step, improving over the trivial bound of ℓ²q²/2^c. The proof borrows combinatorial techniques originally developed for proving the security of CBC-MAC [Bellare et al., Crypto'05].}, author = {Gazi, Peter and Pietrzak, Krzysztof Z and Rybar, Michal}, editor = {Garay, Juan and Gennaro, Rosario}, location = {Santa Barbara, USA}, number = {1}, pages = {113 -- 130}, publisher = {Springer}, title = {{The exact PRF-security of NMAC and HMAC}}, doi = {10.1007/978-3-662-44371-2_7}, volume = {8616}, year = {2014}, } @article{2083, abstract = {Understanding the effects of sex and migration on adaptation to novel environments remains a key problem in evolutionary biology. Using a single-cell alga Chlamydomonas reinhardtii, we investigated how sex and migration affected rates of evolutionary rescue in a sink environment, and subsequent changes in fitness following evolutionary rescue. We show that sex and migration affect both the rate of evolutionary rescue and subsequent adaptation.
However, their combined effects change as the populations adapt to a sink habitat. Both sex and migration independently increased rates of evolutionary rescue, but the effect of sex on subsequent fitness improvements, following initial rescue, changed with migration, as sex was beneficial in the absence of migration but constraining adaptation when combined with migration. These results suggest that sex and migration are beneficial during the initial stages of adaptation, but can become detrimental as the population adapts to its environment.}, author = {Lagator, Mato and Morgan, Andrew and Neve, Paul and Colegrave, Nick}, journal = {Evolution}, number = {8}, pages = {2296 -- 2305}, publisher = {Wiley}, title = {{Role of sex and migration in adaptation to sink environments}}, doi = {10.1111/evo.12440}, volume = {68}, year = {2014}, } @article{2084, abstract = {Receptor tyrosine kinases (RTKs) are a large family of cell surface receptors that sense growth factors and hormones and regulate a variety of cell behaviours in health and disease. Contactless activation of RTKs with spatial and temporal precision is currently not feasible. Here, we generated RTKs that are insensitive to endogenous ligands but can be selectively activated by low-intensity blue light. We screened light-oxygen-voltage (LOV)-sensing domains for their ability to activate RTKs by light-activated dimerization. Incorporation of LOV domains found in aureochrome photoreceptors of stramenopiles resulted in robust activation of the fibroblast growth factor receptor 1 (FGFR1), epidermal growth factor receptor (EGFR) and rearranged during transfection (RET). In human cancer and endothelial cells, light induced cellular signalling with spatial and temporal precision. Furthermore, light faithfully mimicked complex mitogenic and morphogenic cell behaviour induced by growth factors. RTKs under optical control (Opto-RTKs) provide a powerful optogenetic approach to actuate cellular signals and manipulate cell behaviour.}, author = {Grusch, Michael and Schelch, Karin and Riedler, Robert and Gschaider-Reichhart, Eva and Differ, Christopher and Berger, Walter and Inglés Prieto, Álvaro and Janovjak, Harald L}, journal = {EMBO Journal}, number = {15}, pages = {1713 -- 1726}, publisher = {Wiley-Blackwell}, title = {{Spatio-temporally precise activation of engineered receptor tyrosine kinases by light}}, doi = {10.15252/embj.201387695}, volume = {33}, year = {2014}, } @article{2086, abstract = {Pathogens may gain a fitness advantage through manipulation of the behaviour of their hosts. Likewise, host behavioural changes can be a defence mechanism, counteracting the impact of pathogens on host fitness. We apply harmonic radar technology to characterize the impact of an emerging pathogen - Nosema ceranae (Microsporidia) - on honeybee (Apis mellifera) flight and orientation performance in the field. Honeybees are the most important commercial pollinators. Emerging diseases have been proposed to play a prominent role in colony decline, partly through sub-lethal behavioural manipulation of their hosts. We found that homing success was significantly reduced in diseased (65.8%) versus healthy foragers (92.5%). Although lost bees had significantly reduced continuous flight times and prolonged resting times, other flight characteristics and navigational abilities showed no significant difference between infected and non-infected bees. 
Our results suggest that infected bees express normal flight characteristics but are constrained in their homing ability, potentially compromising the colony by reducing its resource inputs, but also counteracting the intra-colony spread of infection. We provide the first high-resolution analysis of sub-lethal effects of an emerging disease on insect flight behaviour. The potential causes and the implications for both host and parasite are discussed.}, author = {Wolf, Stephan and Mcmahon, Dino and Lim, Ka and Pull, Christopher and Clark, Suzanne and Paxton, Robert and Osborne, Juliet}, journal = {PLoS One}, number = {8}, publisher = {Public Library of Science}, title = {{So near and yet so far: Harmonic radar reveals reduced homing ability of Nosema infected honeybees}}, doi = {10.1371/journal.pone.0103989}, volume = {9}, year = {2014}, } @article{2141, abstract = {The computation of the winning set for Büchi objectives in alternating games on graphs is a central problem in computer-aided verification with a large number of applications. The long-standing best known upper bound for solving the problem is Õ(n ⋅ m), where n is the number of vertices and m is the number of edges in the graph. We are the first to break the Õ(n ⋅ m) boundary by presenting a new technique that reduces the running time to O(n²). This bound also leads to O(n²)-time algorithms for computing the set of almost-sure winning vertices for Büchi objectives (1) in alternating games with probabilistic transitions (improving an earlier bound of Õ(n ⋅ m)), (2) in concurrent graph games with constant actions (improving an earlier bound of O(n³)), and (3) in Markov decision processes (improving for m > n^{4/3} an earlier bound of O(m ⋅ √m)). We then show how to maintain the winning set for Büchi objectives in alternating games under a sequence of edge insertions or a sequence of edge deletions in O(n) amortized time per operation. Our algorithms are the first dynamic algorithms for this problem. We then consider another core graph theoretic problem in verification of probabilistic systems, namely computing the maximal end-component decomposition of a graph. We present two improved static algorithms for the maximal end-component decomposition problem. Our first algorithm is an O(m ⋅ √m)-time algorithm, and our second algorithm is an O(n²)-time algorithm which is obtained using the same technique as for alternating Büchi games. Thus, we obtain an O(min{m ⋅ √m, n²})-time algorithm improving the long-standing O(n ⋅ m) time bound. Finally, we show how to maintain the maximal end-component decomposition of a graph under a sequence of edge insertions or a sequence of edge deletions in O(n) amortized time per edge deletion, and O(m) worst-case time per edge insertion. Again, our algorithms are the first dynamic algorithms for this problem.}, author = {Chatterjee, Krishnendu and Henzinger, Monika}, journal = {Journal of the ACM}, number = {3}, publisher = {ACM}, title = {{Efficient and dynamic algorithms for alternating Büchi games and maximal end-component decomposition}}, doi = {10.1145/2597631}, volume = {61}, year = {2014}, } |
b4c26066b3be6ad1 |
Fast escape of a quantum walker from an integrated photonic maze
Escaping from a complex maze, by exploring different paths with several decision-making branches in order to reach the exit, has always been a very challenging and fascinating task. Wave fields and quantum objects may explore a complex structure in parallel by interference effects, but without necessarily leading to more efficient transport. Here, inspired by recent observations in biological energy transport phenomena, we demonstrate how a quantum walker can efficiently reach the output of a maze by partially suppressing the presence of interference. In particular, we show theoretically an unprecedented improvement in transport efficiency for increasing maze size with respect to purely quantum and classical approaches. In addition, we investigate experimentally these hybrid transport phenomena, by mapping the maze problem onto an integrated waveguide array, probed by coherent light, hence successfully testing our theoretical results. These achievements may lead towards future bio-inspired photonics technologies for more efficient transport and computation.
Transport problems arise in many fields of science, such as biology, chemistry, sociology, information science and physics, and even in everyday life. One of the most challenging is efficiently traversing a maze, that is, finding the exit of a topologically complex network of interconnected sites in the shortest possible time (Fig. 1). The efficiency in reaching the exit of a maze decreases dramatically with the number of sites in the structure, rapidly making this problem intractable1.
Figure 1: Maze problem.
Pictorial view of a maze with single input (IN) and output (OUT) ports. An ideal walker has to travel from IN to OUT in the shortest possible time.
The problem of solving mazes has fascinated mankind since ancient times. One famous maze is the Cretan one, designed by the architect Daedalus and built to hold the mythological creature Minotaur, which was eventually killed by the hero Theseus. To find the Minotaur he used the most typical maze-solving strategy: exploring several possible alternatives, while marking the visited paths (by a ball of thread). Around 60 years ago, Shannon realized the first ever maze-solving experiment based on physical means, in particular an electromagnetic mouse named Theseus2. Nowadays, the availability of new physical, chemical and biological systems has opened up the way for traversing a maze with a parallel exploration of all possible transport channels at the same time. For instance, in ref. 3 a maze is experimentally solved by filling it with a Belousov–Zhabotinsky reaction mixture and then exploiting the superposition effect of travelling chemical wavefronts. More recently, it was shown that this parallel addressing can be indirectly obtained by the chemo-attractant waves emitted by an oat flake placed at the destination site, while a plasmodium slime walks directly to the exit4. This demonstrates the crucial role of interference in finding the maze's exit in a more efficient way.
In the framework of quantum mechanics, even a single particle, represented by a wavefunction, shows interference effects. Exploiting this property, a quantum walker is able to propagate in the fastest way inside perfectly ordered lattices5,6; however, localization phenomena may occur when disorder is present7,8,9,10. Quantum walks find applications to energy transport11 and quantum information12,13,14,15 with polynomial as well as exponential speedup16, for example, Grover search algorithm17, universal models for quantum computation18, state transfer in spin and harmonic networks19,20,21 and recent proposals on web page ranking22. Recently, the maze problem has been converted into a quantum search problem to get a quadratic speedup23. Interestingly enough, the interplay of interference and noise effects can further enhance quantum transport over complex networks, as recently observed for energy transport phenomena in light-harvesting proteins24,25,26,27,28 and proposed for noise-assisted quantum communication29. In particular, it is extremely difficult to study quantum transport phenomena in biological systems, as well as to change in a controlled way the problem parameters to fully understand their role. For this reason, it is very important to develop a perfectly controlled artificial platform that can be used to simulate, understand and engineer these phenomena.
In recent years, several technological platforms have been employed to investigate quantum transport phenomena, such as NMR30,31, trapped ions32,33, neutral atoms34 and several photonic schemes such as bulk optics35,36, fibre loop configurations37,38 and miniaturized integrated waveguide circuits39,40,41,42,43. Among these, a very interesting experimental platform is represented by three-dimensional waveguide arrays, fabricated by femtosecond laser micromachining41,44,45,46. Femtosecond laser waveguide writing47 enables the fabrication of high-quality optical waveguides, directly buried in the bulk of a transparent substrate. Ultrashort laser pulses are focused at the desired depth in the substrate and nonlinear absorption processes induce a localized and permanent refractive index increase; translation of the sample at uniform speed allows one to draw guiding paths in the substrate with unique three-dimensional design freedom. Many diverse quantum phenomena48,49 can be observed and simulated by means of such structures: in particular, a powerful analogy can be exploited between the Schrödinger equation, describing the evolution of a wavepacket in a two-dimensional potential, and the equations describing the paraxial evolution of light in a dielectric structure, such as a waveguide array. In particular, an array of coupled waveguides is equivalent to a two-dimensional array of quantum wells. The temporal evolution of a single quantum particle, placed initially in a certain well, can be mapped to the spatial evolution along the propagation direction of a single photon, injected initially in a certain waveguide.
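The analogy can be stated compactly. The following display is our summary of the standard correspondence, not the paper's own notation: the propagation coordinate z plays the role of time, and the refractive-index profile plays the role of the negative of the potential.

```latex
% Standard Schrodinger/paraxial correspondence (z <-> t, Delta n <-> -V):
\[
  i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\,\nabla_\perp^2 \psi + V(x,y)\,\psi
  \qquad\longleftrightarrow\qquad
  i\,\partial_z E = -\frac{1}{2k}\,\nabla_\perp^2 E - \frac{k\,\Delta n(x,y)}{n_0}\,E ,
\]
% with k = 2*pi*n0/lambda the wavenumber in the background medium of index n0,
% E(x,y,z) the field envelope, and Delta n the refractive-index profile.
```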
Here, we investigate the role of a partial suppression of interference effects in the transport dynamics through maze-like graphs. In particular, we theoretically demonstrate that an optimal mixing of classical and quantum dynamics leads to a remarkably efficient transmission of energy/information from the input to the exit door of a generic maze. In addition, we show that it is possible to reproduce experimentally these dynamics in a photonic simulator, unfolding the maze onto a femtosecond-laser-written three-dimensional waveguide array, where noise is implemented by modulating the propagation constants of the waveguides during the writing process. The results provide a clear demonstration that a controlled amount of decoherence in the walker can produce an enhanced transport efficiency in escaping the maze and that these phenomena can be investigated in an experimentally accessible platform and not only in abstract models.
The maze structure is created here by the so-called random Depth-First Search algorithm applied on a square lattice of N nodes50 (see the Methods section: Maze construction, together with Supplementary Fig. 1). The transport model is represented by a walker entering the maze in some initial (IN) site or input door and moving over the structure until reaching a final (OUT) site or exit door (maze’s solution).
Following the framework of quantum stochastic walks28,51, the density matrix ρ describing the state of the system evolves according to the Lindblad master equation:
dρ/dt = −(1−p) i[H, ρ] + p Σ_{i,j} ( L_{i,j} ρ L_{i,j}† − ½ { L_{i,j}† L_{i,j}, ρ } )

A purely unitary evolution, given by the Hermitian Hamiltonian H, which implements the quantum walk dynamics, is mixed with an incoherent evolution describing a classical random walk, given by the operators L_{i,j}. The balance between the two parts of the Lindblad superoperator is set by the value of the parameter p. In particular, for p=0 a fully coherent (pure interference) dynamics is observed, whereas p=1 corresponds to the case of a classical random walk, that is, classical random hopping with no interference; for intermediate values, a mixing of the two types of behaviour is obtained. An irreversible transfer process from the exit site to an external sink is added, and the walker's probability of reaching the exit by time t is quantified by the transfer efficiency to the sink, p_sink(t), whose values lie in the range (0, 1). Further technical details are given in Supplementary Note 1.
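To make the evolution concrete, the following minimal Python sketch (our own illustration, not the authors' code) integrates this master equation on a small graph. The toy graph, the hopping rates L_{i,j} = |i⟩⟨j| normalized by node degree, the sink rate gamma and the Euler integrator are all assumptions made for illustration.

```python
# Minimal sketch of a quantum stochastic walk: Euler integration of the
# Lindblad equation mixing coherent (Hamiltonian) and classical
# (random-hopping) dynamics, with an irreversible drain on the exit node.
import numpy as np

def qsw_sink_efficiency(A, p, sink_node, gamma=1.0, t_max=50.0, dt=1e-3):
    """A: symmetric adjacency matrix of the maze graph; p: mixing parameter
    (p=0 purely quantum walk, p=1 classical random walk)."""
    n = A.shape[0]
    H = A.astype(complex)                    # tight-binding Hamiltonian
    deg = np.maximum(A.sum(axis=0), 1.0)     # node degrees (avoid div by 0)
    T = A / deg                              # classical hop rates for L_ij = |i><j|
    out = T.sum(axis=0)                      # total escape rate from each node
    S = np.zeros(n); S[sink_node] = 1.0      # projector onto the exit node
    rho = np.zeros((n, n), dtype=complex)
    rho[0, 0] = 1.0                          # walker injected at node 0 (IN)
    p_sink = 0.0
    for _ in range(int(t_max / dt)):
        pop = np.real(np.diag(rho))
        drho = -1j * (1 - p) * (H @ rho - rho @ H)                 # coherent part
        drho += p * (np.diag(T @ pop)                              # classical gain
                     - 0.5 * (out[:, None] + out[None, :]) * rho)  # classical loss
        drho -= gamma * (S[:, None] + S[None, :]) * rho            # sink drain
        p_sink += 2.0 * gamma * pop[sink_node] * dt                # absorbed fraction
        rho = rho + dt * drho
    return p_sink

# Toy example: a 4-site chain with IN at one end and the exit at the other.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
for p in (0.0, 0.1, 1.0):
    print(f"p = {p}: efficiency = {qsw_sink_efficiency(A, p, sink_node=3):.3f}")
```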
As shown in the left side of Fig. 2, the transfer efficiency for a maze of about one thousand sites, for a given time (linearly increasing with the maze size), is more than five orders of magnitude larger when one partially suppresses interference effects (p≈0.1, that is, 10% of mixing), with respect to the limiting cases of purely coherent and fully classical dynamics. Such transport enhancement is based on an intricate interplay between coherence and noise and shows peculiar features that make it a fascinating field to investigate. In fact, an analogous optimal mixing has been very recently demonstrated over a large family of complex networks for p≈0.1 (ref. 28) and experimentally observed in ref. 52 (for the robustness of this mixing value see Supplementary Note 1 and Supplementary Fig. 2). In addition, noise-enhanced transport dynamics was observed even for totally regular and ordered graphs53 (where an intuitive picture of this optimality can be given in terms of a ‘momentum rejuvenation’), thus evidencing how this phenomenon cannot be explained as just a cross-over from disorder-induced coherent localization towards the classical diffusive regime. As in ref. 53, we can analyse the transport inefficiency in terms of the average dwelling time in the network, which we define as τ = ∫₀∞ P(t) dt, with P(t) being the population remaining on the network, that is, the probability that at time t the energy quantum has failed to exit the network—see the right side of Fig. 2. This further supports the behaviour observed above for the transfer efficiency at long time scales (Fig. 2 left), showing that our particular choice of the time t for the plotted efficiency does not affect our conclusions.
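As a concrete reading of this definition, the dwelling-time integral can be evaluated numerically from sampled survival probabilities; below is a minimal sketch in which a placeholder exponential decay stands in for a simulated P(t).

```python
# Dwelling time tau = integral of P(t) dt, from sampled survival
# probabilities; the exponential curve here is only a placeholder.
import numpy as np

ts = np.linspace(0.0, 50.0, 501)   # sample times (illustrative)
Ps = np.exp(-0.2 * ts)             # placeholder survival curve P(t)
tau = np.trapz(Ps, ts)             # trapezoidal rule; ~5.0 here (= 1/0.2)
print(tau)
```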
Figure 2: Transport efficiency for different sizes.
Left: Transfer efficiency as a function of the size N of the maze, for a time scale t linearly increasing with N, that is, t=10 N. For a maze with N=900 nodes, the optimal mixing p≈0.1 provides a transfer efficiency that is about five orders of magnitude larger than in the perfectly coherent (quantum, that is, p=0) and fully noisy (classical, that is, p=1) regimes. The trend of the curves with the maze complexity N, for the different values of p, indicates that even higher speedup can be achieved for increasingly large mazes. Right: Dwelling time as a function of the size N of the maze, for p=0, 0.1, 1.
One can understand this by considering how, in the noiseless case, the particle undergoes discrete diffraction in the structure: the strong interference effects given by full coherence generate bright and dark zones, even if the wavefunction does not strictly localize, and this may limit the transfer efficiency between two distant sites of the graph. Adding an optimal quantity of noise may help in suppressing the fine-grained interference pattern while keeping the wavefunction spread almost as in the ballistic case, without reaching the diffusive limit where the transport dynamics is much slower. Although the Lindblad model introduces decoherence only through direct classical transitions (T1-like processes), a similar behaviour would be obtained by considering a pure dephasing process (T2-like); see refs 27, 28, 29, 53.
Experimental realization
Taking advantage of the unique three-dimensional fabrication capabilities of femtosecond laser waveguide writing, we implement a simulator of quantum stochastic walks by engineering an integrated photonic device probed by laser light. In fact, the probability distribution at the output for a single photon is perfectly reproduced by the intensity distribution of coherent light in the waveguide array. The maze structure is mapped onto a three-dimensional waveguide array, in which each waveguide represents a site of the maze. In particular, our experimental study is focused on the maze configuration shown in Fig. 3a, composed of 18 sites, taken as a significant example for observing the dynamics predicted by our theoretical model.
Figure 3: Implementing the maze.
(a) Maze structure that is experimentally implemented; (b) unfolding of the maze into an almost linear graph, where each node is represented by a waveguide; (c,d) snapshots of the light diffusion for the uniform (c) and noisy structure (d). Both snapshots correspond to a propagation length of 60 mm. The noisy configuration is noise 3 in Fig. 4.
A first problem that has to be addressed is how to map the topology of the links between the sites of our maze into a waveguide system. Whereas in an arbitrary maze structure transfer between adjacent sites can be inhibited by walls, in waveguide arrays the coupling between two waveguides is solely determined by their relative distance. Thus, the geometry of the array needs to be engineered so that waveguides that must not couple are kept sufficiently far from each other. This might not be possible if the maze graph is too complex. In our case, however, it was possible to unfold the maze graph in Fig. 3a, by considering chains with side tails, onto the partially linear and more feasible structure in Fig. 3b. Note that this unfolded geometry is not unique, other configurations being conceivable in principle with the same distances between equally coupled sites.
Another experimental issue is the realization of the exit door (that is, OUT site). In the theoretical model, this site should behave like a sink that absorbs energy irreversibly. In our photonic implementation, the sink is implemented by a long chain of waveguides (62 waveguides), which approximates well a one-way energy transfer process, with negligible probability for the light to be coupled back to the system.
Structures composed of uniform waveguides correspond to the purely coherent case (QW). Fully coherent transport dynamics in such maze can be studied straightforwardly by fabricating arrays with different lengths and characterizing the output distribution when coherent light is injected in the desired initial site (IN). It is worth noting that in this realization, the evolution parameter t, considered in the theoretical model, is mapped onto the propagation length, which we still label as t.
A controlled amount of noise is introduced in the structure by segmenting the waveguides corresponding to the sites of the maze. This is achieved by modulating the writing speed in the fabrication process, which induces a proportional variation of the propagation constant, while keeping the coupling coefficient unvaried46. The value of the propagation constant variation in each segment is randomly picked from a uniform distribution with a given amplitude; the same distribution is used for every waveguide within the same array. The random variation of the propagation constants is equivalent to a random variation of the site energy due to the interaction with an incoherent environment25, hence also effectively adding pure dephasing to the dynamics. This approach has been extensively tested by numerical simulations of this specific implementation as compared with the theoretical Lindblad model discussed in the previous section (for further details, see Supplementary Notes 2 and 3, together with Supplementary Figs 3 and 4).
The waveguide array implementing the sink is in all cases composed of uniform, non-segmented waveguides. To characterize the transfer efficiency to the sink, the output facet of each fabricated structure is imaged onto a CMOS camera (examples of snapshots are shown in Fig. 3c,d), the light intensity on the maze and sink regions of the array is numerically integrated and the fraction of light in the sink is calculated. Technical details of the characterization procedure are given in the Methods section (Characterization measurements: experimental details).
Twenty-four structures were fabricated with the transverse layout as in Fig. 3b, implementing six different propagation lengths for both the noiseless, fully coherent, situation and three different noise configurations with the same strength (that is, same amplitude of propagation constant distribution). Waveguide arrays were inscribed in EAGLE2000 (Corning) glass substrates, by femtosecond laser writing. A Yb-based amplified laser system (FemtoREGEN, HighQLaser) was used, providing laser pulses with 400 fs duration and 300 nJ energy at 1 MHz repetition rate. The laser was focused in the substrate by a 0.45 numerical aperture, × 20 microscope objective, compensated for spherical aberrations at 170 μm below the glass surface, which is the average depth of the fabricated structures. The waveguides yield single-mode operation at the wavelength of 850 nm and the coupling coefficient between nearest-neighbouring waveguides is κ=0.40 mm−1. The amplitude of the random distribution of the propagation constants, adopted in the noise implementation, is Δβmax=0.40 mm−1. The modulation of the propagation constant is achieved by proportionally varying the waveguide writing speed in the 10–40 mm s−1 range (see also Supplementary Fig. 5 for details). In fact, varying the writing speed means changing the amount of deposited energy in the material, which, in the above range, causes a proportional variation of refractive index change and thus of Δβ. The value of the propagation constant is modified every 3 mm of waveguide length.
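The following toy sketch (our reconstruction under the quoted parameters, not the fabrication code) shows how these numbers enter the model: coherent light amplitudes evolve along z as i da/dz = H(z) a, where H(z) carries the coupling κ on nearest-neighbour links and, in each 3 mm segment, diagonal detunings drawn uniformly within ±Δβmax. The 5-waveguide chain and the input choice are assumptions for illustration.

```python
# Toy coupled-mode propagation, i da/dz = H(z) a, for a 5-waveguide chain
# with propagation-constant detunings redrawn every 3 mm segment, using
# the parameter values quoted in the text (kappa, Delta-beta_max in mm^-1).
import numpy as np

kappa, dbeta_max = 0.40, 0.40      # coupling and noise amplitude (mm^-1)
seg_len, L, n = 3.0, 60.0, 5       # segment length, device length (mm), sites
C = kappa * (np.eye(n, k=1) + np.eye(n, k=-1))   # nearest-neighbour coupling
a = np.zeros(n, dtype=complex); a[0] = 1.0       # light injected in waveguide 0

rng = np.random.default_rng(0)
z = 0.0
while z < L:
    beta = rng.uniform(-dbeta_max, dbeta_max, n)  # detunings for this segment
    w, V = np.linalg.eigh(np.diag(beta) + C)      # exact segment propagator
    step = min(seg_len, L - z)
    a = V @ (np.exp(-1j * w * step) * (V.conj().T @ a))
    z += step
print(np.abs(a) ** 2)                             # output intensity distribution
```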
Transfer efficiency
The transfer efficiency to the sink, calculated theoretically with the method reported in ref. 28, is shown in Fig. 4 for a maze with the layout presented in Fig. 3. Such a layout represents the actual structure that we have experimentally implemented and is considered both for the case of fully coherent quantum transport and for the case of partially incoherent transport with p=0.1. Figure 4 shows the experimentally retrieved transfer efficiencies, each point corresponding to a physically different structure fabricated to implement a certain noise map and a given propagation length. The average over the points corresponding to the three different noise implementations is also shown (right panel).
Figure 4: Role of noise in the transfer efficiency time evolution.
Left: Theoretical behaviour of the transfer efficiency as a function of the evolution parameter t for two values of the mixing parameter p, corresponding to QW (p=0) and QSW (p=0.1), for the maze in Fig. 3. Inset: a larger time scale is shown in order to point out the remarkable efficiency enhancement when there is a partial suppression of interference. Centre: Experimental results for both uniform (triangles) and three noisy realizations of the structure reported in Fig. 3. Where not shown the error bars are smaller than the mark size. Right: as in the middle panel, but only with the uniform case (triangles) and with the average efficiency over the noise realizations (circles). In all three panels, the time t is in units of mm because it is experimentally mapped into the propagation length of the three-dimensional waveguide array.
The physical quantity that is measured experimentally is the fraction of light present in the sink after a certain propagation, and not the fraction of light that is transferred to the sink. If propagation losses are the same in the maze waveguides and in the sink waveguides, the two quantities indeed coincide. As a matter of fact, the modulation of the writing speed produces additional losses in the waveguides of the maze with respect to the waveguides of the sink, and this in general causes an overestimation of the transfer efficiency. However, we characterized such additional losses in our structures accurately and simulated their impact on the estimation of the transfer efficiency. The consequent systematic error contribution has been directly taken into account in Fig. 4, whereas the random error contribution is reported with the error bars. Details on the treatment of noise and error contributions in the efficiency estimation are given in the Methods section (Characterization measurements: experimental details).
A very good agreement between theoretical and experimental curves is observed both for the noiseless, fully coherent case, and for the partially coherent transport when considering the average of our ‘noisy’ waveguide implementations. Therefore, this experimental evidence is consistent with our claim that the interplay of noise and interference effects leads to higher efficiency in finding the way out from the maze.
To further assess our experimental observation of a noise-induced enhancement in transfer efficiency in this platform, we fabricated other photonic structures implementing the maze of Fig. 3 at an evolution parameter t=60 mm, but employing a different femtosecond laser writing setup for the fabrication (see Supplementary Note 4 for details). We fabricated one uniform structure and 16 different random ‘noisy’ implementations, 8 with a noise strength Δβmax=0.12 mm−1 and 8 implementing a noise strength Δβmax=0.40 mm−1. As shown in Fig. 5, we can experimentally observe a transfer efficiency of 12.5% for the uniform case, to be compared with an average transfer efficiency of 14.1% and 22.2% for the Δβmax=0.12 mm−1 and Δβmax=0.40 mm−1 cases, respectively. The experimental data are in good agreement with the simulations, which take into account small differences in the waveguide properties with respect to those fabricated with the previous system. It can be noticed that the distribution of the measurements (open circles in Fig. 5) around the mean value spreads with increasing disorder: however, the increase in the average transfer efficiency is clear when the amount of noise approaches the optimum value.
Figure 5: Averaging over different noise realizations.
Experimentally measured transfer efficiency in photonic maze structures with the topology of Fig. 3 and propagation length t=60 mm. Both uniform and different noise maps with Δβmax=0.12 mm−1 and Δβmax=0.40 mm−1 have been implemented. The light grey (left) bars indicate the average over several experimental results obtained for different noise maps with the same amplitude Δβmax (single measurements are reported as open circles). The red (right) bars represent the expected average transfer efficiency, numerically simulated for 1,000 different noise maps with the given Δβmax. Error bars are smaller than the mark size.
To summarize, here we have studied both theoretically and experimentally the dynamics of a walker travelling in a maze, having a single path from the input door (starting point) to the exit (solution). By considering a model that mixes the behaviour of a classical walker and a quantum one, we have found an optimal condition leading to extremely efficient and fast transmission. For large enough maze size, this leads to a remarkably high enhancement of more than five orders of magnitude in the transfer efficiency with respect to both the classical and purely quantum limits. This result is a clear example that decoherence is not always a detrimental phenomenon to be avoided in quantum processes, and it may provide some insight into why natural evolution has made the observation of purely coherent phenomena so difficult.
By exploiting the unique capabilities of the femtosecond laser writing technology, we have unfolded the maze and implemented it in a three-dimensional waveguide array, where a suitable modulation of the waveguide properties allowed us to mimic a partial decoherence of the walker. Our measurements have faithfully confirmed the theoretical predictions and, in particular, the remarkable role of a partial suppression of interference in enhancing transport dynamics in mazes. It is also worth noting that our technological platform has enabled an experimental simulation of a noise-assisted problem in well-controlled conditions and over complex topologies, and can thus represent a very powerful tool for further studies in this direction. These results, together with future full circuit reconfigurability, will pave the way to much more complex integrated photonics devices exploiting interference, quantum features and noise effects for improved problem-solving efficiency, and remarkably fast transmission of information in ICT applications and of energy in novel solar technologies.
Another experimental demonstration of enhanced quantum transport by controlled decoherence has been reported during the preparation of this manuscript54.
Maze construction
The Depth-First Search algorithm is the simplest maze generation algorithm and is based on the following iterative procedure, applied to a regular square grid of N nodes where all neighbouring sites are separated by a wall50. One starts from a random node and then searches for a random neighbour that has not been visited yet. If one is found, the wall between these two sites is knocked down; otherwise, one backs up to the previous node. This procedure is repeated until all sites of the grid have been visited. By doing so, the final structure is a maze with only one path connecting the IN to the OUT node, that is, a so-called two-dimensional perfect maze with no closed loops. Applying this procedure to larger and larger square lattices, one obtains increasingly large maze graphs (see Supplementary Fig. 1 for details).
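A minimal Python sketch of the randomized depth-first-search generation just described (our illustration; the grid size and seed are arbitrary):

```python
# Randomized DFS maze generation: walls are knocked down towards random
# unvisited neighbours; the result is a perfect maze, i.e. a spanning
# tree of the grid with a unique path between any two nodes and no loops.
import random

def dfs_maze(s, seed=0):
    rng = random.Random(seed)
    start = (rng.randrange(s), rng.randrange(s))
    visited, stack = {start}, [start]
    edges = set()                                   # walls knocked down
    while stack:
        x, y = stack[-1]
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < s and 0 <= y + dy < s
                and (x + dx, y + dy) not in visited]
        if nbrs:
            nxt = rng.choice(nbrs)                  # random unvisited neighbour
            edges.add(frozenset((stack[-1], nxt)))  # knock down the wall
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                             # back up to the previous node
    return edges

print(len(dfs_maze(30)))   # a perfect maze on 900 nodes has exactly 899 edges
```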
Characterization measurements: experimental details
The fabricated structures are probed by coherent light. Laser light at 850 nm wavelength is injected into the input waveguide. The output distribution is imaged onto a CMOS camera by means of a 0.12 numerical aperture objective. Numerical integration over different parts of the acquired image allows one to retrieve the fraction of light present in the sink waveguide array. The advantages of this method are, on one hand, insensitivity to coupling losses of the input beam and, on the other hand, the possibility of a fast acquisition of the output of many waveguides.
A careful analysis of the measurement error has been performed. A possible source of error is the quantization of the intensity levels of the CMOS sensor, as well as its finite spatial resolution. To analyse this error contribution, we simulated the numerical integration of Gaussian modes, with the same size as the measured ones, but random intensity and peak position, discretized both in the (256) intensity levels and in the pixels of the spatial profile. Supplementary Fig. 6 shows the (normalized) difference between the numerically calculated integral and the analytic integral of the Gaussian profile, for 1,000 randomly distributed modes, as a function of the peak intensity. Note that, given a certain peak intensity, the numerical integral may give different values depending on the position of the peak, because of pixel discretization. Systematic and random errors are almost independent of the peak intensity and they have been taken into account in the data analysis, assuming that the acquired image contains n modes of random uniformly distributed intensity. As a matter of fact, the contribution of such errors to the experimentally measured efficiencies is relevant only for the shortest lengths, where the light intensity in the sink is particularly low.
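A one-dimensional sketch of this kind of Monte Carlo estimate (our reconstruction; the mode width, the pixel window and the 256 intensity levels are assumptions consistent with the description above):

```python
# Monte Carlo estimate of the integration error from 8-bit intensity
# quantization and pixel discretization, for Gaussian modes with random
# peak intensity and sub-pixel position (1-D version for brevity).
import numpy as np

rng = np.random.default_rng(1)
sigma, xs = 4.0, np.arange(64)            # mode width and pixel window (assumed)
errors = []
for _ in range(1000):
    peak = rng.uniform(0.05, 1.0)         # random peak intensity
    x0 = rng.uniform(28.0, 36.0)          # random sub-pixel peak position
    profile = peak * np.exp(-(xs - x0) ** 2 / (2 * sigma ** 2))
    digitized = np.floor(profile * 255) / 255      # 256 discrete intensity levels
    analytic = peak * sigma * np.sqrt(2 * np.pi)   # exact Gaussian integral
    errors.append((digitized.sum() - analytic) / analytic)
errors = np.array(errors)
print(errors.mean(), errors.std())        # systematic and random components
```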
Furthermore, cascading waveguide segments written with different fabrication speeds (to mimic noise) causes small additional losses at each interface between different waveguide segments. Importantly, these losses are present only in the waveguides representing the maze and not in the sink waveguides. Because of those losses, the fraction of power present in the sink after a certain propagation distance, measured with respect to the overall output power as is done in our characterization process, may not correspond precisely to the fraction of input power transferred to the sink. In fact, the transfer efficiency is slightly overestimated.
Overall additional losses can be measured quite accurately, by simply measuring and comparing the insertion losses of the fabricated devices (ratio of the overall output power to the input power). However, the precise contribution of these losses to each measured transfer efficiency can hardly be retrieved. In fact, the transfer process is not uniform during the propagation and depends on the random noise map implemented.
Thus, to statistically quantify such overestimation, we numerically simulated the light propagation in waveguide structures analogous to the fabricated ones, for 100 different random noise distributions (always with the same amplitude as the one adopted in the experiment), both in the case of waveguides with no losses (which corresponds to the ideal situation) and in the case of waveguides yielding uniform additional losses with respect to the waveguides of the sink, in such a way that the overall losses of the structure correspond to the experimentally measured ones (2 dB additional losses for the longest arrays). We evaluated in each case the estimation error of the transfer efficiency and calculated the error statistical distribution. Supplementary Fig. 7 shows the average error, together with its standard deviation, as a function of the propagation length. The effect of these losses turns out to be small (the systematic component is <3% for 60 mm, corresponding to our longest fabricated devices) and does not significantly influence our experimental observation of an increase in transfer efficiency in the cases in which noise is added.
Additional information
How to cite this article: Caruso, F. et al. Fast escape of a quantum walker from an integrated photonic maze. Nat. Commun. 7:11682 doi: 10.1038/ncomms11682 (2016).
1. Hyafil, L. & Rivest, R. L. Constructing optimal binary decision trees is NP-complete. Inform. Proc. Lett. 5, 15–17 (1976).
2. Shannon, C. Presentation of a maze solving machine. Trans. 8th Conf. Cybernetics: Circular, Causal and Feedback Mechanisms in Biological and Social Systems 169–181 (New York, USA, 1951).
3. Steinbock, O., Toth, A. & Showalter, K. Navigating complex labyrinths: optimal paths from chemical waves. Science 267, 868–871 (1995).
4. Adamatzky, A. Slime mold solves maze in one pass, assisted by gradient of chemo-attractants. IEEE Trans. NanoBiosci. 11, 131–134 (2012).
6. Kempe, J. Quantum random walks: an introductory overview. Contemp. Phys. 44, 307 (2003).
7. Keating, J. P., Linden, N., Matthews, J. C. F. & Winter, A. Localization and its consequences for quantum walk algorithms and quantum communication. Phys. Rev. A 76, 012315 (2007).
8. Schwartz, T., Bartal, G., Fishman, S. & Segev, M. Transport and Anderson localization in disordered two-dimensional photonic lattices. Nature 446, 52 (2007).
9. Lahini, Y. et al. Anderson localization and nonlinearity in one-dimensional disordered photonic lattices. Phys. Rev. Lett. 100, 013906 (2008).
10. Kim, M. et al. Maximal energy transport through disordered media with the implementation of transmission eigenchannels. Nat. Photonics 6, 581–585 (2012).
11. Mülken, O. & Blumen, A. Continuous-time quantum walks: models for coherent transport on complex networks. Phys. Rep. 502, 37–87 (2011).
12. Santha, M. in Theory and Applications of Models of Computation, Lecture Notes in Computer Science (eds Agrawal, M., Du, D. Z., Duan, Z. H. & Li, A. S.) Vol. 4978, 31 (Springer, 2008).
13. Farhi, E. & Gutmann, S. Quantum computation and decision trees. Phys. Rev. A 58, 915–928 (1998).
14. Childs, A., Farhi, E. & Gutmann, S. An example of the difference between quantum and classical random walks. J. Quant. Inf. Proc. 1, 35 (2002).
15. Childs, A. Universal computation by quantum walk. Phys. Rev. Lett. 102, 180501 (2009).
16. Ambainis, A. Quantum walks and their algorithmic applications. Int. J. Quantum Inform. 1, 507 (2003).
17. Grover, L. K. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett. 79, 325 (1997).
18. Childs, A. M., Gosset, D. & Webb, Z. Universal computation by multiparticle quantum walk. Science 339, 791 (2013).
19. Bose, S. Quantum communication through an unmodulated spin chain. Phys. Rev. Lett. 91, 207901 (2003).
20. Christandl, M., Datta, N., Ekert, A. & Landahl, A. J. Perfect state transfer in quantum spin networks. Phys. Rev. Lett. 92, 187902 (2004).
21. Plenio, M. B., Hartley, J. & Eisert, J. Dynamics and manipulation of entanglement in coupled harmonic systems with many degrees of freedom. New J. Phys. 6, 36 (2004).
22. Sánchez-Burillo, E., Duch, J., Gómez-Gardeñes, J. & Zueco, D. Quantum navigation and ranking in complex networks. Sci. Rep. 2, 605 (2012).
23. Kumar, N. & Goswami, D. Quantum algorithm to solve a maze: converting the maze problem into a search problem. Preprint at (2013).
24. Engel, G. S. et al. Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Nature 446, 782 (2007).
25. Mohseni, M., Rebentrost, P., Lloyd, S. & Aspuru-Guzik, A. Environment-assisted quantum walks in photosynthetic energy transfer. J. Chem. Phys. 129, 174106 (2008).
26. Plenio, M. B. & Huelga, S. F. Dephasing-assisted transport: quantum networks and biomolecules. New J. Phys. 10, 113019 (2008).
27. Caruso, F., Chin, A. W., Datta, A., Huelga, S. F. & Plenio, M. B. Highly efficient energy excitation transfer in light-harvesting complexes: the fundamental role of noise-assisted transport. J. Chem. Phys. 131, 105106 (2009).
28. Caruso, F. Universally optimal noisy quantum walks on complex networks. New J. Phys. 16, 055015 (2014).
29. Caruso, F., Huelga, S. F. & Plenio, M. B. Noise-enhanced classical and quantum capacities in communication networks. Phys. Rev. Lett. 105, 190501 (2010).
30. Du, J. et al. Experimental implementation of the quantum random-walk algorithm. Phys. Rev. A 67, 042316 (2003).
31. Ryan, C. A., Laforest, M., Boileau, J. C. & Laflamme, R. Experimental implementation of a discrete-time quantum random walk on an NMR quantum-information processor. Phys. Rev. A 72, 062317 (2005).
32. Schmitz, H. et al. Quantum walk of a trapped ion in phase space. Phys. Rev. Lett. 103, 090504 (2009).
33. Zähringer, F. et al. Realization of a quantum walk with one and two trapped ions. Phys. Rev. Lett. 104, 100503 (2010).
34. Karski, M. et al. Quantum walk in position space with single optically trapped atoms. Science 325, 174 (2009).
35. Broome, M. A. et al. Discrete single-photon quantum walks with tunable decoherence. Phys. Rev. Lett. 104, 153602 (2010).
37. Schreiber, A. et al. A 2D quantum walk simulation of two-particle dynamics. Science 336, 55–58 (2012).
38. Jeong, Y.-C., Di Franco, C., Lim, H.-T., Kim, M. & Kim, Y.-H. Experimental demonstration of delayed-choice decoherence suppression. Nat. Commun. 5, 4522 (2013).
39. Peruzzo, A. et al. Quantum walks of correlated photons. Science 329, 1500–1503 (2010).
40. Perets, H. B. et al. Realization of quantum walks with negligible decoherence in waveguide lattices. Phys. Rev. Lett. 100, 170506 (2008).
41. Owens, J. O. et al. Two-photon quantum walks in an elliptical direct-write waveguide array. New J. Phys. 13, 075003 (2011).
42. Sansoni, L. et al. Two-particle bosonic-fermionic quantum walk via integrated photonics. Phys. Rev. Lett. 108, 010502 (2012).
43. Crespi, A. et al. Anderson localization of entangled photons in an integrated quantum walk. Nat. Photonics 7, 322 (2013).
44. Crespi, A., Corrielli, G., Della Valle, G., Osellame, R. & Longhi, S. Dynamic band collapse in photonic graphene. New J. Phys. 15, 013012 (2013).
46. Corrielli, G., Crespi, A., Della Valle, G., Longhi, S. & Osellame, R. Fractional Bloch oscillations in photonic lattices. Nat. Commun. 4, 1555 (2013).
47. Della Valle, G., Osellame, R. & Laporta, P. Micromachining of photonic devices by femtosecond laser pulses. J. Opt. A Pure Appl. Opt. 11, 013001 (2009).
48. Longhi, S. Quantum-optical analogies using photonic structures. Laser Photonics Rev. 3, 243–261 (2009).
49. Szameit, A. & Nolte, S. Discrete optics in femtosecond-laser-written photonic structures. J. Phys. B At. Mol. Opt. Phys. 43, 163001 (2010).
50. Even, S. Graph Algorithms 2nd edn (Cambridge Univ. Press, 2011).
51. Whitfield, J. D., Rodríguez-Rosario, C. A. & Aspuru-Guzik, A. Quantum stochastic walks: a generalization of classical random walks and quantum walks. Phys. Rev. E 81, 022323 (2010).
52. Viciani, S., Lima, M., Bellini, M. & Caruso, F. Observation of noise-assisted transport in an all-optical cavity-based network. Phys. Rev. Lett. 115, 083601 (2015).
53. Li, Y., Caruso, F., Gauger, E. & Benjamin, S. C. Momentum rejuvenation underlies the phenomenon of noise-assisted quantum energy flow. New J. Phys. 17, 013057 (2015).
54. Biggerstaff, D. N. et al. Enhancing quantum transport in a photonic network using controllable decoherence. Preprint at (2015).
Acknowledgements
We acknowledge M.B. Plenio for carefully reading the manuscript and for very useful and stimulating discussions. This work was supported by the European Union through the project FP7-ICT-2011-9-600838 (QWAD Quantum Waveguides Application and Development), the European Research Council (ERC-Starting Grant 3D-QUEST, 3D-Quantum Integrated Optical Simulation, grant agreement no. 307783), the EU FP7 Marie-Curie Programme (Career Integration Grant, project no. 293449) and by national grants such as the PRIN (Programmi di ricerca di rilevante interesse nazionale) project AQUASIM (Advanced Quantum Simulation and Metrology) and the MIUR-FIRB project (RBFR10M3SB). We acknowledge QSTAR for computational resources, partially based also on GPU programming with NVIDIA Tesla C2075 GPU computing processors.
Author information
Contributions
F.C., F.S. and R.O. conceived the whole project. A.C., A.G.C. and R.O. carried out the experiment; F.C. led and carried out the theoretical work. A.C. and A.G.C. exploited ultra-fast laser writing process to realize the maze structure. F.C. and A.C. performed the numerical simulations of the transport dynamics. F.C., F.S. and R.O. supervised the project. All authors contributed to the discussion, analysis of the results and the writing of the manuscript.
Corresponding authors
Correspondence to Filippo Caruso or Roberto Osellame.
Ethics declarations
Competing interests
The authors declare no competing financial interests.
Supplementary information
Supplementary Information
Supplementary Figures 1-7, Supplementary Notes 1-4 and Supplementary References. (PDF 1070 kb)
|
a2f44d4662440c3b | Corrupt universities, journals, fake news-9.
Corrupt university, journal, fake news
Top journals "endorsed" fake science.
[ Sin of top journals Science, Nature = fake science spreader. ]
(S-1) ↓ Unreal quasiparticle, parallel worlds by
Actually, the current leading theory of everything uniting Einstein relativity and quantum mechanics is just a "science fiction" embracing fantasy extra-dimensions, and its leading physicists still "endorse" an untrustworthy candidate only to protect their dubious "science", trying to keep us ignorant of the truth.
See this week physics is still useless.
Top journal Nature "crushes" true science.
[ Flat earth pseudo-science and twitter's war on free speech. ]
(N-1) ↓ Dubious first room-temperature superconductor in
In Galileo's era, free thoughts, free speech and true "science" were suppressed by the then-academia = medieval church. People had been compelled to accept imaginary flat-earth pseudo-science blindly.
Have you ever thought the same horrible "science-fiction" story is happening even in the present highly-modernized society ?
See this week physics is still useless.
Top journal 'Science' endorses fake science.
[ Today's 'science' sees atoms only as 'fictional' particles. ]
(S-1) ↓ Any microscopes detect only pseudo-particles in ?
Quantum mechanics hampers interpreting the microscopic observation of atoms.
See this week physics is still useless.
Science journals spread fake science.
[ Why cannot we apply 'electrons' to medicine ? ]
(A-1) ↓ Real electron force can't be obtained by
The current so-called "science" is just "illusion" which is getting us nowhere near curing diseases ? Dazzling prizes are just "camouflage" to hide inconvenient truth ?
See this week physics is still useless.
Nobel prize is steeped in fake science.
[ Science journals and Nobel are a source of fake science ? ]
(P-1) Illusory concepts spread by journals and
What if today's sacred "science" taught in schools is just a lie ?
Schools and universities are just the places infested with viruses and science fiction, tormenting students by raising tuition ?
If so, the highest academic organizations such as prestigious Nobel prize and top science journals could be a main source of immoral, fake science.
See this week physics is still useless.
Quantum computer is fiction, forever.
[ Unreal fractional-charge anyon quasi-particle → robust fictional quantum computer ? ← nonsense. ]
(Q-1) ↓ Quantum mechanics needs unreal quasi-particle
Recently, the top peer-reviewed science journal Nature gave sensational coverage to a fictitious quasi-particle called "anyon" which might have finally been found, though it's an unreal particle ?
Quasiparticle with unrealistically-changeable fictitious mass is Not an actual elementary particle. Real particles must have their own unchangeable, definite charge and mass.
Why is a quasi-particle, which is just made of other multiple elementary particles pretending to be an independent (fictitious) particle, such a big deal ? Is it even worth "science" ?
This unscientific tendency of today's academia to worship unreal objects is seen in the way physicists "intentionally misinterpret" the observed electric current called the quantum Hall effect as one caused by illusory quasiparticles.
When electric current is flowing under some electric and magnetic fields, its electric conductance (= reciprocal of resistance ) sometimes shows peculiar quantized values.
When this electric conductance is a fraction times e²/h (= e is the electron charge, h is Planck's constant ), the phenomenon is called the "fractional quantum Hall effect".
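( For reference, the standard textbook form of this quantized conductance, added here only for concreteness and not part of the original text, is: )
$$ \sigma_{xy} = \nu\,\frac{e^{2}}{h}, \qquad \nu = 1, 2, 3, \ldots \ \text{(integer effect)}, \qquad \nu = \tfrac{1}{3}, \tfrac{2}{5}, \ldots \ \text{(fractional effect)} $$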
All real particles such as electrons and protons have their unique, unchangeable, definite charge (= the smallest charge is always "e", which cannot be split into a smaller fractional charge ! ) and mass.
So it is certain that all physical phenomena including this quantum Hall effect are caused by behavior of real electrons and nuclei with fixed charge (= an integer multiple of charge "e" ) and mass which can neither be changed nor divided under any electromagnetic fields.
But today's quantum mechanics is so unrealistic as to say stupid things: this "fractional Hall effect" is caused by "illusory fractional-charge quasiparticle" called anyon ( this p.18 ).
Of course, a real electron can never split into smaller fractional charges except in an "imaginary" world. Ignoring this obvious fact, the Nobel committee awarded the illusory fractional-charge quasiparticle the most prestigious Nobel prize ( this p.17 ). ← The end of real "science".
Furthermore, today's physics claims electrons inside this quantum Hall effect can unrealistically change not only their charge but also "mass" into fictitious effective mass which can even be fictional negative mass ( this p.5 ).
So the only current atomic theory = quantum mechanics, relying on a fake quasi-particle model with unrealistically-changeable charge and mass, clearly disagrees with real-world phenomena caused only by "real particles"; hence, quantum mechanics is false.
What's the true physical mechanism behind this quantum Hall effect allegedly riddled with imaginary fractional-charge quasiparticle anyons ?
One of these unreal anyon quasiparticles is said to be Majorana fermion.
Each time it has to face "truth", quantum mechanics always withdraws into its own 'shell' isolated from real world, and says quantum Hall effect may be caused by unphysical "topology" full of gibberish like "Berry phase" and "Chern number".
What the heck are these "topology", Berry phase and Chern number ? ← These are real objects ? Can we touch them ?
Unfortunately all these unfamiliar gibberish concepts are just ghost-like illusion, appearing only inside theoretical physicists' heads.
The artificial concept called the Berry phase was introduced as a "fictitious (= Not real ) magnetic field" appearing only inside fake quasi-momentum (= k ) space.
Berry phase is neither real phase nor real magnetic field ( this p.5, this p.4 upper ), hence No physical meaning.
Surprisingly, insane quantum mechanics allows this fake magnetic field = Berry phase to have unrealistic magnetic monopole called Chern number ( this p.5 right-lower, this p.6 ).
The hypothetical magnetic monopole is just a fiction which is said to unrealistically separate the north and south poles of a magnet.
Of course, unphysical magnetic monopole can Not exist in our daily life except as fake quasiparticle.
So quantum mechanical fake picture of Hall effect haunted by illusory quasi-particle, monopole and fictitious magnetic field called Berry phase is just a nonsense unscientific theory telling us nothing about true physical mechanism based on real particles.
The media and academia repeatedly insist future quantum computer utilizing fictional quasiparticle would be "robust" against disturbance (= noise ), error-free, fault-tolerant, because it is "topologically protected" ?
↑ Fictional quasiparticle has such a "supernatural" power, which real particles don't have, for future imaginary computer, though quasiparticle itself does Not even exist ? ← How contradictory today's quantum theory is !
Why can physicists so confidently and unrealistically assert that the still-imaginary quantum computer based on unreal quasiparticles ( which Microsoft pursues in vain ) could be such a superb, robust and error-free dreamlike machine ? ← Out of their minds ?
What are "physical grounds" for this imaginary "robustness" of still-unrealized quantum computer ?
They circulate the unsubstantiated hypothesis that when the world lines (= illusory trails ) of anyon quasiparticles pass around each other, those fictitious trails intertwine with each other like imaginary threads or shoestrings and form "braids" which are hard to separate; hence, they could make a hypothetically robust and stable quantum computer ( this 3rd paragraph ) ?
↑ This explanation is nonsense and groundless, Not referring to any real physical objects, hence, showing No detailed convincing reason why imaginary quantum computer could be robust.
In fact, there is No real evidence that imaginary future quantum computer could be robust. Physicists just artificially invented "fake rules or principles" which are physically meaningless, irrelevant to our real world.
When we exchange positions of two real particles twice or make one particle go around another particle and return to its original position, the whole situation also returns to its original state.
But according to their artificial hypothesis, when we exchange unreal anyon quasiparticles or make one anyon go around another anyon and return to its original position, the whole situation becomes completely different from the initial state !
↑ Of course, such an occult thing would never happen in our real world, so anyon quasiparticle is purely fiction, but physicists recklessly try to make imaginary quantum computer based on fictional anyon quasiparticle and its artificially-invented hypothesis. ← nonsense.
This uncanny anyon-quasiparticle behavior is expressed in the nonphysical way that exchanging two anyon quasiparticles causes their wavefunctions to acquire an additional fictitious phase (= e^{iθ} ).
This fictitious phase (= e^{iθ} ) is supposed to take "any" value of θ, so Nobel laureate Frank Wilczek named this unreal quasiparticle the "anyon" ( this p.18 ).
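( For concreteness, in standard textbook notation not found in the original text, the exchange rule being criticized here reads: )
$$ \psi(x_{2}, x_{1}) = e^{i\theta}\,\psi(x_{1}, x_{2}), \qquad \theta = 0 \ \text{(bosons)}, \quad \theta = \pi \ \text{(fermions)}, \quad 0 < \theta < \pi \ \text{(anyons)} $$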
← So even today's top physicists are wasting time in "fictional objects".
Different from real particles, quasiparticle anyons leave their "fictitious trails" in space where anyons passed around. ← Though physicists never clarify what this trail caused by anyon-quasiparticle is made of.
This fictitious unseen trail allegedly left by anyon quasiparticle could be entangled with other anyon's trail, which engraves "fictitious phase" in their trail as an information of how anyons passed each other, according to their ad-hoc hypothesis.
They try to exploit this "fictitious trail or phase" caused by exchanging two hypothetical quasiparticle anyons as "memory (= information )" of imaginary future quantum computer ( this p.13 ). ← nonsense.
↑ This artificial rule physicists suddenly fabricated out of nowhere about swapping quasiparticles causing unseen trail or phase ( this p.3 ) is meaningless with No reasonable grounds, because it has nothing to do with our real world from the beginning.
As long as anyons are fictitious quasiparticles (= Not real elementary particles ) which cannot be isolated or confirmed, their empty theory about "fictitious trail or phase" left by anyons is also invalid and useless for any dreamlike future computers.
Two hypothetical anyon quasiparticles pass each other, exchange their positions (= called "braiding", this 5th last paragraph ) and cause an unseen ghost-like phase shift (= e^{iθ} ) in their fictitious trails.
↑ Physicists artificially interpret this picture as the way this artificial phase shift information would be encoded in intertwined quasiparticles's trails like a braided cord or hair, which can be loosened but Not undone ? ← so quantum computer could be robust ? ← nonsense.
↑ This imaginary hypothesis treating the fictional intertwined trails (= memory information ? ) left by unreal anyon quasiparticles as "robust braided cords or threads" could be a reason why a future quantum computer based on quasiparticles would be robust and immune to error ( this p.3 left ) ? ← Impossible !
Imaginary paths or trails of two quasiparticles are "braided" like imaginary threads or shoelaces tied in knots, and those knots could be used for an imaginary robust quantum memory, just because knots are usually hard to untie (= hence, robust ? ), regardless of whether those knots are real or not ( this 4th-5th paragraphs, this 8th-10th paragraphs ) ? ← A baseless, empty hypothesis !
↑ This reason why fictional quantum computer based on unreal quasiparticles could be "robust" is insane and meaningless, because all these anyon quasiparticles and their fictitious trails or phases ( this p.2 right ) allegedly created by swapping two unreal quasiparticles are just illusory artificially-proposed concepts ( this p.3 right middle ).
So there is No physical evidence that illusory quantum computer could be robust against noise, empowered by imaginary fractional-charge quasiparticles.
The media and academia are blatantly spreading fake news such as "robust dreamlike quantum computer !" based only on groundless hypothesis.
Actually, Nobel laureate Frank Wilczek = the namer of the 'crazy anyon' seems obsessed only with how to use his pseudo-science and Nobel prize as a "cheap political tool", like extra-dimensional Witten, because their so-called "science" can never be put to practical use.
So the latest sensational news published in Nature physics claiming "direct observation of (unreal) anyon quasiparticle (← ? )" is nonsense and meaningless research, as long as quasiparticles themselves do Not really exist.
← This "sad" fact would never change, as long as we embrace unphysical quantum mechanics.
Quasiparticle anyons ( and imaginary quantum computer ) still ( and forever ) have No practical application ( this 2-4th last paragraphs ).
Physicists just measured some change of "electric current" or "conductance", fitted these observed electric parameters to their ad-hoc anyon pseudo-model and claimed this may be a "direct evidence of anyons" , without directly isolating or observing fictitious anyon quasiparticle ( this 7th paragraph, this p.3 right ).
The present quantum mechanics, based on the unphysical Schrödinger equation, is completely useless in describing multi-electron materials, so it has to insanely treat a whole many-electron material as a one-pseudo-electron or unreal-quasiparticle model with fake mass and charge.
This is the reason why physicists had to introduce non-existent fractional-charge anyon quasiparticles to explain the electric conductance seen in the quantum Hall effect, ignoring the fact that an elementary particle = the electron can never split into smaller fractional charges.
Fractional quantum Hall effect can be realistically explained using quantized electron's de Broglie wave ( this p.7 ) where the electric conductance changes depending on the "number of electron's orbital layers and vacant quantized orbits" under different electromagnetic fields.
↑ This realistic interpretation of quantum Hall effect needs "real many electrons" which are actually moving and causing real de Broglie waves, which is impossible in today's unrealistic quantum mechanical picture where physicists always have to treat the whole multi-electron solid as unrealistically "indistinguishable one-pseudo-electron band model".
See this week physics is still useless.
Quantum mechanical model is ridiculous.
[ Band model uses unreal quasi-electron with fake mass and quasi-momentum. ]
(Q-1) ↓ Fake quasi-particle mass model = ?
The current atomic theory = quantum mechanics is weird and unrealistic.
Quantum mechanical calculation tool = nonphysical Schrödinger equation cannot even specify the exact position and velocity of each electron.
← True atomic behavior would be undisclosed by today's "science" forever.
This weird physical theory claims each electron must always spread all over space as vague probability cloud (= still lacks physical reality ) which means an electron exists in multiple different places simultaneously using fictional parallel worlds.
This fuzzy electron cloud covering all space in quantum mechanics causes serious problems and contradiction. It cannot even generate the most fundamental force = Coulomb electric force enough to form molecular bonds.
To generate fake molecular bond energy instead of using real Coulomb force, quantum mechanics has to rely on crazy assumption that all different electrons are indistinguishable existing in all different atoms and places simultaneously using fantasy parallel worlds.
These "ghost-like" electrons always existing in every place and every atom by splitting into multiple parallel worlds cause nonphysical "exchange energy" which occult energy cannot be described by any real physical objects on the earth.
This nonphysical exchange energy is supposedly essential for causing main attractive pseudo-(= Non Coulomb ) energy in molecular bond ( this p.3 ) and repulsive pseudo-energy in Pauli principle. ← This pseudo-energy is unrealistic, because there is only "exchange energy" with No "exchange force" or force carrier.
All physicists failed to give realistic physical meaning to this uncanny exchange energy ( this p.6, this p.11 ). Feynman finally admitted Nobody understands occult quantum mechanics.
So quantum mechanics, which can Not use any real physical force between particles or distinguish each individual electron, is a completely useless dead theory, because it is inherently unable to describe multi-particle reactions in many-electron solids and materials in a realistic way.
Then, how are today's physicists using this useless, obsolete quantum mechanics to describe actual many-particle (= many-electron ) materials such as metals, molecules, crystals, semiconductors and solar cells ?
Unbelievably, quantum mechanics resorts to illegitimate tricks of treating the whole many-electron material as "one single fake quasi-electron" model ( this p.2 ) called band theory ( this p.2 ).
In this bogus quantum mechanical approximation called "band theory", its one pseudo-electron representing the entire material is supposed to have fictitious mass (= called effective mass, which pseudo-mass can be freely changed even into unreal negative mass, this p.51 ), fake quasi-momentum and effective-pseudo-potential energy ( this p.11 ).
All today's physicists can do is artificially manipulate and change these fake physical concepts such as quasi-electron's effective mass and other pseudo-parameters (= spin-orbit interaction ) to fit experimentally measured results ( this p.3, this p.6 ) instead of predicting new physical phenomena using actually-existing real particles.
Quantum mechanics tries to hide and pack various complicated many-electron phenomena into "nonphysical band model" with artificially-manipulable fake electron's mass which can be freely changed = contradicting the fact that all real particles such as an electron and proton have their unique unchangeable definite mass.
So quantum mechanics, contradicting physical reality by manipulating originally-unchangeable particle masses, is intrinsically useless, with No ability to predict any new physical values based on real particles.
Each time we observe some new physical phenomena of materials, quantum mechanics just forces us to fabricate new artificial pseudo-particle models and manipulate their nonphysical parameters such as fake electron's effective mass, instead of using real electrons with definite unchangeable mass.
Quantum mechanics can only roughly express the relation between energy and pseudo-momentum as a nonphysical band (= band theory can Not specify each particle's position ! ) of quasi-electron with unreal (= effective, this p.3 ) mass which tells us nothing about true mechanism of how exactly real electrons and nuclei interact with each other and cause various physical phenomena inside materials.
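( For reference, the band-theory "effective mass" criticized above is defined, in standard textbook notation added only for concreteness, from the curvature of the band energy E(k); near a band maximum the curvature is negative, which is exactly where the "negative mass" comes from: )
$$ m^{*} = \hbar^{2} \left( \frac{d^{2}E}{dk^{2}} \right)^{-1} $$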
All materials can be roughly classified into three groups: conductor (= through which electron current can easily flow ), insulator (= electron current cannot flow ) and semi-conductor (= electric current conductance lies between conductor and insulator ).
Quantum mechanics describes this macroscopically-observable electric conductance using only two nonphysical linear bands (= of course, these nonphysical bands themselves cannot be seen ) which include No real electron picture.
The upper band is called "conduction band" where electrons can flow as electric current, and the lower band is called "valence band" where electrons are tightly bound to their nearest nuclei, causing no electric current (= How exactly electrons are actually moving and interacting in each band is unknown forever as long as we embrace quantum mechanics ).
When the energy band-gap between these conduction and valence bands is wide ( or narrow ), electrons cannot ( or can ) move up to conduction band or flow as electric current.
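( For concreteness, a standard semiconductor rule of thumb, not in the original text: the number of electrons thermally excited across a gap E_g at temperature T is suppressed by the exponential factor )
$$ n_{i} \ \propto\ e^{-E_{g}/2k_{B}T} $$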
The point is that the energy levels of these bands and the energy gap between the conduction and valence bands have to be experimentally determined after measuring the electric conductance of each material.
It is Not that quantum mechanical band model itself can 'predict' new energy gap or state of unknown material. Instead, after experimentally measuring electric current of materials, physicists artificially adjust (= manipulate ) and determine the energy band-gap parameter of each material to fit experimental results.
Therefore, quantum mechanics and its band model can actually do nothing about 'predicting' new physical values, instead, it just "adjusts" free parameters such as band-gap energy to experimental results. ← There is No harm in forgetting this meaningless quantum mechanical theory.
Topological insulator is a composite material that behaves as an insulator in its interior but whose surface behaves as a conductor.
This exotic topological material got Nobel physics prize in 2016, but it still has No practical application ( using quantum mechanics ).
Because quantum mechanics tries to aim at illusory energy-efficiency utilizing unreal quasiparticles allegedly popping up (= of course, these ghost-like quasiparticles are directly undetectable ) inside topological insulator.
All research papers on this topological insulator use only the ambiguous, hackneyed word "potential" ( like seen in this abstract ). ← These indecisive expressions in the media mean there is still No practical application of topological material allegedly containing fictitious quasiparticles.
How does quantum mechanics describe physical mechanism (= conductor only on the surface ) of topological insulator ?
According to quantum mechanics, the upper conduction band has fictitious positive electron's effective mass and the lower valence band has unrealistic negative effective mass.
Physicists claim topological insulator's band has an imaginary 'point of intersection' between these two bands of positive and negative effective masses.
So this "middle point" indicates the existence of fictitious zero-mass fermion ( this p.20 ) which is also an unreal quasiparticle. ← this quantum mechanical explanation based on unreal quasiparticles is far from "true mechanism" inside topological insulator.
An energy-efficient solar cell is indispensable for stopping global warming ( if it were real ). To do so, we need to elucidate the true physical mechanism inside the solar cell.
Quantum mechanics says an electron in valence band absorbing a photon (= a particle of light ? ) is excited and moved up to conduction band in solar cell.
But, first of all, a real electron cannot interact with (= absorb or emit ) a real photon, so if electromagnetic wave is not light wave but a photon particle, this photon must be unreal "virtual photon" with unphysical imaginary mass.
After absorbing this imaginary photon, quantum mechanics says the electron interacts with the energy of a "phonon" which is one of unreal quasiparticles representing thermal atomic vibration of material.
↑ So, quantum mechanics can Not describe actual atomic motion or vibration inside material using real particles such as electrons and nuclei, instead, it has to rely on fictitious quasiparticle model ( atomic vibration → phonon quasiparticle ) even now. ← Quantum mechanics is intrinsically unable to describe real physical mechanism based on real particles.
And then, an excited electron into conduction band and a hole (= created by excited electron ) left in valence band are said to bind each other to form another quasiparticle called exciton ( this p.7, this p.2 ).
Contrary to the media's sensational narration, all these fictitious quasiparticles such as exciton and phonon are expressed just as nonphysical, abstract math symbols with No physical figure ( quasiparticle = a, b, c ? .. this p.8 ) whose too simple interaction equation tells us nothing about detailed physical mechanism of how real electrons have to form unreal quasiparticles inside solar cell ( this p.2 ).
This unreal quasiparticle exciton is also supposed to have its fake (= effective ) mass which is a freely changeable value different from true electron's definite mass.
All other quasiparticle's pseudo-parameters are also freely-adjustable ( this p.6 ) meaningless physical values in predicting some actual values or phenomena.
And quantum mechanics says this unreal exciton quasiparticle decomposes into other quasiparticle called polaron ( this p.3, this p.2 ) which pseudo-polaron-quasiparticle just represents some state where an electron inside solid interacts with other electrically-polarized crystal atoms or nuclei.
↑ Again, quantum mechanics refuses to describe interaction between electron and other atoms inside solar cell using real particles, instead, it always has to rely on unreal quasiparticle model even now and forever. ← Quantum mechanics is intrinsically an unscientific dead theory which has nothing to do with real objects.
This polaron quasiparticle is also supposed to have its fake effective mass (= freely-adjustable parameter, so meaningless for predicting new physical values, this p.8 ).
↑ It is impossible to know the true underlying behavior of atoms and (real) electrons inside a solar cell under the current dead atomic theory. ← How can we realize a "miracle energy-efficient solar cell" satisfying the demanding Green-new-deal ? ← An impossible dream, forever !
You might have often heard the media's hackneyed phrases "modern computer's transistors wouldn't have been realized without quantum mechanical band theory." ← This is a complete lie.
As I said, quantum mechanical band theory replaced actual many-electron behavior or reactions by "one-pseudo-electron" and unreal quasi-particle model with fake (= effective ) mass which is freely-adjustable (= contradicting real particle's definite mass ) to artificially fit observed phenomena.
So, the quantum mechanical band theory based on unreal concepts has No ability to predict true transistor's mechanism of how real electrons are actually moving, responding and interacting with each other when a computer is running.
Semiconductors used in modern computers' transistors were invented by researchers' persistent efforts, long experience and a "trial and error" approach, where the existing quantum mechanical theory was completely useless for developing or predicting new transistors.
Transistors and LED consist of two types of semiconductors (= p and n ) including different impurity atoms.
These different semiconductors have different band energies (= heights of conduction or valence band energies are different between different p- and n- type semiconductors ).
Each band is supposed to contain different "one-pseudo-electron" and quasi-particle model with fake effective mass (= freely-changeable, different from true electron's unchangeable mass, and their band pseudo-energy gap is also freely-adjustable meaningless parameter ).
This quantum mechanical band model intrinsically can never explain or predict real electron's interaction inside computer transistor due to its unphysical quasi-particle and effective mass model, as seen in other topological insulator and solar cells.
Depending on components and the magnitude of voltage applied to these transistors, they just artificially change and manipulate the energy levels of each semiconductor's conduction and valence bands ( by which electron's fake effective mass is also manipulated ) after experimentally measuring the change of electric conductivity depending on applied voltage ( this p.19, this p.156 ).
↑ Quantum mechanics is helpless, it cannot 'predict' how each band energy changes responding to applied voltage in transistors in advance.
So it is Not that the quantum mechanical band model contributed to developing semiconductors for modern PC transistors; rather, after physicists invented practical semiconductors by the "trial and error" approach, they artificially adjusted fictitious quantum mechanical band-model parameters such as the fake electron's effective mass and band energy to fit observed semiconductor phenomena.
Developing useful transistor's semiconductor was the first. → Fabricating quantum mechanical band model with fake quasi-particle and effective mass was later.
← Quantum mechanics has nothing to do with modern computers and smartphones.
Superconductivity is a physical property observed in certain materials where electrical resistance vanishes and magnetic flux fields are expelled.
The quantum mechanical BCS theory got the Nobel prize in 1972 by allegedly explaining the physical mechanism of how superconductivity occurs.
How could weird quantum mechanics describe superconductor ?
BCS theory claims unreal quasiparticle called "virtual phonons" unrealistically bind two repulsive electrons and form fictitious Cooper pair ( this p.9 ), which is said to condense into the lowest superconducting band-energy state called Bose-Einstein condensate.
But it's impossible for two negative electrons to attract each other disobeying Coulomb repulsion between electrons.
This unreal virtual phonon model is supposed to approximately represent "vibration of many positive nuclei or atoms" which appear to attract a pair of electrons via fictitious quasiparticle phonons.
As you see, quantum mechanics just replaced interactions caused by real electrons and nuclei by fictitious quasi-particle model. ← Because today's quantum mechanics intrinsically cannot deal with complicated many-electron behavior inside material using real particles and real forces, so completely useless atomic theory as a tool for other applied science.
And quantum mechanics says this Cooper pair can break into another unreal quasiparticle called Bogoliubov quasiparticle ( this p.9 ).
These phonon quasiparticles and Cooper pairs can be expressed only as abstract, nonphysical math symbols which tell us nothing about the true detailed mechanism of what is actually happening inside a superconductor via real electron interactions.
We have to discard today's quantum mechanical unreal quasiparticle concept and nonphysical band model which just artificially fabricate and manipulate quasi-electron's fake effective mass to fit experimental results = hence, useless for 'predicting' new actual physical phenomena.
Otherwise, we can never clarify true underlying atomic behavior and mechanism using real physical objects or electrons' interaction. ← Using fictional quantum mechanical concepts in other applied science such as medicine and nano-technology is impossible forever (= probably, quantum mechanical unreal quasi-particle model will be helpless in drug development ).
Unfeasible Green new deal lacking technological innovation due to today's useless basic atomic theory has become just a pie-in-the-sky "political tool" like other fake science for outdated academia colluding with corporations to get taxpayers' money, pretending to be "real science".
The current schools and meaninglessly-expensive educational facilities were reduced to a mere hotbed of harmful viruses, = useless except for indoctrinating students with superstition called "fake science".
Actually physics professors are obsessed only with killing time by using their pseudo-science such as extra-dimensions as pseudo-political tools. ← Our science technology will Never progress if we blindly continue to believe in this "superstitious academia". = Deadly diseases won't be cured, either.
See this week physics is still useless.
Quantum mechanics fails to explain physical phenomena.
[ Topological insulator is caused by unreal quasi-particle ? ]
(T-1) ↓ Unphysical reason = quasiparticle, time symmetry
Coronavirus reinfections raise the possibility that ongoing vaccine development for acquiring herd immunity is doomed to fail.
We need to find other treatments for damaging viruses as soon as possible.
But the current outdated "science" is completely helpless except for protecting academia's old vested interests at the sacrifice of students.
Because the current mainstream science = quantum mechanics, dominant pseudo-atomic theory is far from clarifying actual physical phenomena.
The current basic physics refuses to use real physical objects as a tool to unravel true underlying mechanism.
As long as we blindly embrace the current fictional quantum mechanics as an absolute "science", we could never find a true cure or develop effective drugs for deadly diseases and viruses.
Here are some typical examples of us worshiping quantum mechanical falsehood even in the highest-ranking academia. The Nobel prize in physics in 2016 was awarded to exotic matter called the "topological insulator".
Getting the most prestigious Nobel prize means this "topological material" has actually improved our daily life or cured some diseases ? Unfortunately, this exotic matter did nothing. It is still ( and forever ) an impractical matter.
Nobel committee used vague phrases " opened the door on an unknown world where matter can assume strange states ? .. Many people are hopeful of future applications ?" ← so, still No practical application.
"Topology" is originally the branch of mathematics concerned with abstract doughnut shape, Not a physical entity.
In physics, topological insulator is used just as a general name for some "composite material" that behaves as an insulator in its interior but whose surface contains conducting states. ← That's all. No more detailed physical meaning or explanation.
Topological insulator is often said to have the "potential (= Not realized yet )" to be an "energy-efficient device" in the future. ← What can cause this "energy-efficiency" ?
Surprisingly, quantum mechanics says fictional "massless quasiparticle" may cause this strange "energy-efficiency" (= just an unsubstantial theory, No experimental proof ), because they say "massless particle" would travel faster with less energy loss, if it really existed. ← Of course, we cannot see such a fictional ghost-like quasi-particle.
So the only present atomic theory = quantum mechanics can Not explain any actually-happening physical phenomena or their underlying mechanisms using real particles or objects. ← The current "science" intentionally avoids physical "reality".
Instead, quantum mechanics has to rely on unreal (= unseen, intangible due to "nonexistent" ) quasiparticle model which pseudo-particles allegedly have fake mass (= called "effective mass" ) and charge to explain macroscopic phenomena.
It means even if we continue to use this unrealistic quantum mechanics, underlying atomic behavior and true mechanism will be unknown forever, because this mainstream "science" firmly forbids us from using real particles as basic atomic tool to describe actual many-particle physical phenomena.
Schrödinger equation = fundamental calculation tool of quantum mechanics can neither distinguish different individual electrons nor handle multi-electron materials in a practical and realistic way. ← Completely useless equation as an atomic tool.
Therefore, the present physics has to treat any many-electron materials as a single pseudo-electron model called band theory where a fictitious quasi-electron with fake (= effective this p.5 ) mass (= which could even be unreal negative mass ) is expressed as nonphysical linear bands including fictitious quasi-momentum (= instead of real particle's motion ) and pseudo-potential energy ( this p.2 ).
The media repeatedly emphasizes this topological insulator would be "robust" because it is protected by nonphysical thing called "time reversal symmetry". ← What the heck is this "time symmetry" ?
This "time reversal symmetry" is just an artificially-introduced, meaningless concept which has nothing to do with our real world. ← So, quantum mechanics losing touch with reality can Not give any logical reason why topological insulator could be "robust" in the "imaginary" future.
"Time reversal" is a kind of "unrealistic situation" where clock time is literally reversed using fictional "time machine" and each particle starts to move or spin in the opposite direction like in Alice's wonderland.
Quantum mechanics insists illusory quasiparticles called "massless Dirac, Weyl, Majorana fermions" with the opposite spins are moving in the opposite directions on the surface of topological insulator ( this p.9 ). ← Of course, we cannot directly see these fictional quasi-particles. These are just "imagination".
All real particles such as electrons, protons and nuclei inside material have their peculiar definite mass, so these uncanny massless quasiparticles allegedly popping up on the surface of topological insulator are purely illusionary objects.
This quasiparticle's imaginary motion in two opposite directions allegedly possessing unphysical spins (= represented just as two arrows with No physical shape ) are expressed as abstract ad-hoc band model which tells us nothing about what real shape each particle has or how they are actually moving inside material.
If the clock time (= t ) could be reversed in imaginary world, the direction of motion (= momentum and spin ) of this fictitious quasiparticle should also change.
So a massless quasiparticle with up (or down ) spin moving rightward (or leftward ) should change into a particle with down (or up ) spin moving leftward (or rightward ) under this "imaginary" time reversal. ← The initial state is supposed to contain quasiparticles with both right- and leftward motions, up and down spins; hence, the whole state is unchanged under time reversal, which they call "time reversal symmetry ( this p.7 )".
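( In standard textbook notation, added here only for concreteness, the time-reversal operation T under discussion acts as: )
$$ T: \quad t \to -t, \qquad \mathbf{p} \to -\mathbf{p}, \qquad \mathbf{S} \to -\mathbf{S}, \qquad \mathbf{x} \to \mathbf{x} $$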
Though physicists argue this nonphysical "time reversal symmetry" is the only reason for "robust" topological insulator (← ? ), this logic is nonsense, telling us nothing about real physical reason or concrete mechanism of what exactly causes conducting surface of topological insulator, covered up by fictional quasi-particles.
This time reversal and quasiparticle model are just illusory concepts existing only inside theoretical physicists' heads, Not in the real world.
Physicists needed to make up fictitious concepts of unreal massless quasiparticles and artificial nonsense rule (= time reversal symmetry ) just to explain special conductivity of material (= actually irrelevant to unreal quasiparticle ) in the current irrational mainstream quantum "science".
Furthermore, they recklessly try to use this illogical topological insulator allegedly consisting of fictitious quasiparticle for pie-in-the-sky quantum computer. ← How can fictitious ghost-like quasi-particles be applied to a practical sophisticated computer ? ← Impossible !
Recently, the media started to sensationalize new crystal called "time crystal (= time travel ? )" which was first proposed by Nobel prize winner Frank Wilczek.
When time crystals ( or ions ) are oscillated at a certain frequency by some external stimulus ( such as a laser ), the time crystals are supposed to respond and faintly oscillate with a different frequency or period from the external oscillation.
That's all. Nothing mysterious. This time crystal just "faintly oscillating at a peculiar frequency" is completely useless with No practical application ( this 5th last paragraph ) yet.
So the media and academia seem to try to use the name of prestigious "Nobel prize" in order just to attract public attention and make intrinsically-useless time crystal "look" meaningful (even by exploiting irrelevant fictional time travel, this 1st sentence ).
What causes this time crystal ? Again, quantum mechanics tries to explain this faint oscillation of time crystal as the result of nonphysical "time symmetry breaking" without showing any more detailed realistic mechanism. ← nonsense theory.
Actually, the original paper of Nobel laureate Frank (= the first advocate of the still-useless time crystal ) just mentioned a nonphysical abstract concept (= "time reversal symmetry", this p.2 left ) with No detailed real physical mechanism behind the time crystal.
And Frank unscientifically tries to connect this time crystal to irrelevant fictional extra-dimensions. ← Like in this way, the current "mainstream basic science" has pursued only meaningless imaginary concepts since its foundation 100 years ago.
Even in the latest research on this time crystal, physicists try to explain helium-3 faint magnetic oscillation using unreal quasiparticle model. ← nonsense.
As long as quantum mechanics relies on fictitious quasi-particle or nonphysical fabricated concept = symmetry, true physical mechanism will never be clarified covered up under fictitious concepts.
Energy-efficient solar cell is crucial for the so-called "Green-new-deal" to evolve from the current radical crazy impossible dream into some realistic one.
But the basic atomic theory = quantum mechanics always tries to explain the underlying mechanism of solar cell using unreal quasiparticle exciton model with fake (= effective ) mass ( this p.4, this p.3 ) which tells us nothing about detailed real mechanism of what's going on inside solar cell.
Hence, ironically, the current mainstream "science" worshiped by greeners makes the Green-new-deal a useless, radical, impossible dream, forever.
Green-politicians intentionally ignore the fact that colder weather could increase more victims of coronavirus and damage economy next winter = far earlier than imaginary warmer future based on dubious data of climate change whose technology makes No progress due to the current fictional "science", as I said.
Actually, the recent world's weather is not so hot as the media, "scientists" and Greta make a big fuss about it.
If "scientists" brainwashing people with unreal extra-dimenions pull the strings behind presidential candidates, we would be forced into the world of more "darkness" and ignorance where only science fiction dominates, pretending to be "science".
See this week physics is still useless.
Physical meaning is still unknown.
[ Scientists blindly use irrational Schrödinger equation. ]
(S-1) ↓ Quantum mechanics uses unreal equation
Quantum mechanical Schrödinger equation has No ability to describe reality.
See this week physics is still useless.
Continued from
See previous version of criticizing top journals.
2020/8/21 updated. Feel free to link to this site. |
f7dae7f4d98b098b | Archive for December, 2018
Incompleteness ex machina
Sunday, December 30th, 2018
I have a treat with which to impress your friends at New Year’s Eve parties tomorrow night: a rollicking essay graciously contributed by a reader named Sebastian Oberhoff, about a unified and simplified way to prove all of Gödel’s Incompleteness Theorems, as well as Rosser’s Theorem, directly in terms of computer programs. In particular, this improves over my treatments in Quantum Computing Since Democritus and my Rosser’s Theorem via Turing machines post. While there won’t be anything new here for the experts, I loved the style—indeed, it brings back wistful memories of how I used to write, before I accumulated too many imaginary (and non-imaginary) readers tut-tutting at crass jokes over my shoulder. May 2019 bring us all the time and the courage to express ourselves authentically, even in ways that might be sneered at as incomplete, inconsistent, or unsound.
Thursday, December 27th, 2018
I’m planning to be in Australia soon—in Melbourne January 4-10 for a friend’s wedding, then in Sydney January 10-11 to meet colleagues and give a talk. It will be my first trip down under for 12 years (and Dana’s first ever). If there’s interest, I might be able to do a Shtetl-Optimized meetup in Melbourne the evening of Friday the 4th (or the morning of Saturday the 5th), and/or another one in Sydney the evening of Thursday the 10th. Email me if you’d go, and then we’ll figure out details.
The National Quantum Initiative Act is now law. Seeing the photos of Trump signing it, I felt … well, whatever emotions you might imagine I felt.
Frank Verstraete asked me to announce that the University of Vienna is seeking a full professor in quantum algorithms; see here for details.
Why are amplitudes complex?
Monday, December 17th, 2018
[By prior agreement, this post will be cross-posted on Microsoft’s Q# blog, even though it has nothing to do with the Q# programming language. It does, however, contain many examples that might be fun to implement in Q#!]
Why should Nature have been quantum-mechanical? It’s totally unclear what would count as an answer to such a question, and also totally clear that people will never stop asking it.
Short of an ultimate answer, we can at least try to explain why, if you want this or that piece of quantum mechanics, then the rest of the structure is inevitable: why quantum mechanics is an “island in theoryspace,” as I put it in 2003.
In this post, I’d like to focus on a question that any “explanation” for QM at some point needs to address, in a non-question-begging way: why should amplitudes have been complex numbers? When I was a grad student, it was his relentless focus on that question, and on others in its vicinity, that made me a lifelong fan of Chris Fuchs (see for example his samizdat), despite my philosophical differences with him.
It’s not that complex numbers are a bad choice for the foundation of the deepest known description of the physical universe—far from it! (They’re a field, they’re algebraically closed, they’ve got a norm, how much more could you want?) It’s just that they seem like a specific choice, and not the only possible one. There are also the real numbers, for starters, and in the other direction, the quaternions.
Quantum mechanics over the reals or the quaternions still has constructive and destructive interference among amplitudes, and unitary transformations, and probabilities that are absolute squares of amplitudes. Moreover, these variants turn out to lead to precisely the same power for quantum computers—namely, the class BQP—as “standard” quantum mechanics, the one over the complex numbers. So none of those are relevant differences.
Indeed, having just finished teaching an undergrad Intro to Quantum Information course, I can attest that the complex nature of amplitudes is needed only rarely—shockingly rarely, one might say—in quantum computing and information. Real amplitudes typically suffice. Teleportation, superdense coding, the Bell inequality, quantum money, quantum key distribution, the Deutsch-Jozsa and Bernstein-Vazirani and Simon and Grover algorithms, quantum error-correction: all of those and more can be fully explained without using a single i that’s not a summation index. (Shor’s factoring algorithm is an exception; it’s much more natural with complex amplitudes. But as the previous paragraph implied, their use is removable even there.)
It’s true that, if you look at even the simplest “real” examples of quantum systems—or as a software engineer might put it, at the application layers built on top of the quantum OS—then complex numbers are everywhere, in a way that seems impossible to remove. The Schrödinger equation, energy eigenstates, the position/momentum commutation relation, the state space of a spin-1/2 particle in 3-dimensional space: none of these make much sense without complex numbers (though it can be fun to try).
But from a sufficiently Olympian remove, it feels circular to use any of this as a “reason” for why quantum mechanics should’ve involved complex amplitudes in the first place. It’s like, once your OS provides a certain core functionality (in this case, complex numbers), it’d be surprising if the application layer didn’t exploit that functionality to the hilt—especially if we’re talking about fundamental physics, where we’d like to imagine that nothing is wasted or superfluous (hence Rabi’s famous question about the muon: “who ordered that?”).
But why should the quantum OS have provided complex-number functionality at all? Is it possible to answer that question purely in terms of the OS’s internal logic (i.e., abstract quantum information), making minimal reference to how the OS will eventually get used? Maybe not—but if so, then that itself would seem worthwhile to know.
If we stick to abstract quantum information language, then the most “obvious, elementary” argument for why amplitudes should be complex numbers is one that I spelled out in Quantum Computing Since Democritus, as well as my Is quantum mechanics an island in theoryspace? paper. Namely, it seems desirable to be able to implement a “fraction” of any unitary operation U: for example, some V such that $V^2=U$, or $V^3=U$. With complex numbers, this is trivial: we can simply diagonalize U, or use the Hamiltonian picture (i.e., take $e^{-iH/2}$ where $U=e^{-iH}$), both of which ultimately depend on the complex numbers being algebraically closed. Over the reals, by contrast, a 2×2 orthogonal matrix like $$ U = \left(\begin{array}[c]{cc}1 & 0\\0 & -1\end{array}\right)$$
has no 2×2 orthogonal square root, as follows immediately from its determinant being -1. If we want a square root of U (or rather, of something that acts like U on a subspace) while sticking to real numbers only, then we need to add another dimension, like so: $$ \left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & -1 & 0\\0 & 0&-1\end{array}\right)=\left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & 0 & 1\\0 & -1 & 0\end{array}\right) ^{2} $$
This is directly related to the fact that there’s no way for a Flatlander to “reflect herself” (i.e., switch her left and right sides while leaving everything else unchanged) by any continuous motion, unless she can lift off the plane and rotate herself through the third dimension. Similarly, for us to reflect ourselves would require rotating through a fourth dimension.
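Here’s a quick numerical illustration of both halves of this argument (my own sketch, not from the original text; plain numpy, with made-up variable names): diagonalization hands you a square root of any unitary over the complex numbers, while the determinant obstruction rules out any real orthogonal square root of diag(1,−1).

```python
import numpy as np

# Sketch: build V with V @ V == U by square-rooting U's eigenvalues.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)                    # a random 4x4 unitary

w, P = np.linalg.eig(U)                   # U = P diag(w) P^{-1}, |w_k| = 1
V = P @ np.diag(np.sqrt(w)) @ np.linalg.inv(P)
print(np.allclose(V @ V, U))              # True: a genuine "half" of U

# Over the reals this can fail: diag(1, -1) has determinant -1, but any
# real orthogonal V would force det(V @ V) = det(V)**2 = +1.
print(np.linalg.det(np.diag([1.0, -1.0])))  # -1.0: no real orthogonal root
```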
One could reasonably ask: is that it? Aren’t there any “deeper” reasons in quantum information for why amplitudes should be complex numbers?
Indeed, there are certain phenomena in quantum information that, slightly mysteriously, work out more elegantly if amplitudes are complex than if they’re real. (By “mysteriously,” I mean not that these phenomena can’t be 100% verified by explicit calculations, but simply that I don’t know of any deep principle by which the results of those calculations could’ve been predicted in advance.)
One famous example of such a phenomenon is due to Bill Wootters: if you take a uniformly random pure state in d dimensions, and then you measure it in an orthonormal basis, what will the probability distribution $(p_1,\ldots,p_d)$ over the d possible measurement outcomes look like? The answer, amazingly, is that you’ll get a uniformly random probability distribution: that is, a uniformly random point on the simplex defined by $p_i \geq 0$ and $p_1+\cdots+p_d=1$. This fact, which I’ve used in several papers, is closely related to Archimedes’ Hat-Box Theorem, beloved by friend-of-the-blog Greg Kuperberg. But here’s the kicker: it only works if amplitudes are complex numbers. If amplitudes are real, then the resulting distribution over distributions will be too bunched up near the corners of the probability simplex; if they’re quaternions, it will be too bunched up near the middle.
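A quick Monte Carlo check of this (an illustrative sketch of mine, not from Wootters’ paper; plain numpy, d=2 for simplicity): with complex amplitudes the outcome probability $p_1$ comes out uniform on [0,1] (variance 1/12), while with real amplitudes it bunches toward 0 and 1 (variance 1/8).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Haar-random complex qubit states: normalized complex Gaussian vectors.
z = rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2))
z /= np.linalg.norm(z, axis=1, keepdims=True)
p1_complex = np.abs(z[:, 0]) ** 2          # uniform on [0,1]

# Uniformly random *real* qubit states: normalized real Gaussian vectors.
x = rng.normal(size=(N, 2))
x /= np.linalg.norm(x, axis=1, keepdims=True)
p1_real = x[:, 0] ** 2                     # arcsine law, bunched at 0 and 1

print(p1_complex.var(), 1 / 12)            # both ~0.0833
print(p1_real.var(), 1 / 8)                # both ~0.125
```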
There’s an even more famous example of such a Goldilocks coincidence—one that’s been elevated, over the past two decades, to exalted titles like “the Axiom of Local Tomography.” Namely: suppose we have an unknown finite-dimensional mixed state ρ, shared by two players Alice and Bob. For example, ρ might be an EPR pair, or a correlated classical bit, or simply two qubits both in the state |0⟩. We imagine that Alice and Bob share many identical copies of ρ, so that they can learn more and more about it by measuring this copy in this basis, that copy in that basis, and so on.
We then ask: can ρ be fully determined from the joint statistics of product measurements—that is, measurements that Alice and Bob can apply separately and locally to their respective subsystems, with no communication between them needed? A good example here would be the set of measurements that arise in a Bell experiment—measurements that, despite being local, certify that Alice and Bob must share an entangled state.
If we asked the analogous question for classical probability distributions, the answer is clearly “yes.” That is, once you’ve specified the individual marginals, and you’ve also specified all the possible correlations among the players, you’ve fixed your distribution; there’s nothing further to specify.
For quantum mixed states, the answer again turns out to be yes, but only because amplitudes are complex numbers! In quantum mechanics over the reals, you could have a 2-qubit state like $$ \rho=\frac{1}{4}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1& 0 & 0 & 1\end{array}\right) ,$$
which clearly isn’t the maximally mixed state, yet which is indistinguishable from the maximally mixed state by any local measurement that can be specified using real numbers only. (Proof: exercise!)
In quantum mechanics over the quaternions, something even “worse” happens: namely, the tensor product of two Hermitian matrices need not be Hermitian. Alice’s measurement results might be described by the 2×2 quaternionic density matrix $$ \rho_{A}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -i\\i & 1\end{array}\right), $$
and Bob’s results might be described by the 2×2 quaternionic density matrix $$ \rho_{B}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -j\\j & 1\end{array}\right), $$
and yet there might not be (and in this case, isn’t) any 4×4 quaternionic density matrix corresponding to $\rho_A \otimes \rho_B$, which would explain both results separately.
What’s going on here? Why do the local measurement statistics underdetermine the global quantum state with real amplitudes, and overdetermine it with quaternionic amplitudes, being in one-to-one correspondence with it only when amplitudes are complex?
We can get some insight by looking at the number of independent real parameters needed to specify a d-dimensional Hermitian matrix. Over the complex numbers, the number is exactly $d^2$: we need 1 parameter for each of the d diagonal entries, and 2 (a real part and an imaginary part) for each of the $d(d-1)/2$ upper off-diagonal entries (the lower off-diagonal entries being determined by the upper ones). Over the real numbers, by contrast, “Hermitian matrices” are just real symmetric matrices, so the number of independent real parameters is only $d(d+1)/2$. And over the quaternions, the number is $d+4[d(d-1)/2] = 2d^2-d$.
Now, it turns out that the Goldilocks phenomenon that we saw above—with local measurement statistics determining a unique global quantum state when and only when amplitudes are complex numbers—ultimately boils down to the simple fact that $$ (d_A d_B)^2 = d_A^2 d_B^2, $$
but $$\frac{d_A d_B (d_A d_B + 1)}{2} > \frac{d_A (d_A + 1)}{2} \cdot \frac{d_B (d_B + 1)}{2},$$
and conversely $$ 2(d_A d_B)^2 - d_A d_B < \left(2d_A^2 - d_A\right)\left(2d_B^2 - d_B\right).$$
In other words, only with complex numbers does the number of real parameters needed to specify a “global” Hermitian operator, exactly match the product of the number of parameters needed to specify an operator on Alice’s subsystem, and the number of parameters needed to specify an operator on Bob’s. With real numbers it overcounts, and with quaternions it undercounts.
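The counting itself fits in a few lines (an illustrative sketch; the function names are my own):

```python
# Real parameters needed to specify a d-dimensional "Hermitian" operator.
def complex_params(d):    return d * d                 # complex Hermitian
def real_params(d):       return d * (d + 1) // 2      # real symmetric
def quaternion_params(d): return 2 * d * d - d         # quaternionic Hermitian

for dA, dB in [(2, 2), (2, 3), (3, 4)]:
    d = dA * dB
    print(complex_params(d) == complex_params(dA) * complex_params(dB),  # True
          real_params(d) > real_params(dA) * real_params(dB),            # True
          quaternion_params(d) < quaternion_params(dA) * quaternion_params(dB))  # True
```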
A major research goal in quantum foundations, since at least the early 2000s, has been to “derive” the formalism of QM purely from “intuitive-sounding, information-theoretic” postulates—analogous to how, in 1905, some guy whose name I forget derived the otherwise strange-looking Lorentz transformations purely from the assumption that the laws of physics (including a fixed, finite value for the speed of light) take the same form in every inertial frame. There have been some nontrivial successes of this program: most notably, the “axiomatic derivations” of QM due to Lucien Hardy and (more recently) Chiribella et al. Starting from axioms that sound suitably general and nontechnical (if sometimes unmotivated and weird), these derivations perform the impressive magic trick of deriving the full mathematical structure of QM: complex amplitudes, unitary transformations, tensor products, the Born rule, everything.
However, in every such derivation that I know of, some axiom needs to get introduced to capture “local tomography”: i.e., the “principle” that composite systems must be uniquely determined by the statistics of local measurements. And while this principle might sound vague and unobjectionable, to those in the business, it’s obvious what it’s going to be used for the second it’s introduced. Namely, it’s going to be used to rule out quantum mechanics over the real numbers, which would otherwise be a model for the axioms, and thus to “explain” why amplitudes have to be complex.
I confess that I was always dissatisfied with this. For I kept asking myself: would I have ever formulated the “Principle of Local Tomography” in the first place—or if someone else had proposed it, would I have ever accepted it as intuitive or natural—if I didn’t already know that QM over the complex numbers just happens to satisfy it? And I could never honestly answer “yes.” It always felt to me like a textbook example of drawing the target around where the arrow landed—i.e., of handpicking your axioms so that they yield a predetermined conclusion, which is then no more “explained” than it was at the beginning.
Two months ago, something changed for me: namely, I smacked into the “Principle of Local Tomography,” and its reliance on complex numbers, in my own research, when I hadn’t in any sense set out to look for it. This still doesn’t convince me that the principle is any sort of a-priori necessity. But it at least convinces me that it’s, you know, the sort of thing you can smack into when you’re not looking for it.
The aforementioned smacking occurred while I was writing up a small part of a huge paper with Guy Rothblum, about a new connection between so-called “gentle measurements” of quantum states (that is, measurements that don’t damage the states much), and the subfield of classical CS called differential privacy. That connection is a story in itself; here’s our paper and here are some PowerPoint slides.
Anyway, for the paper with Guy, it was of interest to know the following: suppose we have a two-outcome measurement E (let’s say, on n qubits), and suppose it accepts every product state with the same probability p. Must E then accept every entangled state with probability p as well? Or, a closely-related question: suppose we know E’s acceptance probabilities on every product state. Is that enough to determine its acceptance probabilities on all n-qubit states?
I’m embarrassed to admit that I dithered around with these questions, finding complicated proofs for special cases, before I finally stumbled on the one-paragraph, obvious-in-retrospect “Proof from the Book” that slays them in complete generality.
Here it is: if E accepts every product state with probability p, then clearly it accepts every separable mixed state (i.e., every convex combination of product states) with the same probability p. Now, a well-known result of Braunstein et al., from 1998, states that (surprisingly enough) the separable mixed states have nonzero density within the set of all mixed states, in any given finite dimension. Also, the probability that E accepts ρ can be written as f(ρ)=Tr(Eρ), which is linear in the entries of ρ. OK, but a linear function that’s determined on a subset of nonzero density is determined everywhere. And in particular, if f is constant on that subset then it’s constant everywhere, QED.
But what does any of this have to do with why amplitudes are complex numbers? Well, it turns out that the 1998 Braunstein et al. result, which was the linchpin of the above argument, only works in complex QM, not in real QM. We can see its failure in real QM by simply counting parameters, similarly to what we did before. An n-qubit density matrix requires $4^n$ real parameters to specify (OK, $4^n-1$, if we demand that the trace is 1). Even if we restrict to n-qubit density matrices with real entries only, we still need $2^n(2^n+1)/2$ parameters. By contrast, it’s not hard to show that an n-qubit real separable density matrix can be specified using only $3^n$ real parameters—and indeed, that any such density matrix lies in a $3^n$-dimensional subspace of the full $2^n(2^n+1)/2$-dimensional space of $2^n \times 2^n$ symmetric matrices. (This is simply the subspace spanned by all possible tensor products of n Pauli I, X, and Z matrices—excluding the Y matrix, which is the one that involves imaginary numbers.)
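Here’s an illustrative numpy sketch of that last parenthetical (my own, not from the paper with Guy): expand a random real 2-qubit product state in the Pauli basis and watch every coefficient involving Y vanish.

```python
import numpy as np

I = np.eye(2); X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]]); Z = np.diag([1., -1.])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

rng = np.random.default_rng(5)
def real_qubit():
    v = rng.normal(size=2); v /= np.linalg.norm(v)
    return np.outer(v, v)                  # real symmetric, rank 1

rho = np.kron(real_qubit(), real_qubit())  # real 2-qubit product state
for a in paulis:
    for b in paulis:
        coeff = np.trace(rho @ np.kron(paulis[a], paulis[b])) / 4
        if 'Y' in a + b:
            assert abs(coeff) < 1e-12      # Y terms never appear
print("real product states live in the span of I, X, Z tensor products")
```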
But it’s not only the Braunstein et al. result that fails in real QM: the fact that I wanted for my paper with Guy fails as well. As a counterexample, consider the 2-qubit measurement that accepts the state ρ with probability Tr(Eρ), where $$ E=\frac{1}{2}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1 & 0 & 0 & 1\end{array}\right).$$
I invite you to check that this measurement, which we specified using a real matrix, accepts every product state (a|0⟩+b|1⟩)(c|0⟩+d|1⟩), where a,b,c,d are real, with the same probability, namely 1/2—just like the “measurement” that simply returns a coin flip without even looking at the state at all. And yet the measurement can clearly be nontrivial on entangled states: for example, it always rejects $$\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}},$$ and it always accepts $$ \frac{\left|00\right\rangle-\left|11\right\rangle}{\sqrt{2}}.$$
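If you’d rather have a computer do the checking, here’s a little NumPy sketch (my own, purely illustrative) that evaluates Tr(Eρ) on random real product states and on the two Bell states:

import numpy as np

E = 0.5 * np.array([[ 1, 0, 0, -1],
                    [ 0, 1, 1,  0],
                    [ 0, 1, 1,  0],
                    [-1, 0, 0,  1]], dtype=float)

rng = np.random.default_rng(0)
for _ in range(5):
    u = rng.standard_normal(2); u /= np.linalg.norm(u)   # random real qubit states
    v = rng.standard_normal(2); v /= np.linalg.norm(v)
    psi = np.kron(u, v)                                  # real product state
    print(psi @ E @ psi)                                 # always 0.5

bell_plus  = np.array([1, 0, 0,  1]) / np.sqrt(2)        # (|00>+|11>)/sqrt(2)
bell_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)        # (|00>-|11>)/sqrt(2)
print(bell_plus  @ E @ bell_plus)                        # 0.0: always rejected
print(bell_minus @ E @ bell_minus)                       # 1.0: always accepted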
Is it a coincidence that we used exactly the same 4×4 matrix (up to scaling) to produce a counterexample to the real-QM version of Local Tomography, and also to the real-QM version of the property I wanted for the paper with Guy? Is anything ever a coincidence in this sort of discussion?
I claim that, looked at the right way, Local Tomography and the property I wanted are the same property, their truth in complex QM is the same truth, and their falsehood in real QM is the same falsehood. Why? Simply because Tr(Eρ), the probability that the measurement E accepts the mixed state ρ, is a function of two Hermitian matrices E and ρ (both of which can be either “product” or “entangled”), and—crucially—is symmetric under the interchange of E and ρ.
Now it’s time for another confession. We’ve identified an elegant property of quantum mechanics that’s true but only because amplitudes are complex numbers: namely, if you know the probability that your quantum circuit accepts every product state, then you also know the probability that it accepts an arbitrary state. Yet, despite its elegance, this property turns out to be nearly useless for “real-world applications” in quantum information and computing. The reason for the uselessness is that, for the property to kick in, you really do need to know the probabilities on product states almost exactly—meaning (say) to 1/exp(n) accuracy for an n-qubit state.
Once again a simple example illustrates the point. Suppose n is even, and suppose our measurement simply projects the n-qubit state onto a tensor product of n/2 Bell pairs. Clearly, this measurement accepts every n-qubit product state with exponentially small probability, even as it accepts the entangled state
$$\left(\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}}\right)^{\otimes n/2}$$
with probability 1. But this implies that noticing the nontriviality on entangled states would require knowing the acceptance probabilities on product states to exponential accuracy.
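To see the exponential smallness concretely, here’s a tiny numerical sketch (my own illustration; the random search is just a cheap stand-in for a proper maximization). The acceptance probability of a product state is a product of n/2 single-pair overlaps, each at most 1/2, so it never exceeds 2^{-n/2}:

import numpy as np

rng = np.random.default_rng(1)
n = 8                                        # number of qubits (n even)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
target = bell
for _ in range(n // 2 - 1):
    target = np.kron(target, bell)           # tensor product of n/2 Bell pairs

best = 0.0
for _ in range(10_000):
    psi = np.array([1.0 + 0j])
    for _ in range(n):                       # random n-qubit product state
        v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
        psi = np.kron(psi, v / np.linalg.norm(v))
    best = max(best, abs(np.vdot(target, psi)) ** 2)

print(best, 2.0 ** (-n / 2))                 # best stays at or below 2^{-n/2} = 0.0625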
In a sense, then, I come back full circle to my original puzzlement: why should Local Tomography, or (alternatively) the-determination-of-a-circuit’s-behavior-on-arbitrary-states-from-its-behavior-on-product-states, have been important principles for Nature’s laws to satisfy? Especially given that, in practice, the exponential accuracy required makes it difficult or impossible to exploit these principles anyway? How could we have known a-priori that these principles would be important—if indeed they are important, and are not just mathematical spandrels?
But, while I remain less than 100% satisfied about “why the complex numbers? why not just the reals?,” there’s one conclusion that my recent circling-back to these questions has made me fully confident about. Namely: quantum mechanics over the quaternions is a flaming garbage fire, which would’ve been rejected at an extremely early stage of God and the angels’ deliberations about how to construct our universe.
In the literature, when the question of “why not quaternionic amplitudes?” is discussed at all, you’ll typically read things about how the parameter-counting doesn’t quite work out (just like it doesn’t for real QM), or how the tensor product of quaternionic Hermitian matrices need not be Hermitian. In this paper by McKague, you’ll read that the CHSH game is winnable with probability 1 in quaternionic QM, while in this paper by Fernandez and Schneeberger, you’ll read that the non-commutativity of the quaternions introduces an order-dependence even for spacelike-separated operations.
But none of that does justice to the enormity of the problem. To put it bluntly: unless something clever is done to fix it, quaternionic QM allows superluminal signaling. This is easy to demonstrate: suppose Alice holds a qubit in the state |1⟩, while Bob holds a qubit in the state |+⟩ (yes, this will work even for unentangled states!) Also, let $$U=\left(\begin{array}[c]{cc}1 & 0\\0 & j\end{array}\right) ,~~~V=\left(\begin{array}[c]{cc}1 & 0\\0& i\end{array}\right).$$
We can calculate that, if Alice applies U to her qubit and then Bob applies V to his qubit, Bob will be left with the state $$\frac{j \left|0\right\rangle + k \left|1\right\rangle}{\sqrt{2}}.$$
By contrast, if Alice decided to apply U only after Bob applied V, Bob would be left with the state
$$ \frac{j \left|0\right\rangle - k \left|1\right\rangle}{\sqrt{2}}.$$
But Bob can distinguish these two states with certainty, for example by applying the unitary $$ \frac{1}{\sqrt{2}}\left(\begin{array}[c]{cc}j & k\\k & j\end{array}\right). $$
Therefore Alice communicated a bit to Bob.
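For anyone who wants to see the order-dependence with their own eyes, here’s a bare-hands sketch (mine; quaternions represented as 4-tuples (a,b,c,d) meaning a+bi+cj+dk, tracking only the two nonzero amplitudes and dropping normalization):

def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + b*i + c*j + d*k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

ONE, I, J = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)

# Joint state |1>_A |+>_B: equal amplitudes on |10> and |11>.
# Alice's U = diag(1, j) left-multiplies amplitudes whose A-qubit is |1>;
# Bob's   V = diag(1, i) left-multiplies amplitudes whose B-qubit is |1>.

# Alice first, then Bob:
c10, c11 = qmul(J, ONE), qmul(I, qmul(J, ONE))
print(c10, c11)   # (0,0,1,0), (0,0,0,1):  Bob holds (j|0> + k|1>)/sqrt(2)

# Bob first, then Alice:
c10, c11 = qmul(J, ONE), qmul(J, qmul(I, ONE))
print(c10, c11)   # (0,0,1,0), (0,0,0,-1): Bob holds (j|0> - k|1>)/sqrt(2)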
I’m aware that there’s a whole literature on quaternionic QM, including for example a book by Adler. Would anyone who knows that literature be kind enough to enlighten us on how it proposes to escape the signaling problem? Regardless of the answer, though, it seems worth knowing that the “naïve” version of quaternionic QM—i.e., the version that gets invoked in quantum information discussions like the ones I mentioned above—is just immediately blasted to smithereens by the signaling problem, without the need for any subtle considerations like the ones that differentiate real from complex QM.
Update (Dec. 20): In response to this post, Stephen Adler was kind enough to email me with further details about his quaternionic QM proposal, and to allow me to share them here. Briefly, Adler completely agrees that quaternionic QM inevitably leads to superluminal signaling—but in his proposal, the surprising and nontrivial part is that quaternionic QM would reduce to standard, complex QM at large distances. In particular, the strength of a superluminal signal would fall off exponentially with distance, quickly becoming negligible beyond the Planck or grand unification scales. Despite this, Adler says that he eventually abandoned his proposal for quaternionic QM, since he was unable to make specific particle physics ideas work out (but the quaternionic QM proposal then influenced his later work).
Unrelated Update (Dec. 18): Probably many of you have already seen it, and/or already know what it covers, but the NYT profile of Donald Knuth (entitled “The Yoda of Silicon Valley”) is enjoyable and nicely written.
The NP genie
Tuesday, December 11th, 2018
Hi from the Q2B conference!
Stepping into the quantum world
Part 1 of the quest for the hydrogen molecule.
There is no way around it: we are going to do quantum mechanics. So let’s start with an introduction to establish a starting point and get ourselves familiarized with the general ideas. After that, we are going to start calculating!
There are several ways you can start building a theory of quantum mechanics. Basically these are the main options:
1. Schrödinger equation
2. Hilbert space states
3. Path integrals
This is also the order by which they were discovered, and of course it also became the order in which quantum mechanics is usually taught. Following the exact footsteps of previous physicists is a great way not to make progress, since you will build the same concepts and misunderstandings in your mind as the people before you. We will try to choose our own path, where possible.
So, let’s compare them.
1. The normal procedure would be to start with the Schrödinger equation and then go calculate stuff. Which is fine, but it is also not very illuminating. Where does this equation come from? What does it tell us about the world, really? It does have the advantage that you can actually calculate things, so we will have to use it at some point. It is not much use when you go to more advanced topics (relativistic quantum mechanics, quantum field theory).
2. Hilbert space states are nice but a little abstract in the beginning. They are good to develop first ideas and make you feel closer to “the truth”.
3. Path integrals are very cool, and quite useful for keeping things simple (relatively speaking) when you go to more advanced physics. They are mathematically fuzzy and strange. They are not simple to calculate with, and a bit frightening. We can take a look later at whether it is possible to calculate the hydrogen atom with path integrals, because I really don’t know.
So let’s start with the second idea, and focus on quantum states. We will learn to reason from the abstract to the concrete and we will also learn that the world is a bit more… orthogonal (?) than what we think it is.
A state is a description of a situation. We will use this notation to write down a state: \(|\rm{state}\rangle\). Furthermore the states can be added and subtracted, and they can be multiplied by a number. Don’t worry if this feels a little abstract, it’s supposed to be that way. We don’t make any assumption about the underlying structure of such a state.
Let’s start with an example. If I am feeling happy, I could describe my state as \(|\rm{happy}\rangle\).
Since states can be multiplied with -1, this is another state: \(-|\rm{happy}\rangle\).
Question: which of these states is happier? Answer: they are both equally happy. An unhappy state would be an entirely different one: \(|\rm{unhappy}\rangle\).
What if I am both happy and tired, how can I describe that? We could write it down like this: \(|\psi\rangle = |\rm{happy}\rangle + |\rm{tired}\rangle\),
where I have given the combined state a name “\(|\psi\rangle\)“. The simple rule is: all states can be added. So that means we can also have the state: \(|\rm{happy}\rangle - |\rm{tired}\rangle\).
Really? It looks confusing, so we must be doing quantum mechanics!
Next time, we are going to derive the Schrödinger equation, although it is generally agreed that this is not possible.
The quest to understand the hydrogen molecule
I admit that I don’t really understand the hydrogen molecule.
It is the simplest molecule but it seems so complicated to have a feeling for how it really works, or to explain it well. Wouldn’t it be great to understand it thoroughly? And with it all the physical laws that make it the way it is? It would give a great view on the way things really are, to grasp the underlying simplicity that is so strange to us.
Perhaps we should go on a quest. A journey with many steps, and at each one we learn something that is real and that is true. It will be a difficult quest. We may not even make it. Not all of us will make it. And we may take wrong turns or have to come back to a point where we were before. We may take more difficult steps where we could have taken a simpler path. Who knows where we will end up. But it will be quite educational. If you can follow the path. We’ll see.
Save money by driving
Do you enjoy driving? Do you care about pollution? Do you like money? If your answer to one of these questions is yes, the following may be of interest to you.
Last summer it was quiet on the road during my daily commute. This enabled me to do a little experiment that I have been wanting to do. I know my car uses less fuel when driving slower, but how much less?
To test this, every day I would drive 100 km/h on the highway in the morning and 120 km/h in the evening when coming back, or vice versa, chosen more or less randomly. Every time I recorded fuel consumption for my Fiat 500, travel time, and temperature.
This gave me some data points that I could analyze. Now to be fair, I have only 13 data points, so it is not all very scientific. But citizen science is mostly about the fun, right?
Here are the data points I recorded, plotted against temperature:
Fuel consumption versus temperature
Now be careful, the fuel consumption is for the whole trip, while the change in driving speed was only applied to the highway part. Let’s see if the trip time correlates with the speed.
Time vs inverse speed
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)     21.143      3.391   6.235 6.39e-05 ***
inverse_speed   40.714      6.098   6.677 3.48e-05 ***

Residual standard error: 1.096 on 11 degrees of freedom
Multiple R-squared: 0.8021, Adjusted R-squared: 0.7841
F-statistic: 44.58 on 1 and 11 DF, p-value: 3.483e-05
(sorry for being confusing with the inverse speed, but it is the physically correct quantity to use). That correlation is certainly significant, but there is some variation. Probably this is due to random events, like traffic lights or small traffic jams. The fitted coefficient is 40.714 km, meaning that effectively 40.7 km of the 60 km drive was on highways.
The question arises whether it is better to use the correlation of fuel consumption with speed, or with travel time. Because of the variations mentioned above, I actually find a more significant correlation with speed. I will use this to calculate how much money you save when driving slower.
Here are the correlations with all parameters that seem relevant:
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  2.055750   0.655437   3.136 0.011996 *
speed        0.028449   0.004505   6.316 0.000138 ***
temperature -0.028659   0.027249  -1.052 0.320333
direction    0.290119   0.144276   2.011 0.075223 .

Residual standard error: 0.1496 on 9 degrees of freedom
Multiple R-squared: 0.8734, Adjusted R-squared: 0.8312
F-statistic: 20.7 on 3 and 9 DF, p-value: 0.0002236
All correlations for fuel consumption
Indeed, the correlation with speed is very significant. There is also a correlation with direction (morning or afternoon drive), probably due to typical wind conditions. The correlation with temperature is not very significant, but in my experience it really becomes significant when you compare summer and winter driving conditions.
The speed / fuel consumption coefficient is 0.028449, or, in other words, by driving 20 km/h slower I save 0.56898 liter per 100 km. But I assumed that the savings were only due to the highway part of our trip. So I should calculate the absolute numbers: the full trip is 59.9 km, so the saving is 0.3408 liter. For just the highway part, this is 0.8374 liter per 100 km.
The average price of Euro95 right now is 1.600 euro/liter. This means that I save 1.34 euro per 100 km. When driving 120 km/h, driving 100 km takes 50 minutes, but at 100 km/h, it takes 60 minutes, a difference of 10 minutes.
Of course, I assume here that all the correlations are linear, but I know that they are not. In fact, if you drive faster, say 130 km/h or 140 km/h, your fuel consumption will increase more dramatically, and the savings will be bigger.
By driving slower on the highway, you can save 1.34 euro, but you arrive 10 minutes later. This means you save 8 euro by driving for one hour. Maybe not enough to earn a living, but if you have the time: relax and drive slower!
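If you want to redo the arithmetic yourself, here is a minimal sketch (just my recomputation in Python; the original analysis was done with the R script linked below):

coef_fuel_per_speed = 0.028449   # L/100km per km/h, from the fit above
dv = 20.0                        # speed reduction, km/h
trip_km = 59.9
highway_km = 40.7
fuel_price = 1.600               # euro/L of Euro95

saving_per_100km_trip = coef_fuel_per_speed * dv               # ~0.569 L/100 km
saving_per_trip = saving_per_100km_trip * trip_km / 100        # ~0.341 L
saving_highway_per_100km = saving_per_trip / highway_km * 100  # ~0.837 L/100 km

euro_per_100km = saving_highway_per_100km * fuel_price         # ~1.34 euro
extra_minutes = 100 / 100 * 60 - 100 / 120 * 60                # 10 min per 100 km
print(euro_per_100km, extra_minutes, euro_per_100km / (extra_minutes / 60))
# ~1.34 euro saved, 10 extra minutes -> ~8 euro per hour of slower driving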
If you are interested, you can get the data and the R script that I used.
LOG#114. Bohr’s legacy (II).
Dedicated to Niels Bohr
and his atomic model
2nd part: Electron shells,
Quantum Mechanics
and The Periodic Table
Niels Bohr (1923) was the first to propose that the periodicity in the properties of the chemical elements might be explained by the electronic structure of the atom. In fact, his early proposals were based on his own “toy-model” (Bohr atom) for the hydrogen atom in which the electron shells were orbits at a fixed distance from the nucleus. Bohr’s original configurations would seem strange to a present-day chemist: the sulfur atom was given a shell structure of (2,4,4,6) instead of 1s^22s^22p^63s^23p^4, the right structure being (2,8,6).
The following year, E.C. Stoner incorporated Sommerfeld’s corrections into the electron configuration rules, thus introducing the third quantum number into the description of electron shells, and this correctly predicted the shell structure of sulfur to be the now celebrated (2,8,6). However, neither Bohr’s system nor Stoner’s could correctly describe the changes in atomic spectra in a magnetic field (known as the Zeeman effect). We had to wait for the complete Quantum Mechanics formalism to arise in order to give a description of this atomic phenomenon and many others (like the Stark effect, the splitting of spectra due to an electric field).
Bohr was well aware of all this stuff. Indeed, he had written to his friend Wolfgang Pauli to ask for his help in saving quantum theory (the system now known as “old quantum theory”). Pauli realized that the Zeeman effect could be due only to the outermost electrons of the atom, and was able to reproduce Stoner’s shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his famous exclusion principle (for fermions like the electrons themselves) around 1925. He said:
It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k [l], j [ml] and m [ms].
The next step was the Schrödinger equation. First published by E. Schrödinger in 1926, it gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom: his solution yields the (quantum mechanical) atomic orbitals which are shown today in textbooks of chemistry (and above). The careful study of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung’s rule, 1936) for the order in which atomic orbitals are filled with electrons. Madelung’s rule is generally written as a formal sketch (picture):
Shells and subshells versus orbitals
In the picture of the atom given by Quantum Mechanics, the notion of trajectory loses its meaning. The description of electrons in atoms is given by “orbitals”. Instead of orbits, orbitals arise as the zones where the probability of finding an electron is “maximum”. The classical world seems to vanish into the quantum realm. However, the electron configuration was first conceived of under the Bohr model of the (hydrogen) atom, and it is still common to speak of shells and subshells (imagine an onion!!!) despite the advances in understanding of the quantum-mechanical nature of electrons (both wave and particle, due to the de Broglie hypothesis). Any particle (e.g. an electron) does have wave and particle features. The de Broglie hypothesis says that to any particle with linear momentum p=mv corresponds a wavelength (or de Broglie wavelength) given by

\lambda=\dfrac{h}{p}=\dfrac{h}{mv}
Remark: this formula can be easily generalized to the relativistic domain by a simple shift from the classical momentum to the relativistic momentum P=m\gamma v, so
\lambda =\dfrac{h\sqrt{1-\beta^2}}{mv} with \beta=v/c
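As a quick order-of-magnitude check (my own sketch, with rounded constants), the de Broglie wavelength of an electron moving at 1% of the speed of light already comes out at atomic scales:

h   = 6.626e-34   # Planck constant, J s
m_e = 9.109e-31   # electron mass, kg
c   = 2.998e8     # speed of light, m/s

v = 0.01 * c
beta = v / c
lam = h * (1 - beta**2) ** 0.5 / (m_e * v)   # relativistic de Broglie formula above
print(lam)   # ~2.4e-10 m, i.e. a couple of Angstroms: atomic-scale wavelength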
An electron shell is the set of energetically allowed states that electrons may occupy which share the same principal quantum number n (the number before the letter in the orbital label), and which gives the energy of the shell (or the orbital in the language of QM). An atom’s nth electron shell can accommodate 2n^2 electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, and the third shell 18 electrons, the fourth 32, the fifth 50, the sixth 72, the seventh 98, the eighth 128, the ninth 162, the tenth 200, the eleventh 242, the twelfth 288 and so on. This sequence of numbers is well known.
In fact, I have to be more precise with the term “magic number”. Magic number (atomic or even nuclear physics), in the shell models of both atomic and nuclear structure, IS any of a series of numbers that connote stable structure.
The magic numbers for atoms are 2, 10, 18, 36, 54, 86, 118, 168, 218, 290, 362,… They correspond to the total number of electrons in filled electron shells (having ns^2np^6 as the outer electron configuration). Electrons within a shell have very similar energies and are at similar distances from the nucleus, i.e., these are the inert gases!
The factor of two above arises because the allowed states are doubled due to the electron spin —each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin +1/2 (usually noted by an up-arrow) and one with a spin −1/2 (with a down-arrow).
An atomic subshell is the set of states defined by a common secondary quantum number, also called the azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the spectroscopic labels s, p, d, and f, respectively. The maximum number of electrons which can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell. Therefore, shells “close” after the addition of 2, 8, 18, 32, 50, 72, … electrons. That is, atomic shells close after we reach ns^2np^6, with n>1, i.e., shells close after reaching the inert gas electron configuration.
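A trivial sketch (mine) showing that the subshell capacities 2(2ℓ+1) within a shell add up to the shell capacity 2n^2:

for n in range(1, 6):
    subshell_capacities = [2 * (2 * l + 1) for l in range(n)]  # l = 0, ..., n-1
    print(n, subshell_capacities, sum(subshell_capacities), 2 * n**2)
# e.g. n=4 -> [2, 6, 10, 14], total 32 = 2*4^2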
The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics, in particular the Pauli exclusion principle: no two electrons in the same atom can have the same values of the four quantum numbers stated above. The energy associated with an electron is that of its orbital. The energy of any electron configuration is often approximated as the sum of the energy of each electron, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground (a.k.a. fundamental) state.
Aufbau principle and Madelung rule
The Aufbau principle (from the German word Aufbau, “building up, construction”) was an important part of Bohr’s original concept of electron configuration. It may be stated as:
a maximum of two electrons are put into orbitals in the order of increasing orbital energy: the lowest-energy orbitals are filled before electrons are placed in higher-energy orbitals.
The approximate order of filling of atomic orbitals follows the arrows of the sketch given above, from 1s to 7p. After 7p the order includes orbitals outside the range of the diagram, starting with 8s.
The principle works very well (for the ground states of the atoms) for the first 18 elements, then decreasingly well for the following 100 elements. The modern form of the Aufbau principle describes an order of orbital energies given by Madelung’s rule (also referred to as Klechkowski’s rule). This rule was first stated by Charles Janet in 1929, rediscovered by E. Madelung in 1936, and later given a theoretical justification by V.M. Klechkowski. In modern words, it states that:
A) Orbitals are filled in the order of increasing n+l.
B) Where two orbitals have the same value of n+l, they are filled in order of increasing n.
This gives the following order for filling the orbitals:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, and 9s)
In this list the orbitals in parentheses are not occupied in the ground state of the heaviest atom now known (circa 2013, July), ununoctium (Uuo), an atom with Z=118 protons in its nucleus and thus, 118 electrons in its ground state.
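Here is a small sketch (my own illustration) that generates the Madelung filling order directly from rules (A) and (B), and then recovers the inert-gas “magic numbers” quoted above by filling the subshells and recording each ns^2np^6 closure:

letters = "spdfghik"   # spectroscopic labels for l = 0, 1, 2, ...

# Rule (A): increasing n+l; rule (B): ties broken by increasing n.
subshells = sorted(((n, l) for n in range(1, 10) for l in range(min(n, 8))),
                   key=lambda nl: (nl[0] + nl[1], nl[0]))

print(", ".join(f"{n}{letters[l]}" for n, l in subshells[:25]))
# 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, 8s, 5g, ...

total, closures = 0, []
for n, l in subshells:
    total += 2 * (2 * l + 1)                 # subshell capacity
    if (n, l) == (1, 0) or l == 1:           # helium, or a filled p subshell
        closures.append(total)
print(closures)   # [2, 10, 18, 36, 54, 86, 118, 168, 218]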
The Aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the nuclear shell model. The nuclear shell model predicts the magic numbers at Z,N=2, 8, 20, 28, 50, 82, 126 (and Z,N=184 and 258 for spherical symmetry, but this does not seem to be the case for “deformed” nuclei at high values of Z and N).
Shortcomings of the Aufbau principle
The Aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; neither of these is true (although they are approximately true enough for the principle to be useful). It considers atomic orbitals as “boxes” of fixed energy into which can be placed two electrons and no more. However, the energy of an electron “in” an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no “one-electron solutions” for systems of more than one electron, only a set of many-electron solutions which cannot be calculated exactly. The fact that the Aufbau principle is based on an approximation can be seen from the fact that an almost-fixed filling order exists at all: within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogenic (hydrogen-like) atom, which only has one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy, to a very good approximation in the absence of external electromagnetic fields. (However, in a real hydrogen atom, the energy levels are slightly split by the magnetic field of the nucleus, and by quantum electrodynamic effects like the Lamb shift.)
Exceptions to Madelung’s rule
There are several more exceptions to Madelung’s rule among the heavier elements, and it is more and more difficult to resort to simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations, which are an approximate method for taking account of the effect of the other electrons on orbital energies. For the heavier elements, it is also necessary to take account of the effects of Special Relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals. The electron-shell configuration of elements beyond rutherfordium (Z=104) has not yet been empirically verified, but they are expected to follow Madelung’s rule without exceptions until the element Ubn (Unbinillium, Z=120). Beyond that number, there is no accepted viewpoint (see below my discussion of Pykko’s model for the extended periodic table).
From the Greeks to Mendeleiev and Seaborg
The idea of atoms and their existence, from the Greeks to Mendeleiev, has undergone a long historical evolution. In this section, I am going to give you a visual tour from the “ancient elements” to their current classifications via Periodic Tables (Mendeleiev’s being the first one!).
Some early elements and periodic tables (images): the ancient Greek elements, the Chinese elements, the Mendeleiev-as-Zeus periodic table monument, the elements known to the first humans, and the elements known circa 1800.
Just for fun, the Feng Shui elements (Chinese metaphysics) are shown too…
And you can also find today apps/games with elements as “key” pieces…Gamelogy! LOL…
Turning back to Chemistry…Or Alchemy (Modern Chemistry is an evolution from Alchemy in which we take the scientific method seriously, don’t forget it!): the elements in astrology and the classical five elements.
After the chemical revolution in the 18th and 19th century, we also have these pictures (note the evolution of the chemical elements, their geometry and classification):
(Images: Dalton’s table of 1808, Lavoisier’s list, old element symbols, Newlands’ octave-law table of 1865, the periodic tables of Bayley, Meyer and Mendeleiev, atomic masses circa 1850, Mendeleiev’s predictions and their context, Rang’s periodic table, metalloids versus metals, and the periodic features of the chemical elements.)
Some interesting pictures about “new tables” and the geometries of some periodic tables and their “make-up” process: a spiral periodic table, Schaltenbrand’s periodic table, and a Mayan periodic table.
The following ones are just for fun (XD): periodic tables of geek TV series and movies, 3D, elliptical, cylindrical and spherical tables, an infinite periodic table, tables showing electron shells, Stowe’s periodic table, Lavoisier’s complete list, and other variations including the superactinides.
Extended periodic tables
and the island of stability
Seaborg conjectured that the 8th period elements were an interesting “laboratory” to test quantum mechanical and physical principles from relativity and quantum physics. He claimed that it could be possible that, around some (high) values of Z and N (122, 126 in Z, and about 184 in N), some superheavy elements could be stable enough to be produced. This topic is still controversial for the same reasons I mentioned in the previous post: the finite size of the nucleus, relativistic effects that deform the nuclei, and likely some novel effects related to nonperturbative issues (like pair creation in strong fields, as Greiner et al. have remarked) should be taken into account. Anyway, the existence of the so-called island of stability is a hot topic in both theoretical chemistry and experimental chemistry (at the level of the synthesis of superheavy elements). It is also relevant for (quantum and relativistic) physics. However, we will have to wait to be able to find those elements in laboratories or even in outer space!
Some extended periodic tables were proposed by theoretical chemists like Seaborg and many others: the island-of-stability hypothesis, a “galactic” periodic table, a circular extended table, and extended tables with an h-block.
Pykko’s model and beyond
The Finnish chemist Pekka Pykko has produced a beautiful modern extended periodic table from his numerical calculations. He has found that Madelung’s rule is modified, and that the likely correct periodic table including superheavy elements (with Z less than or equal to 172) should be something like this:
You can visit P. Pykko’s homepage here: http://www.chem.helsinki.fi/~pyykko/ I urge you to do it. He has really cool materials! The abstract of his periodic table paper deserves to be inserted here:
Some of his interesting results are the modified electron configurations with respect to the normal Madelung’s rule (as I remarked above), computed up to elements E140, E149 and E168:
Indeed, Pykko is able to calculate some “simple” and “stable” molecules made of superheavy elements!
It is interesting to compare Pykko’s table with other extended periodic tables out there.
His extended periodic table paper can be downloaded here,
and you can also watch a periodic table video by the most famous chemist on YouTube talking about it here.
We have already seen the feynmanium in the last post, but what is its electron configuration? It is not clear, since we have at most theoretical predictions: NO atoms of E137 have been produced yet. Thus, feynmanium’s electron configuration is assumed to be \left[Ms\right] 5g^{17}8s^2, but due to the smearing caused by the small separation between the orbitals, the electron configuration is believed to be \left[Ms\right] 5g^{11}6f^{3}7d^18s^28p^2. The hyperphysics web page also discusses this problem. It says:
“(…)Dirac showed that there are no stable electron orbits for more than 137 electrons, therefore the last chemical element on the periodic table will be untriseptium (137Uts) also known informally as feynmanium _{137}Fy. It’s full electron configuration would be something like …
1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6 7s2 5f14 6d10 7p6 8s2 5g17
or is it …
1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6 7s2 5f14 6d10 7p6 8s1 5g18 ?(…)”
What is the right electron configuration? Without a synthesized element, we do not know…
Even more, you can have fun with this page and references therein http://planetstar.wikia.com/wiki/Feynmanium
There, you can even find that there are proposals for almost every superheavy element (SHE) name! Let me remark that today, circa 2013, 10th July, we have officially named every chemical element up to Z=112 (Copernicium), plus Z=114 (Flerovium) and Z=116 (Livermorium). Feynmanium, neutronium, and any other superheavy element name is not official. The IUPAC recommends using a systematic name until the discoverers have proposed a name and it is officially accepted. Thus, feynmanium should be called untriseptium until we can produce it!
More Periodic Table limits? What about a 0th element with Z=0? Sometimes it is called “neutronium” or “neutrium”. More details here
Of course it is a speculative idea or concept. Indeed, in Japanese culture, the void is the 5th element! It is closer to the picture we get from particle physics today, in which “elementary particles” are excitations of some vacuum for certain (spinorial, scalar, tensor,…) fields. We could see the “voidium” (no, it is not the dalekenium! LOL) as the fundamental “element” for particle physics. And yet, only about 5% of the known Universe is “radiation” and “known elements”. What a shock!
(Images: the weight of the known elements in our current cosmological models, quintessence elements and cosmic destiny, and a final comparison of basic elements past and now.)
Just for fun, again, the anime Saint Seiya Omega uses 7 fundamental “elements” (yes, I am a geek, I recognize it!)
Seaborg’s original proposal was something like the next table (including the superactinides).
And you see, it is quite different from the astrological first elements from myths and superstitions. And finally, let me show you the presently known elementary particles again, the smallest “elements” from which matter is believed to be made (till now, of course):
Remark: Chemistry is about atoms. High Energy Physics is about elementary particles.
Final questions:
1st. What is your favorite (theoretical or known to exist) chemical element?
2nd. What is your favorite elementary particle (theoretical or known to exist in the Standard Model)?
May The Chemical Elements and the Elementary Particles be with YOU!
LOG#113. Bohr’s legacy (I).
Dedicated to Niels Bohr
and his atomic model
1st part: A centenary model
This is a blog entry devoted to the memory of a great scientist, N. Bohr, one of the greatest master minds during the 20th century, one of the fathers of the current Quantum model of atoms and molecules.
One century ago, Bohr was the pioneer of the introduction of the “quantization” rules into the atomic realm, 8 years after the epic Annus Mirabilis of A. Einstein (1905). Please, don’t forget that Einstein himself was the first physicist to take Planck’s hypothesis into “serious” physics problems, explaining the photoelectric effect in a simple way with the aid of “quanta of light” (a.k.a. photons!). Therefore, it is not correct to assert that N. Bohr was the “first” quantum physicist. Indeed, Einstein or Planck were the first. That said, Bohr was the first to apply the quantum hypothesis to the atomic domain, changing forever the naive picture of atoms coming from “classical” physics. I decided that this year I would be writing something to honour the centenary of his atomic model (for the hydrogen atom).
I wish you will enjoy the next (short) thread…
Atomic mysteries
When I was young, and I was shown the Periodic Table (the ordered list or catalogue of elements) for the first time, I wondered how many elements there could be in Nature. Are they 103? 118? Maybe 212? 1000? 10^{23}? Or 10^{100}? \infty, Infinity?
We must remember what an atom is… Atom is a Greek word, ἄτομος, meaning “with no parts”. That is, an atom is (at least from its original idea) something that cannot be broken into smaller parts. Nice concept, isn’t it?
Greek philosophers wondered millennia ago if there is a limit to the divisibility of matter, and if there is an “ultimate principle” or “arche” ruling the whole Universe (remarkably, this is not very different from the questions that theoretical physicists are trying to solve even now, or in the future!). Different schools and ideas arose. I am not very interested today in discussing Philosophy (even when it is interesting in its own way), so let me simplify the general mainstream ideas of several thousands of years ago (!!!!):
1st. There is a well-defined ultimate “element”/”substance” and an ultimate “principle”. Matter is infinitely divisible. There are deep laws that govern the Universe and the physical Universe, in a cosmic harmony.
2nd. There is a well-defined ultimate “element”/”substance” and an ultimate “principle”. Matter is FINITELY divisible. There are deep laws that govern the Universe and the physical Universe, in a cosmic harmony.
3rd. There is no well-defined ultimate “element”/“substance” or ultimate principle. Chaos rules the Universe. Matter is infinitely divisible.
4th. There is no well-defined ultimate “element”/“substance” or ultimate principle. Chaos rules the Universe. Matter is finitely divisible.
Remark: Please, note the striking “similarity” with some of the current (yet unsolved) problems of Physics. The existence of a Theory Of Everything (TOE) is the analogue of the first-principle/fundamental-element quest of the ancient Greek philosophers or any other philosophy all over the world. S.W. Hawking himself provided in his A Brief History of Time the following (3!) alternative approaches:
1st. There is no TOE. There is only a chaotic pattern of regularities we call “physical laws”. Nature itself is ultimately chaotic, and the finite human mind cannot understand its ultimate description.
2nd. There is no TOE. There is only an increasing number of theories, more and more precise and/or more and more accurate, without any limit. As we are finite beings, we can only try to guess better and better approximations to the ultimate reality (out of our imagination), and the TOE cannot be reached in our whole lifetime, or even in our whole species’/civilization’s lifetime.
3rd. There is a well-defined TOE, with its own principles and consequences. We will find it if we are persistent enough and if we are clever enough. All the physical events could be derived from this theory. If we don’t find the “ultimate theory and its principles”, it is not because it is non-existent; it is only that we are not smart enough. Try harder (if you can…)!
If I added other (non-Greek) philosophies, I could create some other combinations, but, as I told you above, I am not going to tell you Philosophy here, at least not more than necessary.
As you probably know, the atomic idea was mainly defended by Leucippus and Democritus, based on previous ideas by Anaxagoras. It is quite likely that Anaxagoras himself learned them from India (or even from China), but that is quite speculative… Well, the key point of the atomic idea is that you cannot smash smaller and smaller bits of matter into smaller pieces forever. Somewhere, the process of breaking down the fundamental constituents of matter must end… But where? And mostly, how can we find an atom or “see” what an atom looks like? Obviously, the ancient Greeks had no idea of how to do that; even knowing the “ground idea” of what an atom is, they had no experimental device to search for them. Thus, the atomic idea was put into the freezer until the 18th and 19th century, when the advances in experimental (and theoretical) Chemistry revived the concept and the whole theory. But Nature had many surprises ready for us… Let me continue this a bit later…
In the 19th century, with the discovery of the ponderal laws of Chemistry, Dalton and other chemists were stunned. Finally, Dalton was the man who brought atomism back into “real” theoretical Science. But the existence of atoms remained controversial until the 20th century. Dalton concluded that there was a unique atom for each element, using Lavoisier’s definition of an element as a substance that could not be analyzed into something simpler. Thus, Dalton arrived at an important conclusion: every element is made of a single characteristic kind of atom.
The reality of atoms was a highly debated topic during all the 19th century. It is worth remarking that it was Einstein himself (yes, he… again) who went further and, with his studies of Brownian motion, established their physical existence. It was a brilliant contribution to this area, even when, in time, he turned against the (interpretation of) Quantum Mechanics… But that is a different story, not to be told today.
Dalton’s atoms or Dalton atomic model was very simple.
Atoms had no parts and thus, they were truly indivisible particles. However, the electrical studies of matter and the electromagnetic theory put this naive atomic model into doubt. After the discovery of cathode rays and of the electron by J.J. Thomson (1897) (no, it is not J.J. Abrams), it became clear that atoms were NOT indivisible after all! Surprising, isn’t it? It is! Chemical atoms are NOT indivisible. They do have PARTS.
Thomson’s model, or “plum pudding” model, came to the rescue… Dalton believed that atoms were solid spheres, but J.J. Thomson was forced (due to the electron’s existence) to elaborate a “more complex” atomic model. He suggested that atoms were a spherical “fluid” mass with positive charge, and that electrons were placed into that sphere as in a “plum pudding” cake. I have to admit that I was impressed by this model when I was 14… It seemed too ugly to be true, but anyway it has its virtues (it can explain the cathode ray experiment!).
The next big step was the Rutherford experiment! Thomson KNEW that electrons were smaller pieces inside the atom, but despite his efforts he could not find the positive particles (and you see there he had and pursued his own path, since he discovered the origin of the canal rays), even though they had to be there, since atoms are electrically neutral. However, clever people were already investigating radioactivity and atomic structure with other ideas… In 1911, E. Rutherford, with the aid of his assistants Geiger and Marsden, performed the celebrated gold foil experiment.
To his surprise (Rutherford’s), his assistants and collaborators provided a shocking set of results. To explain all the observations, the main consequences of Rutherford’s experiment were the next set of hypotheses:
1st. Atoms are mostly vacuum space.
2nd. Atoms have a dense zone of positive charge, much smaller than the whole atom. It is the atomic nucleus!
3rd. Nuclei had positive charge, and electrons negative charge.
He (Rutherford) did not know from the beginning how the charge was arranged and distributed into the atom. He had to improve the analysis and perform additional experiments in order to propose his “Rutherford” solar atomic model and to get an estimate of the nuclear size (about 1 fm or 10^{-15}m). In fact, years before him, the Japanese physicist Nagaoka had proposed a “saturnian” atomic model with a similar look. It was unstable, though, due to the electric repulsion of the electronic “rings” (previously there was even a “cubic” model of the atom, but it too was unsuccessful in explaining every atomic experiment), and it had been abandoned.
And this is the point where theory became “hard” again. Rutherford supposed that the electron orbits around nuclei were circular (or almost circular), and then electrons experienced centripetal forces due to the electrical forces of the nucleus. The classical electromagnetic theory said that any charged particle being accelerated (and you do have acceleration with a centripetal force) should emit electromagnetic waves, losing energy, and then electrons should fall onto the nuclei (indeed, the time of the fall was ridiculously tiny). We do not observe that, so something is wrong with our “classical” picture of atoms and radiation (it was also hinted at by the photoelectric effect and blackbody physics, so it was not too surprising, but it was challenging to find the rules and the “new mechanics” to explain the atomic stability of matter). Moreover, atomic spectra were known to be discrete (not continuous) since the 19th century as well. To find out the new dynamics and its principles became one of the outstanding issues in the theoretical (and experimental) community. The first scientist to determine a semiclassical but almost “quantum” and realistic atomic spectrum (for the simplest atom, hydrogen) was Niels Bohr. The Bohr model of the hydrogen atom is still explained at schools, not only due to its historical interest, but also due to the no less important fact that it provides right answers (indeed, Quantum Mechanics reproduces its features) for the simplest atom, and that its equations are useful and valid from a quantitative viewpoint (as I told you, Quantum Mechanics reproduces Bohr’s formulae). Of course, the Bohr model does not explain the Stark effect, the Zeeman effect, or the hyperfine structure of the hydrogen atom and some other “quantum/relativistic” important effects, but it is a really useful toy model and analytical machine to think about the challenges and limits of Quantum Mechanics of atoms and molecules. The Bohr model cannot be applied to helium and the other elements in the Periodic Table (their structure is described by Quantum Mechanics), so it can be very boring but, as we will see, it has many secrets and unexpected surprises in its core…
Bohr model for the hydrogen atom
Bohr model hypotheses/postulates:
1st. Electrons describe circular orbits around the proton (in the hydrogen atom). The centripetal force is provided by the electrostatic force of the proton.
2nd. Electrons, while in “stationary” orbits with a fixed energy, do NOT radiate electromagnetic waves (note that this postulate is against the classical theory of electromagnetism as it was known in the 19th century).
3rd. When a single electron passes from one energetic level to another, the energy transitions/energy differences satisfy the Planck law. That is, during level transitions, \Delta E=hf.
In summary, we have:
Firstly, we begin with the equality between the electron-proton electrostatic force and the centripetal force in the atom:
\begin{pmatrix}\mbox{Centripetal}\\ \mbox{Force}\end{pmatrix}=\begin{pmatrix}\mbox{Electron-proton}\\ \mbox{electric force}\end{pmatrix}
Mathematically speaking, this first postulate/ansatz requires that q_1=q_2=e, where e=1\mbox{.}602\cdot 10^{-19}C is the elementary electric charge of the electron (equal in absolute value to the proton charge) and m_e=9.11\cdot 10^{-31}kg is the electron mass:
F_c=\dfrac{m_ev^2}{R} and F_C=K_C\dfrac{q_1q_2}{R^2}=K_C\dfrac{e^2}{R^2} implies that
(1) \boxed{F_c=F_{el,C}}\leftrightarrow \boxed{\dfrac{m_ev^2}{R}=\dfrac{K_Ce^2}{R^2}}\leftrightarrow \boxed{v^2=\left(\dfrac{K_C}{m_e}\right)\left(\dfrac{e^2}{R}\right)}
Remark: Instead of having the electron mass, it would be more precise to use the “reduced” mass for this two-body problem. The reduced mass is, by definition,

\mu=\dfrac{m_em_p}{m_e+m_p}=\dfrac{m_e}{1+\left(\dfrac{m_e}{m_p}\right)}
However, it is easy to realize that the reduced mass is essentially the electron mass (since m_p\approx 1836m_e)
\mu=\dfrac{m_e}{1+\left(\dfrac{m_e}{m_p}\right)}\approx m_e(1-\dfrac{m_e}{m_p}+\ldots)=m_e+\mathcal{O} \left(\dfrac{m_e^2}{m_p}\right)
The second of Bohr’s great ideas was to quantize the angular momentum. Classically, angular momentum can take ANY value; Bohr’s great intuition suggested that it could only take values that are multiples of some fundamental constant, the Planck constant. In fact, assuming stationary orbits, the quantization rule provides
(2) \boxed{m_ev(2\pi R)=nh} or \boxed{L=m_evR=n\dfrac{h}{2\pi}=n\hbar} with \hbar=\dfrac{h}{2\pi} and n=1,2,3,\ldots,\infty a positive integer.
Remark: h=6\mbox{.}63\cdot 10^{-34}Js and \hbar=\dfrac{h}{2\pi}=1\mbox{.}055\cdot 10^{-34}Js are the Planck constant and the reduced Planck constant, respectively.
From this quantization rule (2), we can easily get
vR=\left(\dfrac{n\hbar}{m_e}\right) and then v^2R^2=\left(\dfrac{n\hbar}{m_e}\right)^2
Thus, using the result we got in (1) for the squared velocity of the electron in the circular orbit, we deduce the quantization rule for the orbits in the hydrogen atom according to Bohr’s hypotheses:
(3) \boxed{R_n=R(n)=\left(\dfrac{\hbar^2}{m_eK_Ce^2}\right)n^2}\leftrightarrow \boxed{R_n=a_Bn^2}
where n=1,2,3,\ldots,\infty again and the Bohr radius a_B is defined to be
(4) \boxed{a_B=\dfrac{\hbar^2}{m_eK_Ce^2}}
Inserting values into (4), we obtain the celebrated value of the Bohr radius
a_B\approx 0\mbox{.}53\AA=53pm=5\mbox{.}3\cdot 10^{-11}m
The third important consequence is the spectrum of energy levels in the hydrogen atom. To obtain the energy spectrum, there are two equivalent paths (in fact, they are the same): use the virial theorem, or use (1) in the total energy for the electron-proton system. The total energy of the hydrogen atom can be written
E=\mbox{Kinetic Energy}+\mbox{(electrostatic) Potential Energy}
Substituting (1) into this, we get exactly the expected expression from the virial theorem for a Coulomb 1/r potential (i.e. E=E_p/2):
(5) \boxed{E=-K_C\dfrac{e^2}{2R}}
Inserting into (5) the quantized values of the orbit, we deduce the famous and well-known formula for the spectrum of the hydrogen atom (known to Balmer and the spectroscopists at the end of the 19th century and the beginning of the 20th century):
(6) \boxed{E_n=E(n)=-\dfrac{m_eK_C^2e^4}{2\hbar^2n^2}=-\dfrac{m_e}{2}\left(\dfrac{K_Ce^2}{n\hbar}\right)^2=-\dfrac{\mbox{Ry}}{n^2}} \;\;\forall n=1,2,3,\ldots,\infty
and where we have defined the Rydberg (constant) as
(7) \boxed{\mbox{Ry}=\dfrac{m_e(K_Ce^2)^2}{2\hbar^2}=\dfrac{m_eK_C^2e^4}{2\hbar^2}=\dfrac{1}{2}\alpha^2 m_ec^2}
Its value is Ry=R_H=2.18\cdot 10^{-18}J=13\mbox{.}6eV. Here, the electromagnetic fine structure constant (alpha) is
\alpha=K_C\dfrac{e^2}{\hbar c}
and c is the speed of light. In fact, using the quantum relation
we can deduce that the Rydberg corresponds to a wavenumber
k=1\mbox{.}097\cdot 10^{7}m^{-1}
or a frequency
f=\nu=3\mbox{.}29\cdot 10^{15}Hz
and a wavelength
\lambda =912\AA=91\mbox{.}2nm
Please, check it yourself! :D.
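You can indeed check it yourself; here is a quick numerical sketch (mine, with rounded constants) reproducing the Bohr radius, the Rydberg, and the wavenumber/frequency/wavelength quoted above:

import math

hbar = 1.054571e-34      # reduced Planck constant, J s
h    = 6.62607e-34       # Planck constant, J s
m_e  = 9.109383e-31      # electron mass, kg
e    = 1.602177e-19      # elementary charge, C
K_C  = 8.987552e9        # Coulomb constant, N m^2 / C^2
c    = 2.997925e8        # speed of light, m/s

a_B   = hbar**2 / (m_e * K_C * e**2)
Ry    = m_e * (K_C * e**2)**2 / (2 * hbar**2)
alpha = K_C * e**2 / (hbar * c)

print(a_B)               # ~5.29e-11 m  = 0.53 Angstrom
print(Ry, Ry / e)        # ~2.18e-18 J  = 13.6 eV
print(1 / alpha)         # ~137.0
print(Ry / (h * c))      # wavenumber ~1.097e7 1/m
print(Ry / h)            # frequency  ~3.29e15 Hz
print(h * c / Ry * 1e9)  # wavelength ~91.2 nm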
The above results allowed Bohr to explain the spectral series of the hydrogen atom. He won the Nobel Prize due to this wonderful achievement…
Hydrogenic atoms
(and positronium, muonium,…)
In fact, it is straightforward to extend all these results to “hydrogenic” (“hydrogenoid”) atoms, i.e., to atoms with only a single electron BUT a nucleus with charge equal to Ze, where Z>1 is an integer (atomic) number greater than one! The easiest way to obtain the results is not to repeat the deduction but to rescale the nuclear charge, i.e., you plug in q_2=Ze, making the rescaling e\longrightarrow Ze of the electric charge (be aware of making the right scaling in the formulae). The final results for the radius and the energy spectrum are as follows:
A) From R_n=\left(\dfrac{\hbar^2}{m_eK_Ce^2}\right)n^2, with e\longrightarrow Ze, you get
(8) \boxed{\bar{R}_n=\bar{R}(n)=\dfrac{\hbar^2}{m_eK_CZe^2}n^2=\dfrac{a_Bn^2}{Z}}
B) From E_n=-m_e\dfrac{(K_Ce^2)^2}{2\hbar^2n^2}, with the rescaling e\longrightarrow Ze, you get
(9) \boxed{\bar{E}_n=\bar{E}(n)=-m_e\dfrac{Z^2(K_Ce^2)^2}{2\hbar^2n^2}=-\dfrac{Z^2\alpha^2m_ec^2}{2n^2}=-\dfrac{Z^2Ry}{n^2}}
Therefore, the consequence of the rescaling of the nuclear charge is that energy levels are “enlarged” by a factor Z^2 and that the orbits are “squeezed” or “contracted” by a factor 1/Z.
Exercise: Can you obtain the energy levels and the radius for positronium (an electron-positron system instead of an electron and a proton)? What happens with muonium (a strange substance formed by an electron orbiting an antimuon)? And the muonic atom (a muon orbiting a proton)? And a muon orbiting an antimuon? And the tau particle orbiting an antitau, or an electron orbiting an antitau, or a tau orbiting a proton (supposing that it were possible, of course, since the tau particle is unstable)? Calculate the “Bohr radius” and the “Rydberg” constant for positronium, muonium, the muonic atom (or the muon-antimuon atom) and the tauonium (or the tau-antitau atom). Hint: think about the reduced mass for the positronium and the muonium, then make a good mass/energy or radius rescaling.
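As a further hint, here is a minimal numerical sketch (mine; masses in units of the electron mass, and only the leading reduced-mass rescaling of the Rydberg is included):

Ry_eV = 13.606                              # hydrogen Rydberg (infinite nuclear mass)
m_e, m_p, m_mu = 1.0, 1836.15, 206.77       # electron, proton, muon masses (in m_e)

def ground_energy(Z, m1, m2, n=1):
    mu = m1 * m2 / (m1 + m2)                # reduced mass of the two-body system
    return -Z**2 * mu * Ry_eV / n**2        # E_n = -Z^2 (mu/m_e) Ry / n^2

print(ground_energy(1, m_e, m_p))           # hydrogen:        ~ -13.6 eV
print(ground_energy(1, m_e, m_e))           # positronium:     ~ -6.8 eV (Ry halves!)
print(ground_energy(1, m_e, m_mu))          # muonium:         ~ -13.5 eV
print(ground_energy(1, m_mu, m_p))          # muonic hydrogen: ~ -2500 eV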
Now, we can also calculate the velocity of an electron in the quantized orbits for the Bohr atom and the hydrogenic atom. Using (3) and (8),
m_evR=n\hbar\leftrightarrow vR=\dfrac{n\hbar}{m_e}\leftrightarrow v^2R^2=\dfrac{n^2\hbar^2}{m_e^2}
and inserting the quantized values of the orbit radius
so, for the Bohr atom (hydrogen)
(10) \boxed{v_n=v(n)=\dfrac{K_Ce^2}{\hbar n}=\dfrac{\alpha c}{n}}
In the case of hydrogenic atoms, the rescaling of the electric charge yields
(11) \boxed{\bar{v}_n=\bar {v}(n)=\dfrac{ZK_Ce^2}{\hbar n}=\dfrac{Z\alpha c}{n}}
so, hydrogenic atoms have an “enlarged” electron velocity in the orbits, by a factor of Z.
The feynmanium
This result for velocities is very interesting. Suppose we consider the fundamental level n=1 (or the orbital 1s in Quantum Mechanics since, magically or not, Quantum Mechanics reproduces the results for the Bohr atom and the hydrogenic atoms we have seen here, plus other effects we will not discuss today, relative to spin and some energy splittings for perturbed atoms). Then, the last formula yields, in the hydrogenic case,
v_1=Z\alpha c
Furthermore, suppose now in addition that we have some “superheavy” (hydrogenic) atom with, say, Z>137 (note that \alpha\approx 1/137 at ordinary energies), say Z=138 or greater. Then, the electron moves faster than the speed of light!!!!! That is, for hydrogenic atoms with Z>137, considering the fundamental level, the electron would move with v>c. This fact is “surprising”. The element with Z=137 is called untriseptium (Uts) by the IUPAC rules, but it is often called the feynmanium (Fy), since R.P. Feynman often remarked on the importance of this result and mystery. Of course, Special Relativity forbids this option. Therefore, either something is wrong, or Z=137 is the last element allowed by the Quantum Rules (or/and the Bohr atom). Obviously, we could claim that this result is “wrong” since we have not considered the relativistic quantum corrections, or we have not made a good relativistic treatment of this system. It is not as simple as you might think or imagine, since using a “naive” relativistic treatment, e.g., using the Dirac equation, we obtain for the fundamental level of the hydrogenic atom the spectrum
(12) \boxed{E_1=E=m_ec^2\sqrt{1-Z^2\alpha^2}}. This result can be obtained from the Dirac equation spectrum for the hydrogen atom (in a Coulomb potential):
(13) \boxed{E_{n,k;Z,\alpha}=E(n,k;Z,\alpha)=mc^2\left[1+\left(\dfrac{Z\alpha}{n-\vert k\vert+\sqrt{k^2-Z^2\alpha^2}}\right)^2\right]^{-1/2}}
where n is a nonnegative integer number n=N+\vert k\vert and k^2=(j+\frac{1}{2})^2. Putting these into numbers, we get
(Images: the first levels of the hydrogen atom spectrum from the Dirac equation, shown numerically and, equivalently, with comments from the slides.)
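As a minimal check (my own sketch), evaluating the Dirac ground-state energy E_1=m_ec^2\sqrt{1-Z^2\alpha^2} numerically shows the breakdown right at Z=138:

import cmath

alpha = 1 / 137.035999                       # fine structure constant
for Z in (1, 100, 137, 138):
    E_over_mc2 = cmath.sqrt(1 - (Z * alpha) ** 2)
    print(Z, E_over_mc2)
# Z=137 gives ~0.0229 (deeply bound but still real);
# Z=138 gives a purely imaginary value: the oscillating, ill-defined case.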
If you plug Z=138 or more into the above equation from the Dirac spectrum, you obtain an imaginary value of the energy, and thus an oscillating (unbound) system! Therefore, the problem for atoms with high Z persists even taking the relativistic corrections into account! What is the solution? Nobody is sure. Greiner et al. suggest that, taking into account the finite (extended) size of the nuclei, the problem is “solved” until Z\approx 172. Beyond, i.e., with Z>172, you cannot be sure that quantum fluctuations of strong fields do not introduce vacuum pair-creation effects that make the nuclei, and thus the atoms, unstable at those high values of Z. Some people believe that the issues arise even before, around Z=150, or even that strong-field effects can make atoms below Z=137 non-existent. That is why the search for superheavy elements (SHE) is interesting not only from the chemical viewpoint but also from the fundamental physics viewpoint: it challenges our understanding of Quantum Mechanics and Special Relativity (and their combination!!!!).
Is the feynmanium (Z=137) the last element? This hypothetical element and other superheavy elements (SHE) seem to hint the end of the Periodic Table. Is it true? Options:
1st. The feynmanium (Fy) or Untriseptrium (Uts) is the last element of the Periodic Table.
2nd. Greiner et al. limit around Z=172. References:
(i) B. Fricke, W. Greiner and J. T. Waber, Theor. Chim. Acta, 1971, 21, 235.
(ii) W. Greiner and J. Reinhardt, Quantum Electrodynamics, 4th edn (Springer, Berlin, 2009).
3rd. Other predictions of an end to the Periodic Table include Z=128 (John Emsley) and Z=155 (Albert Khazan). Even Seaborg, from his knowledge and his prediction of an island of stability around (Z, N) = (126, 184), \ldots, left this question open to interpretation and experimental search!
4th. There is no end of the Periodic Table. According to Greiner et al., in fact, even though superheavy nuclei pose a challenge to Quantum Mechanics and Special Relativity, as long as there are electrons in the orbitals (a condition for an element to be a well-defined object) there is no end of the Periodic Table (even though there is some probability for an electron-positron pair to be produced near a superheavy nucleus, the presence of electrons does not forbid the element; strong-field effects are important there, and it would be great to produce these elements and to learn their properties, both quantum and relativistic!). Therefore, it would be very, very interesting to test the superheavy-element "zone" of the Periodic Table, since it is a place where (strong) quantum effects and (non-negligible) relativistic effects both matter. Then, if both theories are right, superheavy elements are a beautiful and wonderful arena in which to understand how to combine the two greatest theories and (unfinished?) revolutions of the 20th century. What an awesome role for the "elementary" and "fundamental" superheavy (composite) elements!
Probably, there is no limit to the number of (chemical) elements in our Universe… But we DO NOT KNOW!
In conclusion: what will happen for superheavy elements with Z>172 (or Z>126, 128, 137, etc.) remains unresolved with our current knowledge. It is one of the great remaining mysteries of theoretical Chemistry!
More about the fine-structure constant, the Sommerfeld corrections, and the Dirac equation + QED (Quantum ElectroDynamics) corrections to the hydrogen spectrum can be found in the slides (think it through yourself!).
Final remarks (for experts only): some comments about the self-adjointness of the Dirac operator for high values of Z in Coulomb potentials. It is a well-known fact that the Dirac operator for the hydrogen problem is essentially self-adjoint if Z<119. Therefore, it is valid for all the currently known elements (as of June 2013, every element of the 7th period of the Periodic Table has been created, so we know that chemical elements exist at least up to Z=118; searches for superheavy elements beyond that Z have so far given negative results). However, for 119\leq Z\leq 137 a "self-adjoint extension" must be chosen, and it requires a precise physical meaning. A good criterion is that the expectation value of every component of the Hamiltonian be finite in the selected basis. Indeed, the solution of the Coulomb problem for the hydrogenic atom using the Dirac equation makes use of hypergeometric functions that are well-posed for any Z\leq 137. If Z is greater than that critical value, we face the oscillating-energy problem discussed above, so we have to consider the effect of the finite size of the nucleus and/or handle the relativistic corrections more carefully. The main idea is this: the s states start to be destroyed above Z=137, and the p states begin to be destroyed above Z=274. Note that this differs from the result of the Klein-Gordon equation, which predicts s states being destroyed above Z=68 and p states destroyed above Z=82. In summary, the superheavy elements are interesting because they challenge our knowledge of both Quantum Mechanics and Special Relativity. What a wonderful (final) fate for the chemical elements: the superheavy elements will test whether the "marriage" between Quantum Mechanics and Special Relativity goes further or ends in divorce!
Epilogue: What do you think about the following questions? This is a test for you, eager readers…
1) Is there an ultimate element?
2) Is there a theory of everything (TOE)?
3) Is there an ultimate chemical element?
4) Is there a single “ultimate” principle?
5) How many elements does the Periodic Table have?
6) Is the feynmanium the last element?
7) Are Quantum Mechanics and Special Relativity consistent with each other?
8) Is Quantum Mechanics a fundamental and “ultimate” theory for atoms and molecules?
9) Is Special Relativity a fundamental and “ultimate” theory for “quick” particles?
10) Are the atomic shells and atomic structure completely explained by QM and SR?
11) Are the nuclei and their shell structure completely explained by QM and SR?
12) Do you think all this stuff is somehow important and relevant for Physics or Chemistry (or even for Mathematics)?
13) Will we find superheavy elements the next decade?
14) Will we find superheavy elements this century?
15) Will we find that there are some superheavy elements stable in the island of stability (Seaborg) with amazing properties and interesting applications?
16) Did you like/enjoy this post?
17) When you were a teenager, how many chemical elements did you know? How many chemical elements were known?
18) Did you learn/memorize the whole Periodic Table? If you did not, would you?
19) What is your favourite chemical element?
20) Did you know that every element in the 7th period of the Periodic Table has been established to exist, but the elements E113, E115, E117 and E118 were not yet named (circa 30th June, 2013) and keep their systematic (IUPAC) names ununtrium, ununpentium, ununseptium and ununoctium? By the way, the last named elements were copernicium (Cn, E112), flerovium (Fl, E114) and livermorium (Lv, E116)… |
1395cf253e6ef6f2 | General relativity (GR, also known as the general theory of relativity or GTR) is the geometric theory of gravitation published by Albert Einstein in 1915 [13] and the current description of gravitation in modern physics. General relativity has been described as the most beautiful of all existing physical theories. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations.
In the decades that followed, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein himself had shown in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"). Similarly, a 1919 expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of May 29, 1919, making Einstein instantly famous. Yet the theory entered the mainstream of theoretical physics and astrophysics only with the developments between approximately 1960 and 1975, now known as the golden age of general relativity. [16] Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology, too, became amenable to direct observational tests.
From classical mechanics to general relativity
Geometry of Newtonian gravity
Relativistic generalization
Einstein's equations
Einstein's field equations
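In the absence of a cosmological constant the field equations read

R_{\mu\nu} - \tfrac{1}{2}R\, g_{\mu\nu} = \kappa\, T_{\mu\nu}.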
On the right-hand side, T_{\mu\nu} is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits (or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics), the proportionality constant can be fixed as κ = 8πG/c⁴, with G the gravitational constant and c the speed of light. When there is no matter present, so that the energy–momentum tensor vanishes, the result is the vacuum Einstein equations, R_{\mu\nu} = 0.
Alternatives to general relativity
Definition and basic applications
Definition and basic properties
As it is constructed using tensors, general relativity exhibits general covariance : its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems . Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent . It thus satisfies a more stringent general principle of relativity , namely that the laws of physics are the same for all observers. Locally , as expressed in the equivalence principle, spacetime is Minkowskian , and the laws of physics exhibit local Lorentz invariance .
Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution , the Reissner–Nordström solution and the Kerr metric , each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes , each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub-NUT solution (a model universe that is homogeneous , but anisotropic ), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture ).
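For concreteness, the first of these, the Schwarzschild solution, reads in Schwarzschild coordinates (with G = c = 1)

ds^2 = -\left(1 - \frac{2M}{r}\right)dt^2 + \left(1 - \frac{2M}{r}\right)^{-1}dr^2 + r^2\,(d\theta^2 + \sin^2\theta\, d\varphi^2),

where M is the mass of the central body.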
Consequences of Einstein's theory
Gravitational time dilation and frequency shift
Gravitational redshift has been measured in the laboratory [14] and using astronomical observations. [14] Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks , [14] while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). [14] Tests in stronger gravitational fields are provided by the observation of binary pulsars . [14] All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.
Light deflection and gravitational time delay
Gravitational waves
Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes , varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.
Orbital effects and the relativity of direction
Precession of apsides
The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism . It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude .
In general relativity the perihelion shift σ, expressed in radians per revolution, is approximately given by: [27]
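\sigma = \frac{24\pi^3 a^2}{T^2 c^2 (1 - e^2)},

where a is the semi-major axis, T the orbital period, and e the eccentricity of the orbit; for Mercury this amounts to the famous 43 arcseconds per century.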
Orbital decay
Geodetic precession and frame-dragging
Several relativistic effects are directly related to the relativity of direction. One is geodetic precession : the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible (" parallel transport "). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging . More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.
Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere , rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. Also the Mars Global Surveyor probe around Mars has been used. [30] [33]
Astrophysical applications
Gravitational lensing
The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring , or partial rings called arcs. The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such " microlensing events" have been observed.
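The characteristic angular scale of these images is the Einstein radius; for a point-like lens of mass M it takes the standard form

\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{ls}}{D_l D_s}},

where D_l, D_s and D_{ls} denote the distances to the lens, to the source, and from lens to source, respectively.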
Gravitational wave astronomy
Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10⁻⁹ to 10⁻⁶ Hertz frequency range, which originate from binary supermassive black holes. [36] A European space-based detector, eLISA/NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015. [37]
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum . They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string . In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger. [19] [24]
Black holes and other compact objects
Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution , neutron stars of around 1.4 solar masses , and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.
Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion , the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets , focused beams of highly energetic particles that are being flung into space at almost light speed. General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.
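Cosmology

On the largest scales, the field equations are augmented by a cosmological-constant term,

R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa\, T_{\mu\nu}.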
Here Λ is the cosmological constant and g_{\mu\nu} is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, [16] and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation. [16]
Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. [16] There is no generally accepted description of this new kind of matter, within the framework of known particle physics [16] or otherwise. [16] Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state , known as dark energy , the nature of which remains unclear. [16]
An inflationary phase , [16] an additional phase of strongly accelerated expansion at cosmic times of around 10 −33 seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. [16] Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. [16] However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. [16] An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity . An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity , below).
Time travel
Advanced concepts
Causal structure and global geometry
Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems , proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture ). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.
Evolution equations
To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism . These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist , and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity.
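In such a 3+1 split the line element takes the standard ADM form

ds^2 = -N^2\,dt^2 + \gamma_{ij}\,(dx^i + \beta^i\,dt)(dx^j + \beta^j\,dt),

with lapse function N, shift vector \beta^i and spatial metric \gamma_{ij}; the Einstein equations then become constraint equations plus evolution equations for \gamma_{ij}.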
Global and quasi-local quantities
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" ( ADM mass ) or suitable symmetries ( Komar mass ). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics , it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems , such as a more precise formulation of the hoop conjecture.
Relationship with quantum theory
If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid state physics , would be the other. However, how to reconcile quantum theory with general relativity is still an open question.
Quantum field theory in curved spacetime
Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation, leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.
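The associated Hawking temperature of a Schwarzschild black hole of mass M takes the standard form

T_H = \frac{\hbar c^3}{8\pi G M k_B},

inversely proportional to the mass, so lighter black holes are hotter and evaporate faster.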
Quantum gravity
The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.
[Figure: artist's impression of the space-borne gravitational wave detector LISA.]
Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability ").
One attempt to overcome these limitations is string theory , a quantum theory not of point particles , but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution , it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory , which would constitute a uniquely defined and consistent theory of quantum gravity.
Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge Calculus , dynamical triangulations , causal sets , twistor models or the path integral based models of quantum cosmology .
Current status
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics. [13] Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, [13] while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). [13] In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on September 14, 2015. [25][13] A century after its introduction, general relativity remains a highly active area of research. [13]
1. . . Retrieved 18 April 2016 .
2. O'Connor, J.J. and Robertson, E.F. (1996), . , , , Scotland. Retrieved 2015-02-04.
5. , ch. 9 to 15, ; an up-to-date collection of current research, including reprints of many of the original articles, is ; an accessible overview can be found in , pp. 110ff. Einstein's original papers are found in , volumes 4 and 6. An early key article is , cf. , ch. 9. The publication featuring the field equations is , cf. , ch. 11–15
6. , and (later complemented in )
7. , cf. , ch. 15e
8. Hubble's original article is ; an accessible overview is given in , ch. 2–4
9. As reported in . Einstein's condemnation would prove to be premature, cf. the section Cosmology , below
10. , pp. 253–254
11. ,
12. , ch. 16
13. Thorne, Kip (2003). . Cambridge University Press. p. 74. ISBN 0-521-82081-2.
14. , ch. 7.8–7.10, , ch. 3–9
16. Section Cosmology and references therein; the historical development is in
17. The following exposition re-traces that of , sec. 1
18. , ch. 1
19. , pp. 5f
20. , sec. 2.4, , sec. 2
21. , ch. 2
22. , sec. 1.2, , . The simple thought experiment in question was first described in
23. , pp. 10f
24. Good introductions are, in order of increasing presupposed knowledge of mathematics, , , and ; for accounts of precision experiments, cf. part IV of
25. An in-depth comparison between the two symmetry groups can be found in
26. , sec. 22, , ch. 1 and 2
27. , sec. 2.3
28. , sec. 1.4, , sec. 5.1
29. , pp. 17ff; a derivation can be found in , ch. 12. For the experimental evidence, cf. the section Gravitational time dilation and frequency shift , below
30. , sec. 1.13; for an elementary account, see , ch. 2; there are, however, some differences between the modern version and Einstein's original concept used in the historical derivation of general relativity, cf.
32. , p. 16, , sec. 7.2, , sec. 2.8
33. , pp. 19–22; for similar derivations, see sections 1 and 2 of ch. 7 in . The Einstein tensor is the only divergence-free tensor that is a function of the metric coefficients, their first and second derivatives at most, and allows the spacetime of special relativity as a solution in the absence of sources of gravity, cf. . The tensors on both side are of second rank, that is, they can each be thought of as 4×4 matrices, each of which contains ten independent terms; hence, the above represents ten coupled equations. The fact that, as a consequence of geometric relations known as Bianchi identities , the Einstein tensor satisfies a further four identities reduces these to six independent equations, e.g. , sec. 8.3
34. , sec. 7.4
35. , , sec. 3 in ch. 7, , sec. 7.2, and , respectively
36. , ch. 4, , ch. 7 or, in fact, any other textbook on general relativity
37. At least approximately, cf.
38. , p. xi
39. , sec. 4.4
40. , sec. 4.1
42. section 5 in ch. 12 of
43. Introductory chapters of
44. A review showing Einstein's equation in the broader context of other PDEs with physical significance is
45. For background information and a list of solutions, cf. ; a more recent review can be found in
46. , ch. 3,5,6
47. , ch. 4, sec. 3.3
48. Brief descriptions of these and further interesting solutions can be found in , ch. 5
49. For instance , sec. 4.4
50. , sec. 4.1 and 4.2
51. , sec. 3.2, , ch. 4
52. , pp. 24–26 vs. pp. 236–237 and , pp. 164–172. Einstein derived these effects using the equivalence principle as early as 1907, cf. and the description in , pp. 196–198
53. , pp. 24–26; , § 38.5
54. Pound–Rebka experiment , see , ; ; a list of further experiments is given in , table 4.1 on p. 186
55. ; the most recent and most accurate Sirius B measurements are published in .
56. Starting with the Hafele–Keating experiment , and , and culminating in the Gravity Probe A experiment; an overview of experiments can be found in , table 4.1 on p. 186
57. GPS is continually tested by comparing atomic clocks on the ground and aboard orbiting satellites; for an account of relativistic effects, see and
58. and
59. General overviews can be found in section 2.1. of Will 2006; Will 2003, pp. 32–36; , sec. 4.2
60. , pp. 164–172
61. Cf. for the classic early measurements by Arthur Eddington's expeditions. For an overview of more recent measurements, see , ch. 4.3. For the most precise direct modern observations using quasars, cf.
62. This is not an independent axiom; it can be derived from Einstein's equations and the Maxwell Lagrangian using a WKB approximation , cf. , sec. 5
63. , sec. 1.3
64. , sec. 1.16; for the historical examples, , pp. 202–204; in fact, Einstein published one such derivation as . Such calculations tacitly assume that the geometry of space is Euclidean , cf.
65. From the standpoint of Einstein's theory, these derivations take into account the effect of gravity on time, but not its consequences for the warping of space, cf. , sec. 11.11
66. For the Sun's gravitational field using radar signals reflected from planets such as Venus and Mercury, cf. , , ch. 8, sec. 7; for signals actively sent back by space probes ( transponder measurements), cf. ; for an overview, see , table 4.4 on p. 200; for more recent measurements using signals received from a pulsar that is part of a binary system, the gravitational field causing the time delay being that of the other pulsar, cf. , sec. 4.4
67. , sec. 7.1 and 7.2
68. Einstein, A (June 1916). . Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften Berlin . part 1: 688–696.
69. Einstein, A (1918). . Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften Berlin . part 1: 154–167.
70. Castelvecchi, Davide; Witze, Alexandra (February 11, 2016). Nature News. doi: . Retrieved 2016-02-11.
71. B. P. Abbott; et al. (LIGO Scientific Collaboration and Virgo Collaboration) (2016). . Physical Review Letters . 116 (6): 061102. arXiv : Freely accessible . Bibcode :. doi :. PMID .
72. . . Retrieved 2016-02-11 .
73. Most advanced textbooks on general relativity contain a description of these properties, e.g. , ch. 9
74. For example
75. , ch. 13
76. ,
77. See for a brief introduction to the methods of numerical relativity, and for the connection with gravitational wave astronomy
78. , pp. 48–49, , pp. 253–254
79. , sec. 11.9
80. , pp. 177–181
81. In consequence, in the parameterized post-Newtonian formalism (PPN), measurements of this effect determine a linear combination of the terms β and γ, cf. , sec. 3.5 and , sec. 7.3
82. The most precise measurements are VLBI measurements of planetary positions; see , ch. 5, , sec. 3.5, ; for an overview, , pp. 406–407
83. Dediu, Adrian-Horia; Magdalena, Luis; Martín-Vide, Carlos (2015). (illustrated ed.). Springer. p. 141. ISBN 978-3-319-26841-5.
84. A figure that includes error bars is fig. 7 in , sec. 5.1
85. , , pp. 317–321, , pp. 70–86
86. ; for the pulsar discovery, see ; for the initial evidence for gravitational radiation, see
87. , §14.5, , §11.4
88. , sec. 9.6, , sec. 7.8
89. ,
90. A mission description can be found in ; a first post-flight evaluation is given in ; further updates will be available on the mission website .
91. , sec. 4.2.1, , pp. 469–471
92. , sec. 4.7, , sec. 9.7; for a more recent review, see
93. , ,
94. [Image: Penrose–Carter diagram of an infinite Minkowski universe]
95. Iorio L. (June 2010), "On the Lense–Thirring test with the Mars Global Surveyor in the gravitational field of Mars", Central European Journal of Physics , 8 (3): 509–513, arXiv : Freely accessible , Bibcode :, doi :
96. For overviews of gravitational lensing and its applications, see and
97. For a simple derivation, see , ch. 23; cf. , sec. 3
98. Images of all the known lenses can be found on the pages of the CASTLES project,
99. , sec. 3.7
100. , ,
101. Image
102. . ESA . Retrieved 2012-04-23 .
103. . . Retrieved 2016-02-11 .
104. , lectures 19 and 21
105. , sec. 3
106. and the accompanying summary
107. , sec. 8.2.4
108. For the basic mechanism, see , sec. 17.2; for more about the different types of astronomical objects associated with this, cf.
109. For a review, see . To a distant observer, some of these jets even appear to move faster than light ; this, however, can be explained as an optical illusion that does not violate the tenets of relativity, see
110. For stellar end states, cf. or, for more recent numerical work, , sec. 4.1; for supernovae, there are still major problems to be solved, cf. ; for simulating accretion and the formation of jets, cf. , sec. 4.2. Also, relativistic lensing effects are thought to play a role for the signals received from X-ray pulsars , cf.
111. The evidence includes limits on compactness from the observation of accretion-driven phenomena (" Eddington luminosity "), see , observations of stellar dynamics in the center of our own Milky Way galaxy, cf. , and indications that at least some of the compact objects in question appear to have no solid surface, which can be deduced from the examination of X-ray bursts for which the central compact object is either a neutron star or a black hole; cf. for an overview, , sec. 5. Observations of the "shadow" of the Milky Way galaxy's central black hole horizon are eagerly sought for, cf.
112. Originally ; cf. , pp. 285–288
113. , ch. 2
115. E.g. with WMAP data, see
116. These tests involve the separate observations detailed further on, see, e.g., fig. 2 in
117. ; for a recent account of predictions, see ; an accessible account can be found in ; compare with the observations in , , , and
118. , ,
119. , for a pedagogical introduction, see , ch. 11; for the initial detection, see and, for precision measurements by satellite observatories, ( COBE ) and (WMAP). Future measurements could also reveal evidence about gravitational waves in the early universe; this additional information is contained in the background radiation's polarization , cf. and
120. Evidence for this comes from the determination of cosmological parameters and additional observations involving the dynamics of galaxies and galaxy clusters cf. , ch. 18, evidence from gravitational lensing, cf. , sec. 4.6, and simulations of large-scale structure formation, see
121. , ch. 12, ; in particular, observations indicate that all but a negligible portion of that matter is not in the form of the usual elementary particles ("non- baryonic matter"), cf. , ch. 12
123. ; an accessible overview is given in . Here, too, scientists have argued that the evidence indicates not a new form of energy, but the need for modifications in our cosmological models, cf. , sec. 10; aforementioned modifications need not be modifications of general relativity, they could, for example, be modifications in the way we treat the inhomogeneities in the universe, cf.
124. A good introduction is ; for a more recent review, see
125. More precisely, these are the flatness problem , the horizon problem , and the monopole problem ; a pedagogical introduction can be found in , sec. 6.4, see also , sec. 9.1
126. , sec. 5,6
128. , sec. 2
129. , , sec. 11.1, , sec. 6.8, 6.9
130. , sec. 9.2–9.4 and , ch. 6
131. ; for more recent numerical studies, see , sec. 2.1
132. . A more exact mathematical description distinguishes several kinds of horizon, notably event horizons and apparent horizons cf. , pp. 312–320 or , sec. 12.2; there are also more intuitive definitions for isolated systems that do not require knowledge of spacetime properties at infinity, cf.
133. For first steps, cf. ; see , sec. 9.3 or , ch. 9 and 10 for a derivation, and as well as as overviews of more recent results
134. The laws of black hole mechanics were first described in ; a more pedagogical presentation can be found in ; for a more recent review, see , ch. 2. A thorough, book-length introduction including an introduction to the necessary mathematics . For the Penrose process, see
135. ,
136. The fact that black holes radiate, quantum mechanically, was first derived in ; a more thorough derivation can be found in . A review is given in , ch. 3
137. , sec. 4.4.4, 4.4.5
138. Horizons: cf. , sec. 12.4. Unruh effect: , cf. , ch. 3
139. , sec. 8.1, , sec. 9.1
140. , ch. 2; a more extensive treatment of this solution can be found in , ch. 3
141. , ch. 4; for a more extensive treatment, cf. , ch. 6
142. ; a closer look at the singularity itself is taken in , sec. 1.2
144. Namely when there are trapped null surfaces , cf.
145. The conjecture was made in ; for a more recent review, see . An accessible exposition is given by
146. The restriction to future singularities naturally excludes initial singularities such as the big bang singularity, which in principle be visible to observers at later cosmic time. The cosmic censorship conjecture was first presented in ; a textbook-level account is given in , pp. 302–305. For numerical results, see the review , sec. 2.1
147. , sec. 7.1
148. ; for a pedagogical introduction, see , §21.4–§21.7
149. and ; for a pedagogical introduction, see , ch. 10; an online review can be found in
151. , §20.4
152. ; for a pedagogical introduction, see , sec. 11.2; although defined in a totally different way, it can be shown to be equivalent to the ADM mass for stationary spacetimes, cf.
153. For a pedagogical introduction, see , sec. 11.2
155. , ch. 5
157. An overview of quantum theory can be found in standard textbooks such as ; a more elementary account is given in
158. , , ; a more accessible overview is
159. ,
160. For Hawking radiation , ; an accessible introduction to black hole evaporation can be found in
161. , ch. 3
162. Put simply, matter is the source of spacetime curvature, and once matter has quantum properties, we can expect spacetime to have them as well. Cf. , sec. 2
163. , p. 407
164. A timeline and overview can be found in
165. In particular, a perturbative technique known as renormalization , an integral part of deriving predictions which take into account higher-energy contributions, cf. , ch. 17, 18, fails in this case; cf. , ; for a recent comprehensive review of the failure of perturbative renormalizability for quantum gravity see
166. An accessible introduction at the undergraduate level can be found in ; more complete overviews can be found in and
167. At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different ( electric and other) charges , e.g. . The theory is successful in that one mode will always correspond to a graviton , the messenger particle of gravity, e.g. , sec. 2.3, 5.3
168. , sec. 4.2
169. , ch. 31
170. ,
171. , sec. 3
172. These variables represent geometric gravity using mathematical analogues of electric and magnetic fields ; cf. ,
173. For a review, see ; more extensive accounts can be found in , as well as in the lecture notes
174. ,
175. , ch. 33 and refs therein
176. ,
177. , pp. 52–59, 98–122; , sec. 34.1, ch. 30
178. section Quantum gravity , above
179. section Cosmology , above
180. A review of the various problems and the techniques being developed to overcome them, see
181. See for an account up to that year; up-to-date news can be found on the websites of major detector collaborations such as 2007-02-18 at the Wayback Machine . and
182. For the most recent papers on gravitational wave polarizations of inspiralling compact binaries, see , and ; for a review of work on compact binaries, see and ; for a general review of experimental tests of general relativity, see
183. See, e.g., the electronic review journal |
238a43b9ecec6685 | A Concise Introduction to Geometric Numerical Integration by Sergio Blanes, Fernando Casas
By Sergio Blanes, Fernando Casas
Discover how geometric integrators preserve the main qualitative properties of continuous dynamical systems
A Concise Introduction to Geometric Numerical Integration presents the main themes, techniques, and applications of geometric integrators for researchers in mathematics, physics, astronomy, and chemistry who are already familiar with numerical tools for solving differential equations. It also offers a bridge from traditional training in the numerical analysis of differential equations to understanding recent, advanced research literature on numerical geometric integration.
The book first examines high-order classical integration methods from the structure-preservation point of view. It then illustrates how to construct high-order integrators via the composition of basic low-order methods and analyzes the idea of splitting. It next reviews symplectic integrators constructed directly from the theory of generating functions as well as the important class of variational integrators. The authors also explain the relationship between the preservation of the geometric properties of a numerical method and the observed favorable error propagation in long-time integration. The book concludes with an analysis of the applicability of splitting and composition methods to certain classes of partial differential equations, such as the Schrödinger equation and other evolution equations.
The motivation of geometric numerical integration is not only to develop numerical methods with improved qualitative behavior but also to provide more accurate long-time integration results than those obtained by general-purpose algorithms. Accessible to researchers and post-graduate students from diverse backgrounds, this introductory book gets readers up to speed on the ideas, methods, and applications of this field. Readers can reproduce the figures and results given in the text using the MATLAB® programs and model files available online.
Best popular & elementary books
Numerical embedded computing
Mathematical algorithms are essential for all assembly language and embedded system engineers who develop software for microprocessors. This book describes techniques for developing mathematical routines, from simple multibyte multiplication to finding roots to a Taylor series. All source code is available on disk in MS/PC-DOS format.
Morse Theory
One of the most cited books in mathematics, John Milnor's exposition of Morse theory has been the most important book on the subject for more than forty years. Morse theory was developed in the 1920s by mathematician Marston Morse. (Morse was on the faculty of the Institute for Advanced Study, and Princeton published his Topological Methods in the Theory of Functions of a Complex Variable in the Annals of Mathematics Studies series in 1947.)
A History of the mathematical theory of probability from the time of Pascal to that of Laplace
This is a reproduction of a book published before 1923. This book may have occasional imperfections such as missing or blurred pages, poor pictures, errant marks, and so on, that were either part of the original artifact or were introduced by the scanning process. We believe this work is culturally important and, despite the imperfections, have elected to bring it back into print as part of our continuing commitment to the preservation of printed works worldwide.
Additional info for A Concise Introduction to Geometric Numerical Integration
Sample text
10. Given the quadratic function T(p) = ½ pᵀM⁻¹p, where M is a positive definite matrix, and a general function V(q), compute the following Poisson brackets: (i) {V(q), T(p)}, (ii) {V(q), {V(q), T(p)}}, (iii) {V(q), {V(q), {V(q), T(p)}}}. 11. Show that (56) is a second-order symplectic integrator. 12. Given the domain D₀ = B_{1/2}(3/2, 0) of area S(D₀) = π/4, apply 2 steps of length h = π/6 of the explicit Euler method for the pendulum with k = 1 and compute S(D₂). Hint: ∂(q₂, p₂)/∂(q₀, p₀) = [∂(q₂, p₂)/∂(q₁, p₁)]·[∂(q₁, p₁)/∂(q₀, p₀)] = (1 + h² cos q₁)(1 + h² cos q₀); next, write q₁ in terms of q₀, p₀ and then use polar coordinates.
This can be used in computer-assisted proofs in dynamical systems. The accuracy is controlled by the degree r of the polynomial approximating the solution: if r is large enough, the error in the preservation of invariants will be below the round-off error. The method (1.12) with k = 1 reads as follows for the step (qₙ, pₙ) → (qₙ₊₁, pₙ₊₁): [displayed update formulas, garbled in extraction; they advance qₙ and pₙ through terms in h²/2 built from sin qₙ, cos qₙ and pₙ]. Runge–Kutta methods. Introduction. For simplicity in the presentation we take a constant step size h in what follows. The aim is to approximate the solution up to terms of O(hʳ), r > 1, without computing any derivative of f, but only re-evaluating f at intermediate points between (tₙ, xₙ) and (tₙ₊₁, xₙ₊₁) [130].
The motivation for developing such structure-preserving algorithms arises independently in areas of research as diverse as celestial mechanics, molecular dynamics, control theory, particle accelerators physics and numerical analysis [121, 139, 160, 181, 182]. Although diverse, the systems appearing in these areas have one important common feature. They all preserve some underlying geometric structure which influences the qualitative nature of the phenomena they produce. In the field of geometric numerical integration these properties are built into the numerical method, which gives the method an improved qualitative behavior, but also allows for a significantly more accurate longtime integration than with general-purpose methods.
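A minimal sketch of the contrast in question, assuming the pendulum with k = 1 and an illustrative step size (the method shown is the standard symplectic Euler scheme, not a program from the book):

```python
import math

h = 0.1          # step size (illustrative choice)
steps = 1000     # number of steps

def explicit_euler(q, p):
    # Ordinary Euler: no geometric structure is preserved.
    return q + h * p, p - h * math.sin(q)

def symplectic_euler(q, p):
    # Update p first, then q with the new p: a symplectic map.
    p_new = p - h * math.sin(q)
    return q + h * p_new, p_new

def energy(q, p):
    # Pendulum Hamiltonian H = p^2/2 - cos(q), with k = 1.
    return 0.5 * p * p - math.cos(q)

qe = qs = 1.0
pe = ps = 0.0
H0 = energy(qe, pe)
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe)
    qs, ps = symplectic_euler(qs, ps)

print("energy drift, explicit Euler:  ", energy(qe, pe) - H0)
print("energy drift, symplectic Euler:", energy(qs, ps) - H0)
# The symplectic method's energy error stays bounded, while the
# explicit Euler error grows steadily with the integration time.
```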
Rated 4.46 of 5 – based on 22 votes |
593e7b8d2e777b4c | Acta Physica Sinica
ISSN 1000-3290
CN 11-1958/O4
Acta Phys. Sin--2013, 62 (11) Published: 05 June 2013
Super-resolution focusing of time reversal electromagnetic waves in metal wire array medium
Zhou Hong-Cheng, Wang Bing-Zhong, Ding Shuai, Ou Hai-Yan
Acta Physica Sinica. 2013, 62 (11): 114101 doi: 10.7498/aps.62.114101
With an existing metal wire array structure, this paper verifies the temporal and spatial focusing properties of time reversal techniques and confirms the super-resolution focusing properties of time-reversed electromagnetic waves. Since the metal wire array provides an evanescent-wave channel, by changing the signal excitations fed to the time reversal mirror (TRM) this paper achieves satisfactory simulation results for subwavelength off-site imaging. The analysis and simulation results indicate that with the time reversal technique we can use traditional materials and equipment to achieve super-resolution confocal imaging in the far field, and extract and analyze the source signal at different positions.
Shielding effectiveness of an apertured rectangular cavity against the near-field electromagnetic waves
Jiao Chong-Qing, Niu Shuai
Acta Physica Sinica. 2013, 62 (11): 114102 doi: 10.7498/aps.62.114102
The shielding effectiveness of an apertured rectangular cavity against the near-field waves of both electric and magnetic dipoles is investigated theoretically by using an extended equivalent circuit method. Both electric and magnetic shielding effectivenesses are calculated as functions of the distance between the dipoles and the enclosure. It is shown that the near-field shielding effectiveness is lower than the far-field (plane-wave) shielding effectiveness. Also, in the near-field region, the shielding effectiveness decreases markedly as the source-to-enclosure distance decreases. Based on Bethe's small-aperture coupling theory, analytical formulas are presented to describe the quantitative relation between the near-field and far-field shielding effectivenesses. The results from the equivalent circuit method are in good agreement with the relation obtained from Bethe's theory.
Simulation of electromagnetic soliton radiography under laser-produced proton beam
Teng Jian, Zhu Bin, Wang Jian, Hong Wei, Yan Yong-Hong, Zhao Zong-Qing, Cao Lei-Feng, Gu Yu-Qiu
Acta Physica Sinica. 2013, 62 (11): 114103 doi: 10.7498/aps.62.114103
While propagating through an underdense plasma, a laser experiences significant energy loss and can be trapped in the plasma as its frequency undergoes a redshift; thus an electromagnetic (EM) soliton is formed. The EM field distribution at different stages is constructed for the soliton in terms of basic theory and particle-in-cell (PIC) simulation. Radiography of solitons by laser-accelerated MeV protons is investigated using Monte Carlo methods. Influencing factors such as proton energy and source size are analyzed. Time-resolved radiography of the soliton is also carried out, since protons accelerated by the target normal sheath acceleration (TNSA) mechanism have a wide energy spectrum. The results validate the static electric field model of the soliton and provide a basis for future experiments.
Scattering of the Laguerre-Gaussian beam by a homogeneous spheroid
Ou Jun, Jiang Yue-Song, Shao Yu-Wei, Qu Xiao-Sheng, Hua Hou-Qiang, Wen Dong-Hai
Acta Physica Sinica. 2013, 62 (11): 114201 doi: 10.7498/aps.62.114201
The scattering features of a spheroidal particle illuminated by a Laguerre-Gaussian (LG) beam are studied based on the generalized Lorenz-Mie theory. By using the localized approximation method, the beam shape coefficients are evaluated, and the results agree with the case of on-axis incidence. Calculations of the far-field scattering intensity are performed to study the LG beam scattered by spheroids of different size parameters and eccentricities. The simulation results show that when the particle's size parameter is in the range comparable to the wavelength of the incident light, the magnitude of the scattering intensity increases as the particle's size parameter increases, and decreases as the ratio of the spheroid's major axis to its minor axis increases. Comparisons between illumination by LG beams with different topological charges are made and explained physically: the magnitude of the scattering intensity decreases as the topological charge increases. The theoretical investigation in this paper may provide a more accurate particle model and a reference for applications of LG beams in areas such as particle size measurement, atmospheric laser communication, and atmospheric remote sensing.
Phase sensitive spectral domain optical coherence tomography for latent fingerprint detection
Bao Wen, Ding Zhi-Hua, Wang Chuan, Mei Sheng-Tao
Acta Physica Sinica. 2013, 62 (11): 114202 doi: 10.7498/aps.62.114202
Despite the advances made in areas such as DNA profiling, fingerprints are still considered the best form of personal identification for criminal investigation purposes. A variety of physical, chemical and optical techniques are available for the enhancement and detection of latent fingerprints. However, the frequently-used existing detection methods have disadvantages such as damage to the fingerprints, slow extraction, potential side effects, leaving trails, high contrast demands and so on. A new method based on phase sensitive spectral domain optical coherence tomography (SD-OCT) for latent fingerprint detection is proposed. This method has the advantages of being non-contact, non-destructive, high-speed and highly sensitive. The experimental results demonstrate that using this method on fingerprints of low contrast also leads to satisfactory results, proving that the sensitivity of SD-OCT allows accurate and reliable latent fingerprint recognition.
An ameliorated fast phase retrieval iterative algorithm based on the angular spectrum theory
Liu Hong-Zhan, Ji Yue-Feng
Acta Physica Sinica. 2013, 62 (11): 114203 doi: 10.7498/aps.62.114203
For designing the beam shaper in the optical transmitting antenna of an inter-satellite optical communication system, the essential problem is to determine the phase distribution of the shaper from the input optical field and the ideal output field; its core is phase retrieval. Based on the traditional Gerchberg-Saxton (G-S) iterative algorithm and the angular spectrum propagation theory, an amplitude-gradient-addition iterative algorithm is proposed, and the detailed algorithm process and analysis are given. Compared with the G-S algorithm, the new one constructs an amplitude feedback loop for the light field during the iterative process and searches out the optimal iteration path using the gradient; their joint action accelerates convergence. Numerical simulation shows that the per-iteration error reduction of the new algorithm is 1.7 times that of the G-S algorithm, so its convergence rate is clearly superior to the G-S algorithm; for different random initial phases, the new algorithm iterates effectively, showing strong adaptability and good convergence consistency. The amplitude-gradient-addition iterative algorithm gives a new and effective way of recovering the phase of a complex optical field, and provides technical support for designing all kinds of diffractive optical elements.
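For orientation, a minimal sketch of the classical G-S loop on which such work builds (not the authors' amplitude-gradient variant; the grid size, wavelength, propagation distance and beam profiles below are illustrative assumptions), using FFT-based angular-spectrum propagation between the shaper plane and the output plane:

```python
import numpy as np

N, dx, wl, z = 256, 10e-6, 1.55e-6, 0.1  # grid, pitch (m), wavelength (m), distance (m)
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
# Angular-spectrum transfer exponent: exp(2*pi*i*sqrt(1/wl^2 - f^2)*dist).
kz = 2j * np.pi * np.sqrt((1 / wl**2 - FX**2 - FY**2).astype(complex))

def prop(u, dist):
    # Propagate field u over a distance dist (negative = backward).
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(kz * dist))

x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
a_in = np.exp(-(X**2 + Y**2) / (0.3e-3) ** 2)             # input Gaussian amplitude
a_out = np.where(X**2 + Y**2 < (0.5e-3) ** 2, 1.0, 0.0)   # desired flat-top output

u = a_in * np.exp(2j * np.pi * np.random.rand(N, N))      # random initial phase
for _ in range(50):
    v = prop(u, z)
    v = a_out * np.exp(1j * np.angle(v))   # impose the target amplitude
    u = prop(v, -z)
    u = a_in * np.exp(1j * np.angle(u))    # impose the input amplitude

phase_plate = np.angle(u)  # recovered phase profile for the beam shaper
```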
Wigner function of N00N state and quantum interference with N00N state as input
Xu Xue-Xiang, Zhang Ying-Kong, Zhang Hao-Liang, Chen Yuan-Yuan
Acta Physica Sinica. 2013, 62 (11): 114204 doi: 10.7498/aps.62.114204
Using the formula for the Wigner function in the coherent-state representation, we obtain the analytical expression for the Wigner function of the N00N state. Based on the phase space method, we study quantum interference with the N00N state as input. We derive the analytical expression of the conditional probability in terms of the input parameter N and the phase parameter φ and analyze it numerically. It is shown that when φ is 0 or π, the output is just the N00N state. It is also shown that for the 2002 state as input, the output must be the 2002 state, independent of the phase parameters. Moreover, as the number of input photons N increases, the phase probability distributions retain their one, two, three and four peaks and become narrower. These results can offer a theoretical reference for experiments.
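As a reading aid (standard definition, not quoted from the paper), the N00N state is

\[
|\psi_{N00N}\rangle=\frac{1}{\sqrt{2}}\big(|N\rangle_a|0\rangle_b+|0\rangle_a|N\rangle_b\big),
\]

which acquires a factor \(e^{iN\varphi}\) under a phase shift \(\varphi\) in one mode; this N-fold phase sensitivity is what narrows the phase probability distributions as N increases.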
Spin coherent-state transformation and analytical ground-state solutions based on the variational method for spin-boson models
Yang Xiao-Yong, Xue Hai-Bin, Liang Jiu-Qing
Acta Physica Sinica. 2013, 62 (11): 114205 doi: 10.7498/aps.62.114205
We present a variational method for the ground-state solutions of spin-boson models by means of the spin coherent-state transformation. For the Jaynes-Cummings (J-C) models with and without the rotating-wave approximation, the ground-state energies obtained by this method are in perfect agreement with the results of numerical diagonalization over the whole range of the coupling between the light field and the atom. The present variational method can be used directly to solve for the ground-state energies of Dicke models with arbitrary atom numbers and to further study the quantum phase transition, whereas the variational method based on the Holstein-Primakoff transformation is in principle only valid in the thermodynamic limit, with the atom number tending to infinity.
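For reference, the standard single-atom forms of the model with and without the rotating-wave approximation (ħ = 1; symbols follow common usage, not necessarily the authors' notation):

\[
H_{\mathrm{Rabi}}=\omega a^{\dagger}a+\frac{\Delta}{2}\sigma_{z}+g\,\sigma_{x}\,(a^{\dagger}+a),\qquad
H_{\mathrm{JC}}=\omega a^{\dagger}a+\frac{\Delta}{2}\sigma_{z}+g\,(a^{\dagger}\sigma_{-}+a\,\sigma_{+}).
\]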
The modeling of an end-pumped Yb3+:YVO4 quasi-three-level laser
Xia Zhong-Chao, Yang Fu-Gui, Qiao Liang
Acta Physica Sinica. 2013, 62 (11): 114206 doi: 10.7498/aps.62.114206
In a Yb3+ laser, the two-level energy structure is close to the quasi-three-level model but differs from that of Nd and Tm lasers, so it is necessary to investigate quasi-three-level modeling applicable to the Yb3+ laser. Based on the energy level structure, cavity gain and loss as well as the population distribution, we present such a model. Introducing an effective cavity length factor, the laser intensity is calculated and the threshold is obtained. Comparison with experiments indicates that the effective cavity length ratio changes the fractional population function and the loss, which in turn influence the threshold and output. Applying the model to the end-pumped Yb3+:YVO4 laser, we obtain a threshold of 1.1 W for L = 1 mm and T = 1%, and 3.9 W for L = 2 mm and T = 10%.
Fabrication of cylindrical opals and inverse opals and their optical properties
Chen Wei, Wang Ming, Ni Hai-Bin
Acta Physica Sinica. 2013, 62 (11): 114207 doi: 10.7498/aps.62.114207
Hollow and solid cylindrical opals and inverse opals have been made by the self-assembly method in a capillary. The mechanism and assembly process of monodispersed microspheres self-assembling in a capillary have been investigated. By the vertical self-assembly method, hollow cylindrical polystyrene opals and silica inverse opals of different radii have been made in capillaries, whereas cylindrical solid opals and inverse opals have been prepared under the combined action of gravity sedimentation, evaporation-induced micro-flow, liquid surface tension and capillary tension. The growth process of solid photonic crystals in capillaries has been described and discussed. By scanning electron microscopy we characterize the internal structure of the samples, and with a spectrometer we measure the reflection spectra of the films. Results show that the substrate curvature radius and the microsphere size are the main factors affecting the quality of hollow cylindrical opal and inverse opal films, while the microsphere size influences the internal structure of solid cylindrical opals and inverse opals.
Design of an evanescent-coupled GeSi electro-absorption modulator based on Franz-Keldysh effect
Li Ya-Ming, Liu Zhi, Xue Chun-Lai, Li Chuan-Bo, Cheng Bu-Wen, Wang Qi-Ming
Acta Physica Sinica. 2013, 62 (11): 114208 doi: 10.7498/aps.62.114208
We present a novel GeSi electro-absorption (EA) modulator design on a silicon-on-insulator platform. The GeSi EA modulator is based on the Franz-Keldysh (FK) effect. The light is evanescently coupled into the GeSi absorption layer from the rib Si waveguide. A content of 1.19% Si in the GeSi absorption layer is chosen for C-band (1528–1560 nm) operation. Simulation shows a high 3 dB bandwidth of ~64 GHz and an extinction ratio of 8.8 dB; in particular, the insertion loss is as low as 2.7 dB.
A combined scheme of polarization mode dispersion compensation and polarization de-multiplexing in a polarization division multiplexing system with direct detection
Lin Jia-Chuan, Xi Li-Xia, Zhang Xia, Tian Feng, Liang Xiao-Chen, Zhang Xiao-Guang
Acta Physica Sinica. 2013, 62 (11): 114209 doi: 10.7498/aps.62.114209
A model of coherent crosstalk induced by polarization mode dispersion (PMD) and state of polarization (SOP) variation is established for a polarization division multiplexing system. The behavior of the radio frequency (RF) power of one channel in the presence of PMD is investigated. A combined scheme of PMD compensation and polarization de-multiplexing in the optical domain is proposed, based on monitoring the RF-power feedback signal. A modified particle swarm optimization algorithm is used for adaptive polarization control. The validity of the proposed scheme is demonstrated in a 112 Gb/s PDM-DQPSK simulation system. Results show that the PMD tolerance of the transmission system is increased by 20 ps at a 1 dB OSNR margin, and channel separation is accomplished as well.
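A minimal sketch of the kind of feedback loop involved: a generic particle swarm optimizer that drives candidate polarization-controller settings toward maximal RF-power feedback. This is not the authors' modified PSO; the function names, bounds and hyperparameters are illustrative assumptions.

```python
import numpy as np

def pso_maximize(feedback, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
    """Generic PSO: evolve candidate controller settings toward maximal feedback."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))  # candidate controller angles
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([feedback(p) for p in x])
    g = pbest[np.argmax(pval)].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([feedback(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmax(pval)].copy()
    return g
```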
Nonlinear forced oscillations of gaseous bubbles in elastic microtubules
Wang Cheng-Hui, Cheng Jian-Chun
Acta Physica Sinica. 2013, 62 (11): 114301 doi: 10.7498/aps.62.114301
The wall of an elastic microtubule can be described as a membrane-type elastic structure. An oscillating system driven by ultrasound consists of the liquid columns, a bubble and the elastic wall of the tube. The nonlinear properties of this system are explored. Based on the successive approximation method, the nonlinear resonance frequencies, the amplitude responses of the fundamental and third-harmonic oscillations to the driving acoustic wave, and the mechanism of resonance response to a driving wave whose frequency is lower than the resonance frequency are analyzed theoretically. The nonlinear system oscillates in two directions, the axial and radial directions of the bubble in the microtubule, and the numerical results show that the two resonance responses cannot be present simultaneously. It is found that the amplitudes of the fundamental and third-harmonic oscillations are multivalued, which may lead to unstable response. The third-harmonic oscillation is stronger in the lower-frequency region.
Effect of wall friction on subharmonic bifurcations of impact in vertically vibrated granular beds
Han Hong, Jiang Ze-Hui, Li Xiao-Ran, Lü Jing, Zhang Rui, Ren Jie-Ji
Acta Physica Sinica. 2013, 62 (11): 114501 doi: 10.7498/aps.62.114501
Granular materials consist of a large number of discrete solid particles. When subjected to external vibrations, they exhibit various intricate dynamical behaviors, which usually depend in a complicated way on many physical factors, such as air drag, friction from the container wall and so forth. In this work, vertical vibrations are applied to a bed of stainless-steel spheres contained in a glass tube, and the subharmonic bifurcations of the impact of the particles on the container bottom are investigated. To eliminate the effects of air drag, we evacuate the container or perforate the container bottom to make it quite permeable to air. Experiments performed in such containers reveal that the impact bifurcations are controlled solely by the normalized vibration acceleration, and are independent of the particle size, the filling height of the particles, and the frequency of the forced vibration. The sliding friction from the container wall is treated as a constant force directed opposite to the velocity relative to the container wall. By adding this damping term to the completely inelastic bouncing ball model, the experimental results are explained. Simulations of the averaged experimental bifurcation points indicate that the magnitude of the wall friction is about 10% of the total weight of the particles.
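A minimal sketch of the completely inelastic bouncing-ball model with a constant friction force (a fraction mu of the bed weight) opposing the motion relative to the container, as described above. The parameter values are illustrative, not the experimental ones.

```python
import numpy as np

g, f0, gamma, mu = 9.81, 25.0, 3.0, 0.1   # gravity, frequency, Gamma, friction ratio
omega = 2 * np.pi * f0
A = gamma * g / omega**2                  # vibration amplitude

def plate(t):
    """Plate position and velocity at time t."""
    return A * np.sin(omega * t), A * omega * np.cos(omega * t)

dt, t, flying, takeoffs = 1e-5, 0.0, False, []
z, v = plate(0.0)
for _ in range(int(1.0 / dt)):            # simulate 1 s of vibration
    t += dt
    zp, vp = plate(t)
    if flying:
        a = -g - mu * g * np.sign(v - vp)      # gravity plus constant wall friction
        v += a * dt
        z += v * dt
        if z <= zp and v < vp:                 # completely inelastic landing
            z, v, flying = zp, vp, False
    else:
        z, v = zp, vp
        if -A * omega**2 * np.sin(omega * t) < -g:  # plate accelerates down faster than g
            flying = True
            takeoffs.append(t)
print(f"{len(takeoffs)} takeoffs in 1 s")
```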
Numerical investigation on the characteristics of the mushroom-like vortex structure generated by a submerged laminar round jet
Chen Yun-Xiang, Chen Ke, You Yun-Xiang, Hu Tian-Qun
Acta Physica Sinica. 2013, 62 (11): 114701 doi: 10.7498/aps.62.114701
A numerical investigation of the evolution mechanism and characteristics of a submerged laminar round jet in a viscous homogeneous fluid is conducted using a computational fluid dynamics method based on the incompressible Navier-Stokes equations. Three non-dimensional parameters of the mushroom-like vortex structure, including the length of the jet L*, the radius of the mushroom-like vortex R* and the length of the vortex circulation d*, are introduced, and the variation of these parameters with the non-dimensional time t* is analyzed quantitatively. Results show that there exist three distinct stages in the formation and evolution of the mushroom-like vortex structure: starting, developing and decaying. In the starting stage, L* and d* increase linearly with t*, while R* remains approximately constant; in the developing stage, considerable self-similarity is confirmed, and L*, R* and d* display the same proportionality to t*1/2 regardless of variations of the Reynolds number and injection duration; in the decaying stage, L* and R* are approximately proportional to t*1/5, while d* nearly levels off to a constant. Moreover, the velocity characteristics at the secondary backflow point and at the momentum and geometry centers, the distribution features of the vertical vorticity, as well as the vorticity-stream function relationship are analyzed for the mushroom-like vortex structure.
The motion and acoustic radiation characteristics for cavitation in the compressible vortex fluid
Ye Xi, Yao Xiong-Liang, Zhang A-Man, Pang Fu-Zhen
Acta Physica Sinica. 2013, 62 (11): 114702 doi: 10.7498/aps.62.114702
Based on compressible fluid theory, the boundary integral equation is used to solve for the motion of cavitation bubbles in vortex flow within different surface pressure models. The time-domain sound pressure induced by cavitation in the vortex field is obtained by the moving-surface Kirchhoff formulation. With surface discretization and coordinate transformation, the cavitation surfaces are treated as moving deformable boundaries and as the acoustic sources directly. The influence of vortex field parameters on the motion and radiation of cavitation is analyzed. Results show that when compressibility is taken into account, the amplitude of the cavity's pulsation as well as the sound pressure is decreased. In the vortex fluid the cavity is stretched, necked and split, and jets may be generated in the sub-bubbles. When the ambient pressure in the fluid field is reduced, the maximum radius and the length before splitting of the cavity are enlarged, and the number of sub-bubbles increases. The directivity of the cavitation sound radiation is weak, and the splitting of the cavity generates a large peak in the sound pressure. With an increase in vortex flux or a decrease in the cavitation number, the periods of the cavity oscillation and of its radiated sound pressure are lengthened, and the peak of the sound pressure is retarded and reduced. The results in this paper can serve as reference data for research on the motion and sound radiation characteristics of cavitation in vortex flow.
Research on SRAM functional failure mode induced by total ionizing dose irradiation
Zheng Qi-Wen, Yu Xue-Feng, Cui Jiang-Wei, Guo Qi, Ren Di-Yuan, Cong Zhong-Chao
Acta Physica Sinica. 2013, 62 (11): 116101 doi: 10.7498/aps.62.116101
In the present paper, function tests with different test patterns were used to investigate the functional failure of static random access memory (SRAM) induced by the total dose effect. By comparing the function test results for different test patterns and single error bits, it is shown that the failure mode of the device is a data retention fault, that different storage cells have different data retention times, and that the faulty module of the device is the storage cell. We discuss the reasons for these phenomena in detail using a simple circuit model of the storage cell, and also analyze their influence on test methods for evaluating the total dose radiation damage of SRAM.
First-principles study on elastic properties of hexagonal phase ErAx (A=H, He)
Fan Kai-Min, Yang Li, Sun Qing-Qiang, Dai Yun-Ya, Peng Shu-Ming, Long Xing-Gui, Zhou Xiao-Song, Zu Xiao-Tao
Acta Physica Sinica. 2013, 62 (11): 116201 doi: 10.7498/aps.62.116201
The elastic properties of hexagonal phase ErAx (A = H, He) have been calculated by the first-principles method, where x = 0, 0.0313, 0.125, 0.25. The effects of different concentrations of hydrogen and helium on the elastic properties of the ErAx systems have been investigated in detail. Results show that the elastic constants, Young's modulus, bulk modulus and shear modulus of the ErHx systems mainly increase with increasing hydrogen concentration, whereas those of the ErHex systems almost all decrease with increasing helium concentration. We have investigated the changes in the charge densities of Er atoms produced by the A atoms. It is found that the mechanism behind the change of the elastic properties of hexagonal ErHx with increasing hydrogen content differs from that of ErHex with increasing helium content.
Experimental diagnostic of melting fragments under explosive loading
Chen Yong-Tao, Ren Guo-Wu, Tang Tie-Gang, Hu Hai-Bo
Acta Physica Sinica. 2013, 62 (11): 116202 doi: 10.7498/aps.62.116202
We have conducted experiments to study the molten fragments from explosively shocked molten Pb targets. Based on the traditional Asay-window technique, an Asay-F-window was designed that is suitable for investigating high-density molten fragments of a metal sample. The areal mass and volume density of molten fragments from the Pb target are presented and compared with those of micro-jetting and solid spallation. The results may contribute to the understanding of the physical mechanism and the construction of dynamic models for molten fragments. Additionally, the Asay-F-window technique is an effective supplement to proton radiography for studying the dynamic fragmentation of molten metals.
Experimental study of friction effect under impact loading
Jiang Guo-Ping, Hao Hong, Zeng Chun-Hang, Hao Yi-Fei, Wu Ru-Jun, Liu Ji-Chao
Acta Physica Sinica. 2013, 62 (11): 116203 doi: 10.7498/aps.62.116203
When testing the impact dynamics of concrete, a variety of kinetic effects usually appear, such as the axial and lateral inertial confinement effects, stress wave propagation effects and friction effects. Some of these are intrinsic to the material itself, such as the size effect, while others are experimental errors; but all of these dynamic effects may enter the final test results, introducing unnecessary errors or even wrong values into the experiments. To quantify the mechanism of the friction effect, we have designed specimens of three different sizes for SHPB tests. Quantitative values of the friction effect are obtained, and the dynamic increase factor (DIF) is corrected, which provides a basis for concrete impact engineering design.
Effect of thickness on the properties of Cu(Inx,Ga1-x)Se2 back contact Mo thin films prepared by DC sputtering
Tian Jing, Yang Xing, Liu Shang-Jun, Lian Xiao-Juan, Chen Jin-Wei, Wang Rui-Lin
Acta Physica Sinica. 2013, 62 (11): 116801 doi: 10.7498/aps.62.116801
In this study, Mo thin films used in Cu(InxGa1-x)Se2 (CIGS) thin film solar cells as the back contact were deposited on soda-lime glass substrates via DC magnetron sputtering under fixed conditions. A series of Mo thin films of various thicknesses was obtained with different sputtering deposition times. The microstructure, electrical resistivity and mechanical strain properties of the Mo films, which can be varied by controlling the thickness, were investigated by XRD, SEM, the four-probe technique and the Scotch tape test. Results show that the film thickness increases linearly with the sputtering time. With increasing thickness, the crystal growth changes from (110) preferred orientation to (211) preferred orientation. The sheet resistance sharply decreases to 2 Ω/□ with the increase in (110) peak height, and the resistivity linearly decreases to 0.96×10-4 Ω·cm owing to the degree of (110) preferred orientation. The film surface has a porous (fish-scale-like) grain morphology with intergranular voids. All the films are in a tensile state, and the inner strain decreases with increasing thickness.
Effects of parameter modifications on phase transition properties of ferroelectric thin films
Lu Zhao-Xin
Acta Physica Sinica. 2013, 62 (11): 116802 doi: 10.7498/aps.62.116802
Within the framework of the effective-field theory with correlations, the phase transition properties of ferroelectric thin films with different symmetrical surfaces, described by the spin-1/2 transverse-field Ising model, are studied systematically by the differential operator technique. From the coupled equations for the layer polarization averages, general analytical equations for the phase diagrams of multiple-surface ferroelectric thin films with different surface layers are derived. Effects of various parameter modifications on the crossover values from the FPD (ferroelectric-dominant phase diagram) to the PPD (paraelectric-dominant phase diagram) and on the phase transition regions in parameter space are then discussed in detail. In comparison with the mean-field approximation, the results indicate that the effective-field theory with correlations may suppress the ferroelectricity of the ferroelectric thin films somewhat more strongly than the mean-field approximation.
The site preference of refractory element W in NiAl dislocation core and its effects on bond characters
Chen Li-Qun, Yu Tao, Peng Xiao-Fang, Liu Jian
Acta Physica Sinica. 2013, 62 (11): 117101 doi: 10.7498/aps.62.117101
The site occupancy of the refractory element W in the <100> (010) edge dislocations of the NiAl intermetallic compound and its effect on NiAl properties are studied by the first-principles discrete variational method. The energetic parameters (binding energy, impurity segregation energy and interatomic energy), the density of states and the charge density are calculated for the clean dislocation system and the doped dislocation system. The calculated binding energies and impurity segregation energies suggest that W exhibits a strong Al-site preference. The interactions between the refractory element W and the neighbouring host atoms are strengthened, due mainly to the hybridization of the 5d orbital of the impurity atom with the 3d orbitals of the host Ni atoms (and the 3p orbital of the host Al atom). Meanwhile, charge accumulations appear between the impurity atom and the neighbouring host atoms in the dislocation core, indicating that strong bonds are formed between them. The refractory element W greatly affects the energy and electronic structure of the NiAl intermetallic compound, and in turn influences the motion of dislocations and the properties of the NiAl compound.
First-principles study on the properties of B doped at an interstitial site of the Cu Σ5 grain boundary
Meng Fan-Shun, Zhao Xing, Li Jiu-Hui
Acta Physica Sinica. 2013, 62 (11): 117102 doi: 10.7498/aps.62.117102
Uniaxial tensile and compression tests of the Cu Σ5 grain boundary (GB) with and without segregated interstitial boron have been performed using the first-principles method based on density functional theory. Results show that boron enhances the cohesion of the Cu Σ5 GB and significantly improves the mechanical properties of Cu. The clean boundary has a lower valence electron density than the perfect lattice and will be the point at which fracture starts under sufficiently high tensile stress. The Cu Σ5 GB with segregated boron has strengthened cohesion across the boundary because of the strong B-Cu bonds. Charge accumulation in the Cu-B bonds slightly decreases the strength of the neighboring Cu-Cu bonds, which become the weak points for fracture to initiate. The ultimate tensile stress is enlarged by the addition of boron. No significant effect of B doping occurs within 20% compression strain.
Study on proton irradiation induced defects in GaN thick film
Zhang Ming-Lan, Yang Rui-Xia, Li Zhuo-Xin, Cao Xing-Zhong, Wang Bao-Yi, Wang Xiao-Hui
Acta Physica Sinica. 2013, 62 (11): 117103 doi: 10.7498/aps.62.117103
Proton-irradiation-induced defects seriously threaten the stable performance of GaN-based devices in harsh environments such as outer space. It is therefore urgent to understand the behavior of proton-irradiation-induced defects in order to improve the radiation tolerance of GaN-based devices. Positron annihilation spectroscopy (PAS) has been used to study proton-induced defects in GaN grown by HVPE. The results show that VGa is the main defect and that no (VGaVN) or (VGaVN)2 complexes are formed in 5 MeV proton-irradiated GaN. Photoluminescence (PL) measurements are carried out at 10 K. After irradiation, the band edge shows a blue shift, but the donor-acceptor pair (DAP) emission band and its LO-phonon replicas remain at their original positions. The intensity of the yellow luminescence (YL) band is decreased, which means that the YL band does not originate from VGa. The increased FWHM of the GaN (0002) peak in proton-irradiated GaN indicates a degradation of crystal quality.
A first-principles study on the interfacial properties of Cu/CeO2(110)
Lu Zhan-Sheng, Li Sha-Sha, Chen Chen, Yang Zong-Xian
Acta Physica Sinica. 2013, 62 (11): 117301 doi: 10.7498/aps.62.117301
Cu-CeO2 systems are widely used in solid oxide fuel cells and the water gas shift reaction because of their special catalytic ability. The interfacial properties of Cu/CeO2(110) with the adsorption of a Cu atom and Cu clusters are investigated by first-principles calculations based on density functional theory. It is found that: 1) the single Cu adatom prefers to be adsorbed at the oxygen bridge site; 2) the adsorbed tetrahedral Cu4 cluster is the most stable cluster configuration on the CeO2(110) surface; 3) the metal-induced gap states are mainly derived from the adsorbed Cu (cluster), its neighboring oxygen and the reduced cerium ion(s), indicating that the activity of the CeO2(110) surface is improved by copper adsorption; 4) the adsorbed Cu adatom and Cu4 cluster are oxidized to Cuδ+ and Cu4δ+ by their neighboring Ce ion(s) with the formation of Ce3+ ion(s); the reaction can be summarized as Cux/Ce4+ → Cuxδ+/Ce3+; 5) the adsorption of small clusters introduces more Ce3+ ions than a single Cu atom does, indicating that more Cuδ+-Ce3+ catalytically active centers are formed. The current study on Cu/CeO2(110), together with our previous results on Cu/CeO2(111), presents a good understanding of the synergy between Cu and ceria, and reveals the improvement of the activity of ceria by Cu adsorption.
Low-temperature growth of AlN thin films by plasma-enhanced atomic layer deposition
Feng Jia-Heng, Tang Li-Dan, Liu Bang-Wu, Xia Yang, Wang Bing
Acta Physica Sinica. 2013, 62 (11): 117302 doi: 10.7498/aps.62.117302
Crystalline AlN thin films were fabricated on Si(100) substrates by plasma-enhanced atomic layer deposition. The growth rate was determined by spectroscopic ellipsometry, and the surface morphology, crystal structure and composition were characterized by atomic force microscopy, X-ray diffraction, high-resolution transmission electron microscopy and X-ray photoelectron spectroscopy. Results show that the lowest temperature for deposition of crystalline AlN thin films is 200 ℃, and that the film coverage on the substrate surface is continuous and homogeneous. The film, with a homogeneous composition distribution, is polycrystalline with a hexagonal wurtzite structure. High-resolution Al 2p and N 1s spectra confirm the presence of AlN, with peaks located at 74.1 eV and 397.0 eV, respectively.
Effect of co-implantation of nitrogen and fluorine on the fixed positive charge density of the buried oxide layer in SIMOX SOI materials
Zhang Bai-Qiang, Zheng Zhong-Shan, Yu Fang, Ning Jin, Tang Hai-Ma, Yang Zhi-An
Acta Physica Sinica. 2013, 62 (11): 117303 doi: 10.7498/aps.62.117303
Nitrogen ions implanted into the buried oxide layer can increase the total dose radiation hardness of silicon-on-insulator (SOI) materials. However, the marked increase in positive charge density in the buried layer at high nitrogen implantation doses is a negative side effect of this technology. In order to suppress this increase, co-implantation of nitrogen and fluorine is used, implanting fluorine into the nitrogen-implanted buried layer. The high-frequency capacitance-voltage (C-V) technique is used to characterize the positive charge density in the buried layer. Results show that, in most cases, co-implantation of nitrogen and fluorine can significantly reduce the positive charge density in the nitrogen-implanted buried layer. At the same time, it is also found that in particular cases fluorine implantation can further increase the positive charge density in the nitrogen-implanted buried layer. It is proposed that the decrease in the positive charge density in the fluorine- and nitrogen-implanted buried layer is due to the introduction of electron traps into the buried layer by fluorine implantation.
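For context, the textbook MOS relation commonly used to extract the fixed oxide charge density from high-frequency C-V curves (a standard formula, not quoted from the paper) is

\[
N_f=-\frac{C_{ox}\,\Delta V_{FB}}{q},
\]

where \(\Delta V_{FB}\) is the flatband-voltage shift, \(C_{ox}\) the oxide capacitance per unit area and \(q\) the elementary charge.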
The spectrum-control of dual-wavelength LED with quantum dots planted in quantum wells
Zhang Pan-Jun, Sun Hui-Qing, Guo Zhi-You, Wang Du-Yang, Xie Xiao-Yu, Cai Jin-Xin, Zheng Huan, Xie Nan, Yang Bin
Acta Physica Sinica. 2013, 62 (11): 117304 doi: 10.7498/aps.62.117304
A theoretical simulation of the electrical and optical characteristics of GaN-based dual-wavelength light-emitting diodes (LEDs) with high-In-content quantum dots (QDs) planted in the quantum wells is conducted with the APSYS software. Adjustment and comparison of device structures show that blue-green dual-wavelength LEDs have a broader radiation spectrum and a higher color rendering index when QDs are planted in the green quantum wells. QDs bind carriers strongly, and carriers at the QDs have shorter lifetimes than in the wetting layers, so carrier recombination preferentially occurs at the QDs. It is shown that the distribution of carriers can easily be controlled by adjusting the spacer layer thickness and doping concentration, and thereby the radiation rates of the two active layers of the dual-wavelength LED. Therefore, spectrum control of the dual-wavelength LED with QDs planted in QWs can be realized by adjusting the concentration of quantum dots and the thickness and doping concentration of the spacer layer. This article can provide guidance for realizing phosphor-free white LEDs.
Ultralow-voltage in-plane-gate indium-tin-oxide thin-film transistors made of P-doped SiO2 dielectrics
Zhu De-Ming, Men Chuan-Ling, Cao Min, Wu Guo-Dong
Acta Physica Sinica. 2013, 62 (11): 117305 doi: 10.7498/aps.62.117305
A new kind of indium-tin-oxide thin-film transistor with a P-doped SiO2 dielectric in an in-plane-gate structure is fabricated at room temperature. The indium-tin-oxide (ITO) channel and ITO electrodes (gate, source, and drain) can be deposited simultaneously, without precise photolithography and alignment, by using only one nickel shadow mask. The thin-film transistors (TFTs) therefore have advantages such as a simple device process and low cost. Such TFTs exhibit good performance at an ultralow operating voltage of 1 V, a high field-effect mobility of 18.35 cm2/Vs, a small subthreshold swing of 82 mV/decade, and a large on-off ratio of 1.1×106, owing to the huge electric-double-layer (EDL) capacitance (8 μF/cm2) at the interface between the P-doped SiO2 dielectric and the ITO channel. These TFTs are thus very promising for future low-power, portable electronic products and sensors.
First-principles calculation of preferential site occupation of Dy ions in Nd2Fe14B lattice and its effect on local magnetic moments of Fe ions
Hao Hong-Fei, Wang Jing, Sun Feng, Zhang Lan-Ting
Acta Physica Sinica. 2013, 62 (11): 117501 doi: 10.7498/aps.62.117501
The ground-state lattice properties, formation energies and magnetizations of R2Fe14B (R: rare-earth element) were calculated by the first-principles method within the generalized gradient approximation (PAW-GGA). The GGA+U method was applied to treat the local magnetic moments of the 4f shell of the rare-earth elements. Magnetic moments were calculated with and without spin-orbit interaction (SOI). The site occupation of Dy ions in the Nd2Fe14B lattice is studied by partial substitution of Dy for Nd on different lattice sites. The calculated substitution energies indicate that Dy2Fe14B is more stable than Nd2Fe14B and that Dy ions prefer to occupy the 4f sites in the Nd2Fe14B lattice. It is also found that rare-earth ions occupying the 4f sites interact more strongly with Fe ions and thus have a greater impact on the local magnetization of Fe. The interaction between rare-earth ions and Fe ions is positively correlated with their distance.
Calculation and analysis of surface acoustic wave properties of ZnO film on diamond under different excitation conditions
Qian Li-Rong, Yang Bao-He
Acta Physica Sinica. 2013, 62 (11): 117701 doi: 10.7498/aps.62.117701
In the last twenty years, the ZnO/diamond layered structure for surface acoustic wave (SAW) devices has been widely studied and has attracted great attention, owing to its advantages of high acoustic velocity, high electromechanical coupling coefficient and high power durability. In contrast to conventional single-crystal substrates (such as quartz or lithium niobate), the ZnO/diamond layered structure shows dispersive SAW properties, which can be excited in four ways: interdigital transducer (IDT)/ZnO/diamond, IDT/ZnO/shorting metal/diamond, ZnO/IDT/diamond, and shorting metal/ZnO/IDT/diamond. In this paper, a formulation based on the stiffness matrix method for calculating the effective permittivity of the ZnO/diamond layered structure under the four excitation conditions is given first. Using this formulation, the SAW properties of monocrystalline ZnO (002) film on polycrystalline diamond and of polycrystalline ZnO (002) film on polycrystalline diamond are then calculated. Based on the results, the ZnO film thicknesses suitable for designing and fabricating SAW devices are analyzed in detail. Finally, we discuss the role of the diamond film thickness in the ZnO/diamond/Si layered structure in avoiding the influence of the silicon substrate on the SAW properties.
Resonant frequency temperature stability of CaTiO3 based microwave dielectric ceramics
Shen Jie, Zhou Jing, Shi Guo-Qiang, Yang Wen-Cai, Liu Han-Xing, Chen Wen
Acta Physica Sinica. 2013, 62 (11): 117702 doi: 10.7498/aps.62.117702
The determinants of the resonant frequency temperature coefficient (τf) have been analyzed through an approximate treatment of the Clausius-Mossotti equation. It is suggested that the value of τf can be adjusted by changing the relative contributions of ionic and electronic polarization to the dielectric constant. Results of electronic structure calculations and tolerance factor analysis show that B-site substitution of CaTiO3 with (Zn1/3Nb2/3)4+ can turn the value of τf from positive to negative by enhancing the covalency of the BO6 octahedron and increasing the contribution of electronic polarization. Ca[(Zn1/3Nb2/3)xTi(1-x)]O3 dielectric ceramics were prepared by the solid-state reaction method with a niobate precursor. The results of structure analysis and property measurements conform to the theoretical analysis, and a Ca[(Zn1/3Nb2/3)0.7Ti0.3]O3 dielectric ceramic with near-zero τf was obtained.
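The standard relation underlying such analyses (textbook form, not quoted from the paper) links τf to the other two temperature coefficients:

\[
\tau_f=-\left(\alpha_L+\frac{\tau_\varepsilon}{2}\right),
\]

where \(\alpha_L\) is the linear thermal expansion coefficient and \(\tau_\varepsilon\) the temperature coefficient of the dielectric constant.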
Effects of self-reduction of the glass matrix on the broadband near-infrared emission from Bi-doped alkaline earth aluminoborosilicate glasses
Li Yong-Jin, Song Zhi-Guo, Li Chen, Wan Rong-Hua, Qiu Jian-Bei, Yang Zheng-Wen, Yin Zhao-Yi, Wang Xue, Wang Qi, Zhou Da-Cheng, Yang Yong
Acta Physica Sinica. 2013, 62 (11): 117801 doi: 10.7498/aps.62.117801
We report the effects of self-reduction of the glass matrix on the broadband near-infrared (NIR) emission from Bi-doped alkaline earth aluminoborosilicate glasses. Bi2O3-doped, as well as Eu2O3-doped for comparison, 35SiO2-25AlPO4-12.5Al2O3-12.5B2O3-15RO (R = Ca, Sr, Ba) glasses were prepared in air. Results show that the self-reduction process Eu3+→Eu2+ occurs in this glass matrix. Meanwhile, the intensity of the NIR emission peaking at about 1300 nm increases with the radius of the alkaline earth ion, while the intensities of both the NIR emission peaking at about 1100 nm and the red emission from Bi2+ decrease. The origins of the infrared-emitting bismuth centers are then discussed in light of the correlation between the conversion of Bi ions and the size of the alkaline earth ions. The results of this work are helpful for understanding the nature of the Bi NIR emission and may guide the selection of compositions for high-performance Bi-doped glasses.
Study on the photoluminescence properties of InN films
Wang Jian, Xie Zi-Li, Zhang Rong, Zhang Yun, Liu Bin, Chen Peng, Han Ping
Acta Physica Sinica. 2013, 62 (11): 117802 doi: 10.7498/aps.62.117802
The photoluminescence (PL) properties of InN films grown by metal organic chemical vapor deposition (MOCVD) have been investigated. InN has a high background carrier concentration, which places the Fermi level above the conduction band edge. By nonlinear fitting of the PL results together with the energy band relations, we calculated the band gap of the InN film to be 0.67 eV and the carrier concentration to be n = 5.4×1018 cm-3, thereby establishing a connection between the PL results and the carrier concentration of InN films. In addition, we also studied the temperature dependence of the PL peak position and intensity: the PL intensity decreases as the temperature increases, and the peak position shows a red shift instead of an S-shaped variation. This difference may be explained by the very large full width at half maximum of the PL spectra; the carrier concentration and the magnitude of the built-in electric field in the material may also influence this result.
Spectroscopic properties and energy transfer of Ce3+/Eu2+ codoped oxide glasses with high Gd2O3 concentration
Shen Ying-Long, Tang Chun-Mei, Sheng Qiu-Chun, Liu Shuang, Li Wen-Tao, Wang Long-Fei, Chen Dan-Ping
Acta Physica Sinica. 2013, 62 (11): 117803 doi: 10.7498/aps.62.117803
Eu2+/Ce3+ singly doped and co-doped oxide glasses with high Gd2O3 concentration were prepared in a reducing atmosphere using a high-temperature melting method. Excitation and emission spectra indicate that Ce3+ can effectively enhance the luminescence intensity of Eu2+, which exhibits a 2.3-fold increase. The difference between the luminescence lifetimes of Eu2+ with and without Ce3+ doping indicates that the energy transfer efficiency can reach 61.5%; the energy transfer mechanism is investigated further. These results suggest that the co-doping method can significantly improve the luminescence of Eu2+ in oxide glasses with high Gd2O3 concentration.
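The quoted 61.5% is consistent with the standard lifetime-based estimate of transfer efficiency (textbook formula, not the authors' derivation):

\[
\eta_T=1-\frac{\tau}{\tau_0},
\]

where \(\tau\) and \(\tau_0\) are the luminescence lifetimes of the emitting ion with and without the codopant.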
Effects of annealing temperature on the microstructure and p-type conduction of B-doped nanocrystalline diamond films
Gu Shan-Shan, Hu Xiao-Jun, Huang Kai
Acta Physica Sinica. 2013, 62 (11): 118101 doi: 10.7498/aps.62.118101
Annealing at different temperatures was performed on boron-doped nanocrystalline diamond (BDND) films synthesized by hot filament chemical vapor deposition (HFCVD), and the effects of annealing temperature on the microstructural and electrical properties of the BDND films were systematically investigated. The Hall-effect results show that smaller resistivity and Hall mobility values as well as a higher carrier concentration are found in the 5000 ppm boron-doped nanocrystalline diamond film (NHB) compared with the 500 ppm boron-doped film (NLB). After 1000 ℃ annealing, the Hall mobilities of the NLB and NHB samples were 53.3 and 39.3 cm2·V-1·s-1, respectively, indicating that annealing increases the Hall mobility and decreases the resistivity of the films. HRTEM, UV and visible Raman spectroscopic results show that the diamond phase content in NLB samples is larger than that in NHB samples, because a higher B-doping concentration results in greater lattice distortion. After 1000 ℃ annealing, the amount of nano-diamond phase in both NLB and NHB samples increases, indicating that part of the amorphous carbon transforms into the diamond phase. This provides an opportunity for boron atoms located at the grain boundaries to diffuse into the nano-diamond grains, which increases the boron concentration in the grains and improves the conductivity of the nanocrystalline diamond grains. It is observed that the 1000 ℃ annealing treatment benefits the lattice perfection of BDND films and reduces the internal stress caused by doping, so that the electrical properties of the BDND films are improved. Visible Raman spectra show that the trans-polyacetylene (TPA) peak (1140 cm-1) disappears after 1000 ℃ annealing, which also improves the electrical properties. It is suggested that a larger diamond phase content, better lattice perfection and a smaller TPA amount in the annealed BDND samples favor improved electrical properties of the films.
Phase field crystal simulation of microscopic deformation mechanism of reverse Hall-Petch effect in nanocrystalline materials
Zhao Yu-Long, Chen Zheng, Long Jian, Yang Tao
Acta Physica Sinica. 2013, 62 (11): 118102 doi: 10.7498/aps.62.118102
Nanocrystalline (NC) materials with average grain sizes ranging from 11.61 to 31.32 nm were obtained using the phase field crystal (PFC) model, and the microscopic deformation mechanism underlying the strengthening law for uniaxial tensile deformation was discussed. Simulation results show that grain rotation and grain boundary (GB) migration are mainly responsible for the microscopic deformation. Since a small grain size favors grain rotation, it reduces the yield strength, and the NC materials show a reverse Hall-Petch effect. When the grain size is very small and the strain exceeds the yield point by about 4%, dislocation activity begins to occur. Mainly through changes of GB structure (disordering of triple GB junctions, which then promotes GB migration), the GB makes a finite contribution to the deformation. With increasing grain size, grain rotation becomes difficult, and grain serration and the emission of dislocations are observed.
Influence of applied magnetic field on properties of silicon nitride thin film with light trapping structure prepared by R.F. magnetron sputtering
Jiang Qiang, Mao Xiu-Juan, Zhou Xi-Ying, Chang Wen-Long, Shao Jia-Jia, Chen Ming
Acta Physica Sinica. 2013, 62 (11): 118103 doi: 10.7498/aps.62.118103
Permanent magnets providing different magnetic intensities were introduced between the substrate and the target in order to study the influence of an applied magnetic field on the properties of silicon nitride thin films with a light trapping structure prepared by R.F. magnetron sputtering. The microstructure, surface morphology and optical properties of the films were characterized by X-ray diffraction, atomic force microscopy (AFM) and ultraviolet spectrophotometry, respectively. Results show that the silicon nitride thin films remain amorphous even when a magnetic field is applied; however, when the central magnetic field is 1.5 T, the surface morphology of the films changes dramatically to a special peaked structure, i.e. pyramid-like protuberances perpendicular to the basal surface. Meanwhile, in the visible and near-infrared range, the average transmittance of this sample is the highest, more than 90%, nearly twice that of the sample without an applied magnetic field; the light trapping effect is thus the greatest.
Electrical, optical properties and structure characterization of In-doped copper nitride thin film
Du Yun, Lu Nian-Peng, Yang Hu, Ye Man-Ping, Li Chao-Rong
Acta Physica Sinica. 2013, 62 (11): 118104 doi: 10.7498/aps.62.118104
Thin films of the ternary compound CuxInyN were grown on Si(100) wafers by RF magnetron co-sputtering at low temperature and low power in a pure N2 environment. The effect of In incorporation on the structure and physical properties of copper nitride is evident, and was evaluated by characterizing the chemical bonding state, structure, and electrical and optical properties of the films. In XPS, shifts of the binding energies, Auger peaks and Auger parameters all reflect changes in the chemical environment. For samples with In content below 8.2 at.%, the increase of the Cu 2p3/2 and In 3d5/2 binding energies and the decrease of the N 1s binding energy can be mainly attributed to Cu-In-N bond formation. For the CuxInyN sample with 4.6 at.% In, XRD and TEM consistently confirm that indium atoms are incorporated at the body center of the anti-ReO3 Cu3N structure. The strong (001) preferred orientation of the copper nitride crystalline phase remains predominant in the films until the In content reaches 10.8 at.%, where the texture changes to (111) orientation. The R-T curves of the CuxInyN films change from typical exponential to linear behavior with increasing In content. A nearly constant electrical resistivity over a large temperature range, with a small TCR of -6/10000, was observed in the CuxInyN sample with 47.9 at.% In. Moreover, owing to the Burstein-Moss effect, the optical band gap was found to increase from 1.02 to 2.51 eV as the In content increased from 0% to 26.53%, accompanied by a band-gap transition from direct to indirect.
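For context, the Burstein-Moss shift for a degenerate parabolic band has the standard form (not quoted from the paper):

\[
\Delta E_{BM}=\frac{\hbar^{2}}{2m^{*}}\,(3\pi^{2}n)^{2/3},
\]

where \(n\) is the carrier concentration and \(m^{*}\) the (reduced) effective mass, so band filling widens the apparent optical gap as doping increases.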
Effects of system size on population behavior
Yi Qi-Zhi, Du Yan, Zhou Tian-Shou
Acta Physica Sinica. 2013, 62 (11): 118701 doi: 10.7498/aps.62.118701
Many factors influence the population behavior of cells. Besides the mode of cellular communication and the cellular environment, which have been considered in previous studies, the number of cells (or system size), little considered before, is also an important factor. This article investigates the effects of system size on clustering behavior in a synthetic multicellular system, where the individual oscillators are an integration of repressilator and hysteresis-based oscillators coupled through a quorum-sensing mechanism. By bifurcation analysis and numerical simulation, we find that increasing the cell number not only changes the size of the stability interval of steady-state clusters and induces new clustering behaviors, but also enlarges the attraction basin of steady-state clusters, implying that cell differentiation may be closely related to system size. In addition, such an increase greatly extends the kinds and coexisting modes of steady-state and oscillatory clusters, which would provide a good basis for the adaptability of organisms to their environment. Our results extend the understanding of the dynamics of coupled systems and may also serve as a foundation for understanding multicellular phenomena.
A dynamic light scattering study of counter-ion condensation on DNA
Lin Yu, Yang Guang-Can, Wang Yan-Wei
Acta Physica Sinica. 2013, 62 (11): 118702 doi: 10.7498/aps.62.118702
The interaction between DNA and counter-ions of different valences, including sodium (Na+), magnesium (Mg2+), hexammine cobalt(III) ([Co(NH3)6]3+), and spermine ([C10N4H30]4+), is investigated by dynamic light scattering. It is found that the ratio of the electrophoretic mobilities of DNA in buffers containing Na+ and Mg2+ is about 2:1 when the counter-ion concentration c ≥ 5 mM, while the ratio of DNA mobilities in buffers containing Na+ and [Co(NH3)6]3+ is about 4.5:1. When c < 5 mM, the ratio grows with increasing counter-ion concentration. DNA charge reversal is observed in the case of the quadrivalent counter-ion. The experimental results are in good agreement with the Manning counter-ion condensation theory for monovalent and bivalent counter-ions. However, when the valence of the counter-ions is three, the experimental data deviate significantly from the theory, and for quadrivalent counter-ions the condensation theory, which is based on a mean field, fails. Furthermore, atomic force microscopy shows that DNA molecules condense into compact structures when the counter-ion valence is three or greater. Thus, the conformation of the polyelectrolyte in free solution and ion correlations play an important role in the migration of the polyelectrolyte.
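As a reading aid, Manning theory in its standard form (not the authors' notation) predicts condensation when the charge-density parameter exceeds 1/Z:

\[
\xi=\frac{l_B}{b},\qquad \theta=1-\frac{1}{Z\xi},
\]

where \(l_B\) is the Bjerrum length, \(b\) the axial charge spacing of DNA, \(Z\) the counter-ion valence and \(\theta\) the condensed counter-ion fraction.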
MCG source reconstruction based on greedy sparse method
Bing Lu, Wang Wei-Yuan, Wang Yong-Liang, Jiang Shi-Qin
Acta Physica Sinica. 2013, 62 (11): 118703 doi: 10.7498/aps.62.118703
Current source reconstruction, i.e., reconstructing the current dipole distribution from array measurements of the cardiac magnetic field on the body surface, is a method for non-invasive study of the heart's electrical activity. In this paper, the relationship between the measured magnetic signals and the current dipole distribution is described by a linear equation, and a sparse solution of the current source reconstruction is obtained using a fast greedy method. This method significantly decreases the computational complexity of the orthogonal matching pursuit (OMP) algorithm by means of approximate orthogonalisation and an improved vector selection strategy per iteration, so that the sources with large dipole strength can be found quickly and with high accuracy. A set of magnetocardiogram (MCG) data from a normal subject is used to demonstrate the effectiveness of the method: the trajectory of the reconstructed dominant sources, whose strengths exceed 65% of the maximum, is almost consistent with the conduction process during depolarization and repolarization. The average goodness of fit (GOF) between the measured MCG and the magnetic field maps generated by the reconstructed current sources is 99.36% during the QRS complex and 99.78% during the ST-T segment.
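For orientation, a minimal sketch of the baseline OMP algorithm that the paper accelerates (not the authors' fast variant): greedily select dipoles whose lead-field columns correlate best with the residual, then refit by least squares. Variable names and the normalization assumption are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: pick k columns of the lead-field matrix A
    (assumed l2-normalized) that best explain the measured field map y."""
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(A.T @ residual)       # correlate residual with each dipole
        corr[support] = 0.0                 # never re-select a chosen dipole
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on support
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, support
```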
Construction and analysis of complex brain functional network under acupoint magnetic stimulation
Yin Ning, Xu Gui-Zhi, Zhou Qian
Acta Physica Sinica. 2013, 62 (11): 118704 doi: 10.7498/aps.62.118704
The brain is a complex nonlinear dynamic system consisting of interrelated functional regions that can be described by a complex network model. Acupoint magnetic stimulation is an external stimulus for the brain and can be used as an important technique for studying the regulation mechanism of the complex nervous system, so it is of great significance to study its effect on the structure and characteristics of brain functional networks. Magnetic stimulation was applied to the Neiguan acupoint (PC6), and the acquired EEG data were analyzed using the dual-channel nonlinear method of mutual information in the time domain. The corresponding brain functional networks before, during and after magnetic stimulation were constructed, and their characteristic parameters were studied based on complex network theory. Results show that under a magnetic stimulation frequency of 3 Hz the average degree, average clustering coefficient and global efficiency of the brain functional network were increased, while the average path length was reduced. The small-world attribute of the functional network was enhanced, making information transfer among brain regions more efficient. To our knowledge, brain functional networks under acupoint magnetic stimulation are studied here for the first time, providing a new idea and approach for investigating the effect and regulation mechanism of transcutaneous acupoint magnetic stimulation on the complex nervous system.
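A minimal sketch of how such network parameters are commonly computed from a thresholded mutual-information matrix, using the networkx library; the construction details and threshold are illustrative, not the authors' pipeline.

```python
import numpy as np
import networkx as nx

def functional_network_metrics(mi_matrix, threshold):
    """Binarize a channel-by-channel mutual-information matrix into a graph
    and compute the parameters reported above."""
    adj = (mi_matrix > threshold).astype(int)
    np.fill_diagonal(adj, 0)                 # no self-loops
    G = nx.from_numpy_array(adj)
    return {
        "average_degree": 2 * G.number_of_edges() / G.number_of_nodes(),
        "average_clustering": nx.average_clustering(G),
        "global_efficiency": nx.global_efficiency(G),
        # average_shortest_path_length requires a connected graph
        "average_path_length": nx.average_shortest_path_length(G),
    }
```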
Effects of low-temperature annealing phosphorous gettering process on the electrical properties of multi-crystalline silicon with a low minority carrier lifetime
Jiang Li-Li, Lu Zhong-Lin, Zhang Feng-Ming, Lu Xiong
Acta Physica Sinica. 2013, 62 (11): 110101 doi: 10.7498/aps.62.110101
A new low-temperature annealing phosphorous gettering process (LTAPGP) was developed to improve the electrical properties of multi-crystalline silicon with a low minority carrier lifetime. LTAPGP combines a multi-plateau temperature phosphorous gettering process with a low-temperature annealing process. It can remove iron impurities and crystallographic defects from multi-crystalline silicon and improve the electrical properties of solar cells produced from low minority carrier lifetime wafers. Compared with the multi-plateau and two-plateau temperature phosphorous gettering processes, LTAPGP was more effective in gettering iron impurities and repairing crystallographic defects. Multi-crystalline silicon wafers with a low minority carrier lifetime that went through the LTAPGP process were used to produce solar cells, and the I-V measurement data show that the efficiency of the new solar cells is 0.2% higher than that of specimens subjected to the multi-plateau and two-plateau temperature processes. The results indicate that LTAPGP enables low minority carrier lifetime silicon wafers to be used in the solar cell industry, improving the utilization ratio and reducing the production cost of cast polysilicon.
A type of new exact and approximate conserved quantities deduced from Mei symmetry for a weakly nonholonomic system
Han Yue-Lin, Wang Xiao-Xiao, Zhang Mei-Ling, Jia Li-Qun
Acta Physica Sinica. 2013, 62 (11): 110201 doi: 10.7498/aps.62.110201
A type of structural equation and new exact and approximate conserved quantities deduced from the Mei symmetry of the Lagrange equations for a weakly nonholonomic system are investigated. First, the Lagrange equations of the weakly nonholonomic system are established. Next, under the infinitesimal transformations of Lie groups, the definition and criterion of Mei symmetry for the Lagrange equations of weakly nonholonomic systems and of their first-degree approximate holonomic systems are given. Then, the expressions for the new structural equation and for the new exact and approximate conserved quantities of Mei symmetry are obtained. Finally, an example is given to study the exact and approximate new conserved quantities.
Solution of the transfer models of femtosecond pulse laser for nano metal film
Han Xiang-Lin, Zhao Zhen-Jiang, Cheng Rong-Jun, Mo Jia-Qi
Acta Physica Sinica. 2013, 62 (11): 110202 doi: 10.7498/aps.62.110202
A class of transfer models for femtosecond pulse lasers acting on nano metal films is investigated. First, the reduced solution is obtained. Then, the arbitrary-order asymptotic solution of the corresponding model is constructed using perturbation theory and methods. Finally, the behavior of the solution is discussed.
Static and dynamic analysis of elastic shell structures with smoothed particle method
Ming Fu-Ren, Zhang A-Man, Yao Xiong-Liang
Acta Physica Sinica. 2013, 62 (11): 110203 doi: 10.7498/aps.62.110203
The meshfree smoothed particle method has great advantages in dealing with nonlinear problems of solid structures. However, due to instability and poor accuracy, its application in solid mechanics has long been limited; in particular, studies of shell structures with the smoothed particle method are rarely reported, on account of expensive three-dimensional continuum modeling and the phenomenon of numerical fracture in the traditional method. The moving least squares function and the total Lagrangian equations are introduced as the approximation function and the governing equations, respectively, to improve the stability and numerical accuracy of the smoothed particle method; on this basis, a method of static analysis is proposed, and the dynamic analysis method is also refined. Finally, internationally recognized standard test models for static and dynamic problems are adopted to verify the above shell theory, and the results are in good agreement with existing data, which proves the validity and reliability of the present numerical model. This paper aims to provide a reference for further research on the smoothed particle method for nonlinear shell problems, such as cracking, crushing, etc.
A molecular dynamics simulation on the relationship between contact angle and solid-liquid interfacial thermal resistance
Ge Song, Chen Min
Acta Physica Sinica. 2013, 62 (11): 110204 doi: 10.7498/aps.62.110204
With the fast development of nanotechnology, the solid-liquid interfacial thermal resistance draws increasing research interest due to its importance in nanoscale energy transport. The contact angle is an important quantity characterizing interfacial properties and is easy to measure experimentally, and previous researchers have tried to correlate it with the interfacial thermal resistance. Using molecular dynamics simulation, we have calculated the contact angle and interfacial thermal resistance at a solid/liquid interface and discuss the relationship between the two quantities. The solid/liquid bonding strength and the solid properties are varied to test their effects on both quantities. The simulation results demonstrate that with increasing solid/liquid bonding strength, both the contact angle and the interfacial thermal resistance decrease. However, the bonding strength between solid atoms and the solid atomic mass influence the interfacial resistance remarkably while having little effect on the contact angle. This is because variations of the solid atomic mass and of the bonding strength between solid atoms change the vibrational frequency distribution of the solid atoms, resulting in a difference in the thermal vibrational coupling between solid and liquid atoms. Our study indicates that the interfacial thermal resistance is related not only to the interfacial solid-liquid bonding strength, which is characterized by the contact angle, but also to the thermal vibrational coupling between solid and liquid atoms. There is no simple relationship between the contact angle and the interfacial thermal resistance, and the contact angle cannot be used as the sole criterion for estimating the solid-liquid interfacial resistance.
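For reference, the interfacial (Kapitza) thermal resistance studied here is defined in the standard way (not a result of the paper):

\[
R_K=\frac{\Delta T}{J_q},
\]

where \(\Delta T\) is the temperature jump across the interface and \(J_q\) the heat flux through it.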
Optical lattice solitons in nonlinear media under the condition of hollow cylinder boundary
Jiang Xian-Ce, Xu Bin, Liang Jian-Chu, Yi Lin
Acta Physica Sinica. 2013, 62 (11): 110205 doi: 10.7498/aps.62.110205
Full Text: [PDF 4279 KB] Download:(429)
By using the self-similar method to solve the nonlinear Schrödinger equation with distributed coefficients, the self-similar solitons in a Bessel lattice are studied under hollow cylinder boundary conditions, and analytical solutions are obtained. The analytical solutions and numerical solutions are found to be identical. The result indicates that optical lattices induced by non-diffractive Bessel beams can support stable self-similar soliton clusters.
Dynamical entanglement in the model of a field interacting with atoms of a nonlinear medium
Lü Hai-Yan, Yan Yuan, Wei Hou
Acta Physica Sinica. 2013, 62 (11): 110301 doi: 10.7498/aps.62.110301
Full Text: [PDF 432 KB] Download:(469)
The dynamical entanglement in a model of a field interacting with atoms in a nonlinear medium is studied in terms of the concurrence and the reduced von Neumann entropy for the generalized binomial state of the field and the ground state of the atoms. It is shown that the concurrence is dominantly positively correlated with the reduced von Neumann entropy. Under suitable conditions the entanglement remains nearly unchanged for a long time. This is useful for quantum information processing.
Off-diagonal Berry phase in nonlinear systems
Yang Zhi-An
Acta Physica Sinica. 2013, 62 (11): 110302 doi: 10.7498/aps.62.110302
Full Text: [PDF 183 KB] Download:(413)
In this paper, we have investigated the off-diagonal Berry phase of nonlinear systems and presented its explicit expression. The results show that, for nonlinear systems, the off-diagonal Berry phase contains a new term in addition to the dynamical phase, the geometric phase and the nonlinear phase. This new term describes a cross effect between the Bogoliubov excitation around one eigenstate and another instantaneous eigenstate; the Bogoliubov excitations are found to accumulate during the adiabatic evolution and to contribute a finite phase of geometric nature. As an application, the off-diagonal Berry phase of a two-well trapped Bose-Einstein condensate system is calculated.
Thermal quantum discord in Heisenberg XXZ model under different magnetic field conditions
Xie Mei-Qiu, Guo Bin
Acta Physica Sinica. 2013, 62 (11): 110303 doi: 10.7498/aps.62.110303
Full Text: [PDF 404 KB] Download:(470)
The temperature dependence of the quantum discord of a two-qubit one-dimensional Heisenberg XXZ spin chain in thermal equilibrium, subjected to different magnetic fields B1 and B2 acting separately on the two qubits, is studied in this paper. Four cases are considered here: (1) B1 = B2 = 0 (no magnetic field); (2) B1 ≠ 0, B2 = 0 (only one qubit in a magnetic field); (3) B1 = B2 (homogeneous magnetic field); (4) B1 = -B2 (inhomogeneous magnetic field). The similarities and differences between quantum discord and quantum entanglement are calculated and discussed in detail. Results show that the quantum discord is more robust than quantum entanglement against temperature, and that the effect of an inhomogeneous magnetic field is preferable for quantum communications and quantum information processing, as compared with the effect of a homogeneous magnetic field.
Ultracold spin-1 atoms in three-well optical superlattice under a weak magnetic field
Qin Shuai-Feng, Zheng Gong-Ping, Ma Xiao, Li Hai-Yan, Tong Jing-Jing, Yang Bo
Acta Physica Sinica. 2013, 62 (11): 110304 doi: 10.7498/aps.62.110304
Full Text: [PDF 525 KB] Download:(434)
Ultracold atoms trapped in an optical lattice of double-well potentials, the so-called optical superlattice, have received much attention in the field of cold atoms. A protocol generalized to a three-well optical superlattice is suggested in this paper. The ground-state diagrams of ultracold spin-1 atoms trapped in a symmetric three-well optical superlattice in a weak magnetic field are studied based on exact diagonalization. It is shown that the ground-state diagrams are remarkably different for ferromagnetic and antiferromagnetic atoms. For antiferromagnetic interactions there is no ground state in which the magnetic quantum number of the total spin of the system along the external magnetic field is ±2, whereas for ferromagnetic interactions such states do exist. In addition, only fully polarized ground states exist for the ferromagnetic atoms in the negative quadratic-Zeeman-energy region. The physical origin of the dependence of the ground states on the controllable parameters is analyzed. These quantum spin states can be controlled easily and exactly by modulating the external magnetic field and the height of the optical barrier, which may be a tool for the study of spin entanglement.
Nonlinear Landau-Zener tunneling of a Bose-Fermi mixture
Zhang Heng, Wang Wen-Yuan, Meng Hong-Juan, Ma Ying, Ma Yun-Yun, Duan Wen-Shan
Acta Physica Sinica. 2013, 62 (11): 110305 doi: 10.7498/aps.62.110305
Full Text: [PDF 297 KB] Download:(370)
In this paper we study the nonlinear Landau-Zener tunneling of a boson-fermion mixture in a double-well potential by adjusting the interaction parameters of its components. We find that the tunneling in the system can be affected by adjusting the interatomic self-interaction parameter. Moreover, we notice that the tunneling shows a critical phenomenon as the interatomic self-interaction varies, and the critical point is given.
Soliton dynamical behavior of the condensates trapped in a square-well potential
Zhang Bo, Wang Deng-Long, She Yan-Chao, Zhang Wei-Xi
Acta Physica Sinica. 2013, 62 (11): 110501 doi: 10.7498/aps.62.110501
Full Text: [PDF 198 KB] Download:(587)
Using the multiple-scale method, we study analytically the soliton dynamical behaviors of Bose-Einstein condensates trapped in a square-well potential. It is found that the square-well potential has important effects on the soliton dynamics. When the soliton enters the square-well potential, its movement is accelerated; when it leaves the square-well potential, the soliton is decelerated. With increasing depth of the square-well potential, the velocity of the soliton increases, its amplitude becomes larger and its width decreases. This may serve as a reference for controlling the dynamical characteristics of solitons in experiments.
Effect of non-Gaussian noise on negative mobility
Yang Bo, Mei Dong-Cheng
Acta Physica Sinica. 2013, 62 (11): 110502 doi: 10.7498/aps.62.110502
Full Text: [PDF 491 KB] Download:(474)
Effects of non-Gaussian noise on negative mobility in an inertial ratchet were investigated by means of a stochastic simulation method. The absolute negative mobility (ANM), negative differential mobility (NDM), and negative nonlinear mobility (NNM) were simulated separately. Results indicate that: (i) non-Gaussian noise can either enhance or diminish the phenomenon of ANM, and non-Gaussian noise can also induce NDM and NNM in regions of parameter space; (ii) the average velocity-correlation time characteristics shift towards small values of the correlation time; (iii) the absolute value of the negative-valued minima decreases as the non-Gaussian noise parameter q increases.
Equivalent modeling and bifurcation analysis of V2 controlled buck converter
He Sheng-Zhong, Zhou Guo-Hua, Xu Jian-Ping, Bao Bo-Cheng, Yang Ping
Acta Physica Sinica. 2013, 62 (11): 110503 doi: 10.7498/aps.62.110503
Full Text: [PDF 7267 KB] Download:(722)
After dimension reduction, two boundary voltages of the V2 controlled buck converter are deduced under different operation modes; based on these, its equivalent one-dimensional discrete-time model is established and its complex nonlinear bifurcation behaviors are studied in detail. Two boundary conditions, under which the shift between the stable period-one state and the subharmonic oscillation state and the shift between continuous conduction mode (CCM) and discontinuous conduction mode (DCM) take place, are derived by analyzing the stability and operation mode. The research results show that in the V2 controlled buck converter, period-doubling bifurcation and border-collision bifurcation can occur as circuit parameters vary, and the converter follows different bifurcation routes for different circuit parameters. Simulation and experiment platforms are implemented, and the corresponding results verify the validity of the equivalent discrete-time model and the theoretical analysis.
Study on nonlinear phenomena in single phase H bridge inverter based on the periodic spread spectrum
Liu Hong-Chen, Li Fei, Yang Shuang
Acta Physica Sinica. 2013, 62 (11): 110504 doi: 10.7498/aps.62.110504
Full Text: [PDF 1009 KB] Download:(522)
Periodic spread spectrum technologies are widely used in converters to suppress electromagnetic interference and noise, while the accompanying nonlinear phenomena are usually ignored. Based on an analysis of the periodic spread spectrum technology and an accurate stroboscopic map model of the single-phase H bridge converter, the bifurcation and chaos phenomena of the single-phase H bridge circuit are studied, and the discrete model of the H bridge sine inverter under periodic spread spectrum is established. The nonlinear phenomena are analyzed using time domain charts, folded maps, bifurcation diagrams and the Lyapunov exponent spectrum. Results show that the H bridge sine inverter based on the spread spectrum technology can enter the chaotic region more easily when it is in the nonlinear region, and it is concluded that the frequency of the periodic spread spectrum has an important effect on the position of the initial bifurcation point of the system.
Dynamics of rumor spreading in mobile social networks Hot!
Wang Hui, Han Jiang-Hong, Deng Lin, Cheng Ke-Qing
Acta Physica Sinica. 2013, 62 (11): 110505 doi: 10.7498/aps.62.110505
Full Text: [PDF 20761 KB] Download:(3691)
In this paper, we propose an improved CSR model for rumor spreading in mobile social networks. The dynamic equation of rumor spreading is modified to suit users' habits in mobile social networks. In the acceptance probability model, negative and positive social reinforcements are considered. Furthermore, people's threshold for accepting a rumor is taken into account. Analytically, a mean field theory is worked out by treating the influence of the network topological structure as homogeneous. Under certain conditions, rumors spread faster and wider in the new model than in the CSR rumor spreading model in homogeneous networks. Meanwhile, the multi-agent simulation results indicate that the information spreading process is sensitively dependent on initial conditions.
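For illustration of the homogeneous mean-field style of analysis only (not the authors' CSR equations), the following minimal Python sketch evolves ignorant/spreader/stifler fractions with an acceptance probability that saturates with the density of spreaders, a caricature of positive social reinforcement; all rates and the functional form are placeholder assumptions:

import numpy as np

def step(S, I, R, beta=0.4, delta=0.1, alpha=3.0, dt=0.1):
    # acceptance probability grows with the spreader density I,
    # a stand-in for positive social reinforcement
    accept = beta * (1.0 - np.exp(-alpha * I))
    dS = -accept * S * I          # ignorants who accept the rumor
    dR = delta * I                # spreaders who lose interest
    dI = -dS - dR
    return S + dS * dt, I + dI * dt, R + dR * dt

S, I, R = 0.99, 0.01, 0.0
for _ in range(2000):
    S, I, R = step(S, I, R)
print(round(S, 3), round(I, 3), round(R, 3))   # final fractions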
Delay time obtaining method using the maximum joint entropy on the basis of symbolic analysis
Zhang Shu-Qing, Li Xin-Xin, Zhang Li-Guo, Hu Yong-Tao, Li Liang
Acta Physica Sinica. 2013, 62 (11): 110506 doi: 10.7498/aps.62.110506
Full Text: [PDF 658 KB] Download:(597)
In this paper, the local maximum of the joint entropy is computed using the symbolic analysis method so as to determine the appropriate delay time for phase space reconstruction. Numerical experiments on three typical chaotic systems show that the present method reduces computation, increases efficiency, and obtains the optimum delay time accurately and rapidly. It can also reconstruct the original phase space from the time series effectively, thus providing a fast and effective way to identify chaotic signals.
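As an illustration of the general idea only (not the authors' code), the following minimal Python sketch symbolizes a series by uniform binning and picks the delay at the first local maximum of the joint entropy of (s_t, s_{t+tau}); the bin count and the test series are placeholder assumptions:

import numpy as np
from collections import Counter

def symbolize(x, n_bins=8):
    # map the real-valued series onto integer symbols by uniform binning
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    return np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)

def joint_entropy(sym, tau):
    # Shannon entropy of the joint distribution of (s_t, s_{t+tau})
    counts = Counter(zip(sym[:-tau], sym[tau:]))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def delay_from_joint_entropy(x, max_tau=50):
    sym = symbolize(x)
    H = [joint_entropy(sym, tau) for tau in range(1, max_tau + 1)]
    for i in range(1, len(H) - 1):            # first local maximum of H(tau)
        if H[i - 1] < H[i] > H[i + 1]:
            return i + 1
    return int(np.argmax(H)) + 1

t = np.arange(0, 100, 0.05)
x = np.sin(t) + 0.4 * np.sin(2.3 * t)         # placeholder test series
print(delay_from_joint_entropy(x))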
Limited penetrable visibility graph from two-phase flow for investigating flow pattern dynamics
Gao Zhong-Ke, Hu Li-Dan, Zhou Ting-Ting, Jin Ning-De
Acta Physica Sinica. 2013, 62 (11): 110507 doi: 10.7498/aps.62.110507
Full Text: [PDF 4040 KB] Download:(588)
We optimize and design a new half-ring conductance sensor for measuring two-phase flow in a small diameter pipe. Based on the experimental signals measured with the designed sensor, we construct complex networks for different flow patterns using the limited penetrable visibility graph we proposed. Through analyzing the constructed networks, we find that the joint distribution of the allometric scaling exponent and the average degree of the network allows distinguishing different gas-liquid flow patterns in a small diameter pipe. The peak of the degree distribution allows uncovering the detailed features of the flow structure associated with the size of gas bubbles; the average degree of the network reflects the macroscopic properties of the flow behavior; and the allometric scaling exponent is very sensitive to the complexity of the fluid dynamics and allows characterizing the dynamic behaviors in the evolution of different flow patterns. In this regard, limited penetrable visibility graph analysis of fluid signals can provide a new perspective and a novel tool for uncovering the dynamical mechanisms governing the formation and evolution of different flow patterns.
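For illustration only (not the authors' implementation), here is a minimal Python sketch of a limited penetrable visibility graph, in which two samples are linked whenever at most a fixed number of intermediate samples rises above their line of sight; the penetrable number and the test series are assumptions:

import numpy as np

def lpvg_edges(x, penetrable=1):
    # limited penetrable visibility graph: i and j are connected if at most
    # `penetrable` intermediate samples block the straight line between them
    n = len(x)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            line = x[i] + (x[j] - x[i]) * (np.arange(i + 1, j) - i) / (j - i)
            blocked = np.sum(x[i + 1:j] >= line)
            if blocked <= penetrable:
                edges.append((i, j))
    return edges

x = np.random.rand(200)                     # stand-in for a conductance signal
edges = lpvg_edges(x)
deg = np.bincount(np.ravel(edges), minlength=len(x))
print("average degree:", deg.mean())        # one network-level feature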
Hardware implementation of a blind demodulation method for chaotic direct sequence spread spectrum signals
Guo Jing-Bo, Xu Xin-Zhi, Shi Qi-Hang, Hu Tie-Hua
Acta Physica Sinica. 2013, 62 (11): 110508 doi: 10.7498/aps.62.110508
Full Text: [PDF 4474 KB] Download:(856)
In this paper, we design a field-programmable gate array (FPGA)-based hardware implementation of a blind demodulation method for chaotic direct sequence spread spectrum (CD3S) signals. Both transmitter and receiver are designed. The transmitter can produce ten chaotic maps as the spreading sequence. In the receiver, the mathematical model of unscented Kalman filter (UKF) chaotic fitting is built and simplified for hardware implementation, and the hardware structure of the receiver is based on this simplified model. For real-time fitting of different chaotic maps, a dynamic adjustment strategy for the range-differentiating factor is proposed. Additive white Gaussian noise (AWGN) and multipath channel experiments verify the anti-noise and anti-multipath performance of the UKF chaotic fitting method on one hand; on the other hand, the experiments verify that the method can demodulate CD3S signals spread by all ten chaotic maps effectively.
Time-controllable projective synchronization of a class of chaotic systems based on adaptive method
Wang Chun-Hua, Hu Yan, Yu Fei, Xu Hao
Acta Physica Sinica. 2013, 62 (11): 110509 doi: 10.7498/aps.62.110509
Full Text: [PDF 309 KB] Download:(513)
To solve the problem of indeterminate synchronization time in different chaotic systems, this paper presents a time-controllable synchronization scheme. A general synchronization controller and parameter update laws are proposed to stabilize the error system, so that the drive and response systems can be synchronized up to a given scaling matrix at a pre-specified exponential convergence rate. The synchronization time formula is strictly deduced, which shows that the speed of synchronization is determined by several parameters, such as the exponential rate, the initial system values and other parameters introduced by the controller. By adjusting these parameters, the performance of the synchronization can be effectively improved. In numerical simulation, two nonidentical 3D autonomous chaotic systems are chosen to verify this method. The error system can be rapidly stabilized, and unknown parameters are also identified correctly. Finally, two groups of time-controllable parameters are given to verify the theory; synchronization is obtained quickly in both cases, and each result is consistent with the theoretical calculation. The synchronization scheme is characterized by high safety and efficiency, and has potential value in secure communication.
Complicated behaviors and non-smooth bifurcation of a switching system with piecewise linear chaotic circuits
Wu Li-Feng, Guan Yong, Liu Yong
Acta Physica Sinica. 2013, 62 (11): 110510 doi: 10.7498/aps.62.110510
Full Text: [PDF 1237 KB] Download:(504)
The complex dynamics and non-smooth bifurcations of a compound system with periodic switches between two piecewise linear chaotic circuits are investigated. Based on the analysis of equilibrium states, the conditions for fold bifurcation and Hopf bifurcation are derived to explore the bifurcations of the compound system with periodic switches while there are different stable solutions in the two subsystems. Different types of oscillations of the switching system are observed, and their mechanism is studied and presented. Within the periodic oscillation regimes, the number of switching points doubles successively as the parameter varies, leading from period-doubling bifurcation to chaos.
Joint compression and tree structure encryption algorithm based on EZW
Deng Hai-Tao, Deng Jia-Xian, Deng Xiao-Mei
Acta Physica Sinica. 2013, 62 (11): 110701 doi: 10.7498/aps.62.110701
Full Text: [PDF 2374 KB] Download:(563)
A novel joint compression-encryption algorithm based on embedded zerotree wavelet (EZW) coding is proposed. The encryption process is performed before entropy coding. The principles of the context modification and the decision modification are described. Simulation results show that the proposed algorithm provides good security and has the same compression efficiency as the original image compression algorithm.
The numerical-aperture-dependent optical contrast and thickness determination of ultrathin flakes of two-dimensional atomic crystals: A case of graphene multilayers
Han Wen-Peng, Shi Yan-Meng, Li Xiao-Li, Luo Shi-Qiang, Lu Yan, Tan Ping-Heng
Acta Physica Sinica. 2013, 62 (11): 110702 doi: 10.7498/aps.62.110702
Full Text: [PDF 2476 KB] Download:(1258)
The optical and electronic properties of two-dimensional atomic crystals, including graphene, depend closely on their layer number (or thickness). It is a fundamental issue to identify, quickly and accurately, the layer number of multilayer flakes of two-dimensional atomic crystals before further research and application in optoelectronics. In this paper, we discuss in detail the application of the transfer matrix method to simulate the optical contrast of ultrathin flakes of two-dimensional atomic crystals and further to identify their thickness, where the numerical aperture of the microscope objective is taken into account. The importance of the numerical aperture in thickness determination is confirmed by experiments on graphene flakes. Furthermore, two lasers with different wavelengths can serve as light sources for the thickness identification of flakes of two-dimensional atomic crystals with a size close to the diffraction limit of the microscope objective. The transfer matrix method is found to be very useful for the optical-contrast calculation and thickness determination of flakes of two-dimensional atomic crystals on multilayer dielectric substrates.
Zhang Hu-Zhong, Li De-Tian, Dong Chang-Kun, Cheng Yong-Jun, Xiao Yu-Hua
Acta Physica Sinica. 2013, 62 (11): 110703 doi: 10.7498/aps.62.110703
Full Text: [PDF 2544 KB] Download:(492)
Theoretical studies of the influence of electrode potentials on the sensitivity and on the ratio of anode current to emission current (Igrid/Ie) will be beneficial for providing a theoretical basis and experimental guidance in the research of ionization gauges with carbon nanotube cathodes. In this paper, based on the structure of the IE514 extractor gauge, a model of the carbon nanotube ionization gauge is built with the ion optics simulation software SIMION 8.0, and the influence of electrode potentials on the sensitivity and Igrid/Ie is discussed. Results show that with an increasing ratio of anode voltage to gate voltage (Vgrid/Vgate), Igrid/Ie increases, while the sensitivity of the gauge decreases with increasing anode voltage, which would further affect the extension of the lower limit of vacuum measurement. Moreover, the simulation results are in good agreement with the reported experimental data. Consequently, setting an appropriate electrode voltage is very important for improving the sensitivity, the anode current and the extension of the measurement lower limit. In addition, the method adopted in this paper can be extended to the research and development of new types of extremely-high-vacuum ionization gauges with carbon nanotube cathodes, which could provide an effective approach to the problem of extremely high vacuum measurement.
A 3D numerical simulation to study the system of a gyrotron
Xia Meng-Zhong, Liu Da-Gang, Yan Yang, Peng Kai, Yang Chao, Liu La-Qun, Wang Hui-Hui
Acta Physica Sinica. 2013, 62 (11): 111301 doi: 10.7498/aps.62.111301
Full Text: [PDF 3167 KB] Download:(565)
In order to overcome the limitation of the gyrotron emission model producing only ideal electron beams in traditional gyrotron numerical simulations, this paper, on the basis of a theoretical analysis of the structural parameters of a 94 GHz double-anode magnetron injection electron gun, and by optimizing the grid layout of the conformal FDTD algorithm, obtains a high-performance electron beam with a horizontal-to-vertical velocity ratio of 1.42 and a maximum velocity spread of 5.92%. By using the optimized electron gun to replace the traditional gyrotron emission model in the numerical simulation of the gyrotron system, and using four-process parallel MPI computation, we finally obtain a TE03 mode, 94 GHz high-performance gyrotron oscillating tube with an average output power of about 40 kW and an efficiency of 10.5%.
A non-linear analysis for gamma-ray spectrum based on compressed sensing
Feng Bing-Chen, Fang Sheng, Zhang Li-Guo, Li Hong, Tong Jie-Juan, Li Wen-Qian
Acta Physica Sinica. 2013, 62 (11): 112901 doi: 10.7498/aps.62.112901
Full Text: [PDF 242 KB] Download:(832)
Gamma-ray spectrum analysis is an important method for the quantitative analysis of radionuclides. Although widely used, weak peak identification and the resolution of overlapping peaks remain difficult for gamma-ray spectrum analysis. To solve this problem, a new method based on compressed sensing is proposed in this paper for improving gamma-ray spectrum analysis. The proposed method models the physical modulation of the gamma spectrometer as a linear equation, and formulates the gamma-ray spectrum analysis as the corresponding inverse problem. The true gamma spectrum is obtained by solving the inverse problem under a sparsity constraint within the framework of compressed sensing. The feasibility of the proposed method is demonstrated by both numerical simulation and Monte Carlo simulation experiments. Results demonstrate that the proposed method can simultaneously resolve overlapping peaks and reduce the fluctuations of the gamma-ray spectrum, effectively improving the accuracy of gamma-ray spectrum analysis.
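As a schematic of the reconstruction step only (not the authors' solver), the following minimal Python sketch inverts a hypothetical Gaussian-broadened detector response under an l1 sparsity constraint via iterative soft thresholding; the response width, peak positions and noise level are placeholder assumptions:

import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    # iterative soft thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))   # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)
    return np.maximum(x, 0.0)                # count spectra are non-negative

E = np.arange(256)
A = np.exp(-0.5 * ((E[:, None] - E[None, :]) / 4.0) ** 2)  # hypothetical response
x_true = np.zeros(256)
x_true[[60, 68, 180]] = [1.0, 0.8, 0.3]     # two overlapping lines, one weak line
y = A @ x_true + 0.01 * np.random.randn(256)
print(np.flatnonzero(ista(A, y) > 0.1))     # recovered line positions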
Spectroscopic properties of the AlC (X⁴Σ⁻, B⁴Σ⁻) molecule
Liu Hui, Xing Wei, Shi De-Heng, Sun Jin-Feng, Zhu Zun-Lue
Acta Physica Sinica. 2013, 62 (11): 113101 doi: 10.7498/aps.62.113101
Full Text: [PDF 178 KB] Download:(292)
The potential energy curves (PECs) of the X⁴Σ⁻ and B⁴Σ⁻ states of the AlC molecule have been studied using the highly accurate internally contracted multireference configuration interaction approach with the Davidson modification. Dunning's correlation-consistent basis sets, aug-cc-pVnZ (n = D, T, Q, 5, 6), are used for the present study. To improve the quality of the PECs, core-valence correlation and scalar relativistic corrections are considered. Core-valence correlation corrections are calculated with an aug-cc-pCVTZ basis set. Scalar relativistic correction calculations are made using the third-order Douglas-Kroll Hamiltonian approximation at the level of a cc-pV5Z basis set. An obvious effect of the core-valence correlation and relativistic corrections on the PECs is observed. All the PECs are extrapolated to the complete basis set limit. The convergence of the present calculations with respect to the basis set is examined and discussed. Using these PECs, the spectroscopic parameters (Te, Re, ωe, ωexe, ωeye, Be and αe) of the X⁴Σ⁻ and B⁴Σ⁻ states are determined and compared with those reported in the literature. The vibrational manifolds are evaluated for each state of the non-rotating AlC molecule by numerically solving the radial Schrödinger equation of nuclear motion. For each vibrational state, the vibrational level and inertial rotation constants are obtained, which are in excellent agreement with the experimental findings.
Potential energy function and spectroscopic parameters of SN- molecular ion
Li Song, Han Li-Bo, Chen Shan-Jun, Duan Chuan-Xi
Acta Physica Sinica. 2013, 62 (11): 113102 doi: 10.7498/aps.62.113102
Full Text: [PDF 371 KB] Download:(536)
The molecular structure of the ground electronic state (X³Σ⁻) of the SN⁻ molecular ion has been calculated using the CCSD(T) method in combination with the correlation-consistent basis sets aug-cc-pVXZ (X = D, T, Q, 5). The equilibrium internuclear distance Re, harmonic frequency ωe and dissociation energy De of the molecular ion are derived and extrapolated to the complete basis set limit. Comparisons of the corresponding parameters between this work and those reported previously indicate that our results agree well with the experimental data. A reliable potential energy curve is obtained and is well reproduced in the form of the Murrell-Sorbie analytical potential function. We then utilized the potential energy curve to calculate the relevant spectroscopic parameters of the ground state of the system. The vibrational levels and corresponding molecular constants for the X³Σ⁻ state are obtained by solving the radial Schrödinger equation of nuclear motion. The calculations in the present work indicate that an improvement in theoretical computations of the SN⁻ molecular ion is achieved.
Study on spectroscopic properties and molecular constants of the ground and excited states of AsCl free-radical
Zhu Zun-Lue, Lang Jian-Hua, Qiao Hao
Acta Physica Sinica. 2013, 62 (11): 113103 doi: 10.7498/aps.62.113103
Full Text: [PDF 155 KB] Download:(381)
The dissociation limit of the AsCl free radical is correctly determined based on group theory and atomic and molecular statics. Potential energy curves (PECs) for the ground state and several low-lying excited electronic states of the AsCl free radical are calculated using the multi-reference configuration interaction method with the aug-cc-pV5Z basis set, where the Davidson correction is considered as an approximation to full CI. Spectroscopic parameters (Re, ωe, ωeχe, D0, De, Be and αe) are evaluated from the PEC of AsCl. These parameters are compared with those reported in the literature, and excellent agreement is found between them. With the PEC of the AsCl free radical, forty vibrational states are predicted for J = 0 by numerically solving the radial Schrödinger equation of nuclear motion. For each vibrational state, the vibrational levels and inertial rotation constants are reported.
Investigation of photoionization of excited atom irradiated by the high-frequency intense laser
Tian Yuan-Ye, Guo Fu-Ming, Zeng Si-Liang, Yang Yu-Jun
Acta Physica Sinica. 2013, 62 (11): 113201 doi: 10.7498/aps.62.113201
Full Text: [PDF 6979 KB] Download:(381)
Solving numerically the time-dependent Schrödinger equation in three-dimensional momentum space, we have investigated the energy spectrum and the two-dimensional momentum angular distribution, near the ionization threshold, of photoelectrons generated from an excited atom under the action of a high-frequency laser pulse. The results show that the ionization process is mainly single-photon ionization in this energy range. The principal quantum number of the initial state can be determined from the position of the first peak in the photoelectron spectrum; the angular quantum number of the initial state can be determined from the angular distribution of the two-dimensional momentum of the photoelectron. This law does not change with the intensity or pulse duration of the incident laser pulse within a relatively broad range of these parameters. In principle, these spectra can be used to identify the initial state of the atoms. In addition, the photoelectron momentum spectrum of a superposition state is investigated for different relative phases of the state.
Numerical simulation of Trichel pulse characteristics in bar-plate DC negative corona discharge
Wu Fei-Fei, Liao Rui-Jin, Yang Li-Jun, Liu Xing-Hua, Wang Ke, Zhou Zhi
Acta Physica Sinica. 2013, 62 (11): 115201 doi: 10.7498/aps.62.115201
Full Text: [PDF 876 KB] Download:(1058)
An improved multi-component two-dimensional hybrid model is presented for the simulation of Trichel pulse corona discharge. The model is based on plasma hydrodynamics and chemical models, including 12 species and 27 reactions. In addition, photoionization and secondary electron emission effects are taken into account. Simulation is carried out on a bar-plate electrode configuration with an inter-electrode gap of 3.3 mm, the positive potential applied to the bar being 5.0 kV, the pressure of the air discharge fixed at 1.0 atm, and the gas temperature assumed to be constant (300 K). In this paper, some key microscopic characteristics, such as the electric field distribution, net charge density distribution and electron density distribution at 5 different instants during a Trichel pulse, are analyzed in detail. Furthermore, the electron generation and disappearance rates, and the positive and negative ion distribution characteristics along the axis of symmetry, are also investigated in detail in the later part of the Trichel pulse cycle. The results can give valuable insights into the physical mechanism of negative corona discharge.
Influence of addition of electronegative gases on the properties of capacitively coupled Ar plasmas
Hong Bu-Shuang, Yuan Tao, Zou Shuai, Tang Zhong-Hua, Xu Dong-Sheng, Yu Yi-Qing, Wang Xu-Sheng, Xin Yu
Acta Physica Sinica. 2013, 62 (11): 115202 doi: 10.7498/aps.62.115202
Full Text: [PDF 1006 KB] Download:(491)
Investigation of electronegative plasmas has now become attractive due to the advantages of negative-ion assisted etching and charge-free ion implantation in semiconductor manufacture. The Langmuir electrostatic probe, as a simple, inexpensive diagnostic tool with good spatial resolution, is popularly used in investigating electronegative plasmas. In this paper, a Langmuir electrostatic probe is used to measure capacitively coupled Ar plasmas with added electronegative gases, such as O2, Cl2 and SF6. The experimental results for Ar plasmas with added electronegative gases driven by a 40.68 MHz field indicate that, with an increasing flow rate of electronegative gas, a high-energy peak occurs in the electron energy probability function and shifts towards the higher energy side. The addition of electronegative gases reduces the electron density significantly while the electron temperature increases. We also calculate the electronegativity of the Ar plasmas for the three kinds of electronegative gases. Preliminary interpretations of the above experimental phenomena are presented.
Simulation of hollow cathode discharge by combining the fluid model with a transport model for metastable Ar atoms
He Shou-Jie, Ha Jing, Liu Zhi-Qiang, Ouyang Ji-Ting, He Feng
Acta Physica Sinica. 2013, 62 (11): 115203 doi: 10.7498/aps.62.115203
Full Text: [PDF 576 KB] Download:(359)
The characteristics of a rectangular hollow cathode discharge in argon are studied based on a fluid model combined with a transport model for metastable Ar atoms. The distributions of the potential, the densities of electrons and ions, and the density of metastable atoms are calculated at a pressure of 10 Torr. The peak density of electrons and ions is 4.7×10¹² cm⁻³, and the peak density of metastable atoms is 2.1×10¹³ cm⁻³. Results obtained with the fluid-metastable hybrid model are compared with those from the pure fluid model, which shows that electrons produced by stepwise ionization are one of the important sources of new electrons, and that the metastable atoms have an obvious effect on the hollow cathode discharge. Compared with the results calculated with the fluid model, the electron density obtained with the hybrid model increases, while the depth of the cathode sheath and the average electron energy decrease.
Iterative method for multimode waveguide design
Wang Qiang, Zhou Hai-Jing, Yang Chun, Li Biao, He Xiao-Yang
Acta Physica Sinica. 2013, 62 (11): 115204 doi: 10.7498/aps.62.115204
Full Text: [PDF 672 KB] Download:(368)
Coupled mode theory is an effective tool for the analysis and synthesis of overmoded waveguides, but the inverse problem has not been solved before. This paper completes an iterative procedure to solve the inverse problem. The new method can automatically and quickly design various mode converters, mode transducers and horn antennas with special radiation patterns. Compared with conventional methods, structures designed using the new method have advantages in both electromagnetic and structural properties. Two design examples are given: a dual-band TM01-TE11 mode converter and a smooth-wall feed horn antenna. The two working frequencies of the dual-band TM01-TE11 mode converter are 8.75 GHz and 10.3 GHz, and its radius is 16 mm. The converter efficiencies exceed 99% at the two working frequencies. The smooth-wall feed horn antenna converts the TE11 mode into a Gaussian beam effectively. Simulation results agree well with the theoretical predictions.
Unbiased solid surface charging research in a plasma environment
Cao He-Fei, Liu Shang-He, Sun Yong-Wei, Yuan Qing-Yun
Acta Physica Sinica. 2013, 62 (11): 119401 doi: 10.7498/aps.62.119401
Full Text: [PDF 637 KB] Download:(478)
Within the plasma environment of a spacecraft, the interaction between electrons and ions may cause surface charging and discharging and may degrade the performance of the spacecraft. The charging potential is a key factor in the discharging process. By considering the combined effects of particle mass, temperature and density of the plasma, secondary electrons, and the velocity of the unbiased solid, a general equation for the surface charging potential of an unbiased solid has been derived using the Maxwell velocity distribution function. The expressions under some special and general conditions have also been analyzed. The surface charging and discharging properties are summarized under different plasma environments and motion states of the unbiased solid.
An analog modulated simulation source for X-ray pulsar-based navigation
Zhou Feng, Wu Guang-Min, Zhao Bao-Sheng, Sheng Li-Zhi, Song Juan, Liu Yong-An, Yan Qiu-Rong, Deng Ning-Qin, Zhao Jian-Jun
Acta Physica Sinica. 2013, 62 (11): 119701 doi: 10.7498/aps.62.119701
Full Text: [PDF 3315 KB] Download:(618)
In this paper a high resolution X-ray simulation source is proposed and designed to verify X-ray pulsar-based navigation in a simulation experiment system. The simulation source consists of an arbitrary signal generator and a grid-controlled X-ray tube. According to the grid tube's characteristic curve, the data of the pulsar standard pulse template are converted. Then, using the method of direct digital frequency synthesis, the converted data are synthesized into waveforms, called the analog modulated grid voltage. In the grid-controlled X-ray tube, the grid voltage changes the number of electrons hitting the target and thus controls the X-ray intensity. With an analog modulated pulse profile applied to the tube grid electrode, the tube emits X-rays which match the photons' statistical distribution and simulate the X-ray pulsar profile extremely well. The properties of the Crab pulsar simulation source are tested in the X-ray pulsar navigation simulation experiment system. The results of the test are as follows: comparing the tested pulse profile with the standard pulsar profile, the time correlation coefficient is 0.9774 and the frequency correlation coefficient is 0.9853; the X-ray photon flux is 1.90 ph·cm⁻²·s⁻¹, the pulsed fraction is 76.15%, and the half-width at half maximum is 1.879 ms. These results show that the X-ray simulation source has several merits, such as a strong ability to simulate X-ray pulsars, low cost and simple operation, so it is an important means for the development of X-ray pulsar navigation.
Picture a situation where we have two observers, $A$ and $B$, and a system in a certain quantum state. If $B$ makes a measurement of some observable, say energy for example, the state will collapse to one of the possible energy states with a definite probability. If $A$ makes a measurement after this has happened, he will observe a precise energy given by the state to which our system has collapsed after the measurement made by $B$.
Pretty straightforward till now. Now, if both observers make a measurement at the same time they will both measure the same value. But we know from special relativity that simultaneity is relative to the observer, so we may think that for some observer these measurements won't be simultaneous, and actually for him $A$ will have made the measurement before $B$.
My question is obvious then, how do measurements and relativity of simultaneity marry?
The relativity of simultaneity relies on the spatial separation of the events. If two measurements are made on the same system, there is no ambiguity in their order, because there is no spatial separation.
The loophole you're missing is that while $B$ can decide when he measures and what observable he measures, he has no control over what measurement outcome he gets. Thus if they repeat this many times, $A$ will observe some probability distribution over energies, which is exactly what she would get if she had measured first, or even if $B$ hadn't measured at all. With this simple setup, there is no way that $A$ and $B$ can communicate at all, let alone faster than light, and it is faster-than-light communication that sits ill with special relativity, because only that can break causality.
There is still something weird going on, though. If $A$ and $B$ step up their game a little bit, then they can play Bell inequality or CHSH games, in which the correlations between their measurement outcomes are greater than they could possibly be (for a hidden-variable theory) unless they were communicating faster than light.
However, these games are always symmetric in $A$ and $B$. The really weird thing is that whatever entangled state $A$ and $B$ share, they cannot use it to communicate faster than light, because the local outcomes are always random. It's the correlations that are weirdly nonlocal.
As another example, take quantum teleportation. Here $A$ has some quantum state $\psi$ and shares some entangled state $\Psi$ with $B$. By entangling $\psi$ with her half of $\Psi$ and performing a measurement, she can collapse $B$'s half of $\Psi$ into a copy of $\psi$ - and she can do so instantly. Unfortunately, though, $B$'s teleported copy of $\psi$ is scrambled by some unitary operation which depends on $A$'s measurement outcome. To unscramble that unitary, $A$ needs to communicate with $B$ classically, which is at the speed of light or slower, and only then can $B$ get a trustworthy copy of $\psi$. Once again: instant collapse, but no way to use it to communicate.
That said, it is even weirder that this is nonrelativistic quantum theory we're talking about. By taking the usual Schrödinger equation, we're married to a specific observer, and there isn't a formal requirement that the resulting theory not allow instantaneous communication. To do this properly, you really should be using relativistic QM to do quantum information, which is an active field of research (see e.g. RQI-N).
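To make the no-communication point concrete, here is a small numerical check (a minimal numpy sketch, assuming a shared Bell state): whatever measurement $B$ performs, $A$'s reduced density matrix, and hence every statistic $A$ can observe locally, is unchanged.

import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2); qubit order is (A, B)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def reduced_state_A(rho):
    # partial trace over B's qubit
    return np.einsum('abcb->ac', rho.reshape(2, 2, 2, 2))

# A's state if B does nothing
before = reduced_state_A(rho)

# A's state after B measures in the computational basis (outcome unread)
P0 = np.kron(np.eye(2), np.diag([1.0, 0.0]))
P1 = np.kron(np.eye(2), np.diag([0.0, 1.0]))
after = reduced_state_A(P0 @ rho @ P0 + P1 @ rho @ P1)

print(np.allclose(before, after))   # True: B's measurement sends no signal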
Here, you suppose that the sign of $t_A - t_B$ can change, but that would mean that A and B are separated by a space-like interval.
But then the "state" would have to propagate from A to B, or from B to A, across a space-like interval.
So it is not a physically acceptable state.
If, instead, your state is an acceptable physical state, then A and B are separated by a time-like interval, and the sign of $t_A - t_B$ is always the same.
Spring 2013 Courses
• Visit Anchor to search for courses by title, instructor, department, and more.
103. Introductory Physics I
An introduction to the conservation laws, forces, and interactions that govern the dynamics of particles and systems. Shows how a small set of fundamental principles and interactions allow us to model a wide variety of physical situations, using both classical and modern concepts. A prime goal of the course is to have the participants learn to actively connect the concepts with the modeling process. Three hours of laboratory work per week. To ensure proper placement, students are expected to have taken the physics placement examination prior to registering for Physics 103.
104. Introductory Physics II
Dale Syphers M 10:30 - 11:25, W 10:30 - 11:25, F 10:30 - 11:25 Searles-315
An introduction to the interactions of matter and radiation. Topics include the classical and quantum physics of electromagnetic radiation and its interaction with matter, quantum properties of atoms, and atomic and nuclear spectra. Three hours of laboratory work per week will include an introduction to the use of electronic instrumentation.
107. Introductory Astronomy
Yuk Tung Liu M 8:30 - 9:25, W 8:30 - 9:25, F 8:30 - 9:25 Searles-315
A quantitative introduction to astronomy with emphasis on stars and the structures they form, from binaries to galaxies. Topics include the night sky, the solar system, stellar structure and evolution, white dwarfs, neutron stars, black holes, and the expansion of the universe. Several nighttime observing sessions required. Does not satisfy pre-med or other science departments’ requirements for a second course in physics. Not open to students who have credit for Physics 62 or Physics 162.
224. Quantum Physics and Relativity
An introduction to two cornerstones of twentieth-century physics, quantum mechanics, and special relativity. The introduction to wave mechanics includes solutions to the time-independent Schrödinger equation in one and three dimensions with applications. Topics in relativity include the Galilean and Einsteinian principles of relativity, the “paradoxes” of special relativity, Lorentz transformations, space-time invariants, and the relativistic dynamics of particles. Not open to students who have credit for or are concurrently taking Physics 275, 310, or 375.
229. Statistical Physics
Develops a framework capable of predicting the properties of systems with many particles. This framework, combined with simple atomic and molecular models, leads to an understanding of such concepts as entropy, temperature, and chemical potential. Some probability theory is developed as a mathematical tool.
240. Modern Electronics
Dale Syphers T 1:00 - 3:55, TH 1:00 - 3:55 Searles-316
A brief introduction to the physics of semiconductors and semiconductor devices, culminating in an understanding of the structure of integrated circuits. Topics include a description of currently available integrated circuits for analog and digital applications and their use in modern electronic instrumentation. Weekly laboratory exercises with integrated circuits.
280. Nuclear and Particle Physics
Stephen Naculich M 1:30 - 2:25, W 1:30 - 2:25, F 1:30 - 2:25 Searles-313
An introduction to the physics of subatomic systems, with a particular emphasis on the standard model of elementary particles and their interactions. Basic concepts in quantum mechanics and special relativity are introduced as needed.
301. Methods of Experimental Physics
Madeleine Msall T 1:00 - 3:55, TH 1:00 - 3:55 Searles-021
Intended to provide advanced students with experience in the design, execution, and analysis of laboratory experiments. Projects in optical holography, nuclear physics, cryogenics, and materials physics are developed by the students.
357. The Physics of Climate
Mark Battle M 11:30 - 12:25, W 11:30 - 12:25, F 11:30 - 12:25 Searles-313
370. Advanced Mechanics
Yuk Tung Liu M 10:30 - 11:25, W 10:30 - 11:25, F 10:30 - 11:25 Searles-115
A thorough review of particle dynamics, followed by the development of Lagrange’s and Hamilton’s equations and their applications to rigid body motion and the oscillations of coupled systems. |
1ec9d6962c146ec9 | Differential equations in physics
1. Why are so many of the important equations of physics first or second order differential equations (Schrödinger, etc.)? Why are there few third or fourth order differential equations that describe the physical world?
2. jcsd
3. Good question :D
4. vanesch
vanesch 6,236
Staff Emeritus
Science Advisor
Gold Member
I can only second that !
Of course, there is an answer, but it is just begging the question. The answer is that spacetime has symmetries of translation and uniform motion. This makes you pick "positions" and "velocities" as initial conditions. But as to the question of *why* spacetime has these symmetries, the answer is then that these leave the laws of nature invariant, which are... second order diff. equations...
Last edited: Jun 12, 2008
5. Why do the laws of nature have to be second order diff. equations in order to be invariant?
6. vanesch
vanesch 6,236
Staff Emeritus
Science Advisor
Gold Member
No, it is the other way around (hence why this is begging the OP's question). BECAUSE we find that the laws of nature are second-order, we NOTICE that they are invariant under certain transformations, and hence we call these invariance laws, the invariance of spacetime.
If you look at Newton's law, F = m a, you see that it is invariant under translations : x' = x + u, and you see that it is invariant under uniform motion: x' = x + v t. This means that you should be able to choose a "u" and a "v", two initial conditions. Hence (at least) second order. However, Newton's equation is not invariant under uniform acceleration: x' = x + g t^2 is not an invariance of Newton's laws. So no third order.
So what came first ? Newton's law, from which we deduced invariance under translation and uniform motion ? Or invariance under translation and uniform motion, from which we knew that we would have a second-order differential equation ?
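A quick way to see this concretely is a symbolic check (a minimal sympy sketch; combining all three transformations into x' = x + u + v*t + g*t^2 is just for illustration):

[code]
import sympy as sp

t, u, v, g = sp.symbols('t u v g')
x = sp.Function('x')

# transformed coordinate: translation u, uniform motion v*t, acceleration g*t^2
xp = x(t) + u + v*t + g*t**2
print(sp.diff(xp, t, 2))   # -> Derivative(x(t), (t, 2)) + 2*g
[/code]

The acceleration picks up an extra 2g term, so F = ma keeps its form only when g = 0: translations and uniform motions are symmetries, uniform acceleration is not.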
7. rcgldr
rcgldr 7,396
Homework Helper
Somewhat related to the OP, in the case of motion, most physics related problems involve forces or accelerations, and in some cases (like aerodynamic drag), the equations for those forces or accelerations are too complicated to be integrated directly, so we're left with 2nd order differential equations. There is a 3rd order effect, jerk, which is the rate of change of acceleration. One case where the 3rd order effect (jerk) would be important is an automotive simulator, since there are flexible components in an automobile, such as the suspension and tires, and jerk affects how these components respond.
I'm not sure how often 3rd order effects exist but are ignored, as opposed to aspects of nature that are truly limited to 2nd order effects.
Another place I've seen the third derivative "jerk" and maybe even higher order derivatives considered is in designing the shape of automotive cams - since the follower or valvestem motion follows (or should follow) the shape of the cam, and acceptable valve seat wear depends on letting the valve down gently.
9. But if there is a non-zero jerk over time then surely there has to be a non-zero d^4x/dt^4 and a d^5x/dt^5, etc. Why are these not considered?
10. Given a solution to Newton's equation:
x''(t) = F(x,x'), x(0) = a , x'(0) = b
It is trivial to solve for x'''(t), etc, by simply differentiating. This accounts for the cases with non-zero jerk.
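Concretely, differentiating once with the chain rule gives

[tex]x''' = \frac{\partial F}{\partial x}\,x' + \frac{\partial F}{\partial x'}\,x''[/tex]

so the jerk, and every higher derivative, is already determined by the same position and velocity data that fix the acceleration.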
First-order equations cannot oscillate, they can only grow or shrink or shrink towards a limit point. Second-order equations can oscillate, and they always either become unbounded or go into a steady state (either a fixed point or a periodic oscillation) as t -> Infinity. Equations of order 3 and higher can have chaotic oscillations.
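For the last claim, here is a minimal numerical sketch (assuming Sprott's well-known "simplest chaotic jerk" equation x''' = -A x'' + (x')^2 - x with A ≈ 2.017; the initial condition is a guess, and some choices escape to infinity):

[code]
import numpy as np

def jerk_flow(s, A=2.017):
    # s = (x, x', x''); a third-order "jerk" flow reported to be chaotic
    x, v, a = s
    return np.array([v, a, -A * a + v**2 - x])

def rk4_step(f, s, dt):
    k1 = f(s); k2 = f(s + dt/2*k1); k3 = f(s + dt/2*k2); k4 = f(s + dt*k3)
    return s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

s = np.array([0.0, 0.0, 1.0])
xs = []
for _ in range(20000):
    s = rk4_step(jerk_flow, s, 0.01)
    xs.append(s[0])
# xs records x(t); for chaotic parameters it oscillates aperiodically
[/code]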
Remember that a system of n second order equations is equivalent to a single equation of order 2n. So a third order equation for modeling shock absorption is a way of abstracting a more complex process.
In quantum mechanics the justification is much more straightforward. Here we use a second order partial differential equation because its solutions are members of an infinite-dimensional vector space. The choice is arbitrary subject to the axioms of quantum mechanics. Another alternative is to use infinite matrices.
11. Andy Resnick
Andy Resnick 5,749
Science Advisor
This is a great question, and I've spent some time trying to come up with a decent reason. I don't have a complete answer, but I got a good hint from "Conceptual Foundations of Contemporary Relativity Theory", and here it goes:
Let's start with the basic dynamical force equations: Newton's gravitational and Maxwell's electrodynamic equations. Both of these have a form of
F = k/r^2
These equations are important because they relate *kinematic* things (distances, accelerations, velocities, charge, time...) to *dynamic* things (forces).
I need to stress that equations of motion are extremely fundamental things, much more fundamental than other types of equations (constitutive relations, dispersion relations, etc.). Things like Hamilton's equations, Lagrange's equations, Schrödinger's equation, the Einstein relations, etc. are rooted in equations of motion, even though they may superficially look more complicated.
Ok - F = k/r^2 is a fundamental equation in physics. It is also known that the 1/r^2 part is due to there being 3 spatial dimensions. So the equations of motion reflect a fundamental property of space.
Now here's the important part: the equation F = k/r^2 is an integral form of Poisson's equation [tex]\nabla^{2}\phi=\rho[/tex], or alternatively Laplace's equation [tex]\nabla^{2}\phi=0[/tex]. Consequently, my book makes the following assertion [pg 179], and this is the part I don't fully understand:
"Any physical law must be a partial differential equation containing no derivatives higher than the second, and that the law must be linear in the second derivative."
It goes on to state that "there are no known cases where third (or higher) order differential equations are required in basic laws, nor do we have conceptual resources for interpreting them, but nonlinear equations and combinations of first and second derivatives are known to occur".
Apparently Eddington felt that this was too restrictive, and the reason is "unwarranted bias". However, Schrödinger stated that "The great achievement of Newton's laws was to concentrate attention on the *second* derivatives- to suggest that *they*- not the first or third or fourth, not any other property of the motion- ought to be accounted for by the environment."
As I said, this is a fascinating topic, and I don't have a good answer for it. However, I think a good explanation is simply that the three-dimensional nature of space leads to physical laws being expressed as second-order differential equations. I would be most interested to hear from anyone else on this.
12. man...most of these ideas are waaay beyond me.
I've always just thought... well, diff eq's are math, and physics is the application of math to real life.
Well that really doesnt answer it, all I know is that if I see another heat equation problem anytime soon I just might jump off a bridge.
Everything in the entire world is calculus. Last year my Matlab professor showed us some of his projects on differential equations... apparently even the cruise control in everyone's cars is some sort of 4th order differential equation.
There is an equation in structural mechanics which is of 4th order in 1D (bending of a rod with a known cross-section), but the 2nd and 3rd derivatives have physical meaning, and can hence be used as boundary conditions. This is in contrast to quasi-corrections to differential equations, which give contributions like [tex]\nabla^5[/tex] etc., where the boundary conditions on these higher derivatives have no physical meaning. There is, for example, no meaning of [tex]\nabla^2[/tex] as a boundary condition in the Schrödinger equation. In short, this is the problem of defining "mathematically well formed" equations:
1) Meaningful boundary conditions.
2) Not many equations survive invariance under translation and rotation, as has been mentioned here, and those which do are the ones we know.
3) Most PDEs are derived from conservation of something physical, like charge, involving incoming flux through a surrounding surface. Applying the Gauss, Stokes and Green laws to transform to volume integrals leads to 2nd order derivatives like the Laplacian operator (sketched just below).
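As a one-line sketch of point 3): for a conserved density u with flux [tex]\mathbf{J} = -k\nabla u[/tex], the local conservation law [tex]u_t + \nabla\cdot\mathbf{J} = 0[/tex] immediately gives the second-order equation [tex]u_t = k\nabla^{2}u[/tex], and nothing in the construction produces derivatives beyond the second.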
14. Andy Resnick
Andy Resnick 5,749
Science Advisor
I agree there are higher-order equations in physics: the stream function is biharmonic, the Korteweg-de Vries equation, the elastic wave propagation equation; I'm sure there are many more. But none of those have been elevated to a physical *law*. The equations above are derived from more fundamental concepts.
15. This is a very interesting topic.
Maybe there are different laws involving higher order differential equations which are invariant under some other strange symmetries that we would never consider natural because of our specific view of the world? Maybe there are conscious beings passing through time in a different manner, perceiving a different kind of causality that doesn't affect ours, seeing a totally different world, and they would never see Newton's laws or anything similar as natural, and they would never imagine that there are beings like us travelling through the world in a different direction? And maybe there are more general laws in physics still to be discovered by us and by them, some laws that we might have in common. And maybe I'm just talking nonsense? Probably so...
Heat equation
The heat equation is an important partial differential equation which describes the variation of temperature in a given region over time. In the special case of heat propagation in an isotropic and homogeneous medium in 3-dimensional space, this equation is

u_t = k\,(u_{xx} + u_{yy} + u_{zz})

where
• u(t, x, y, z) is temperature as a function of time and space;
• u_t is the rate of change of temperature at a point over time;
• u_{xx}, u_{yy}, and u_{zz} are the second spatial derivatives of temperature in the x, y, and z directions, respectively.
The heat equation is a consequence of Fourier's law of heat conduction (see heat conduction).
To solve the heat equation, we also need to specify boundary conditions for u.
Solutions of the heat equation are characterized by a gradual smoothing of the initial temperature distribution by the flow of heat from warmer to colder areas of an object.
The heat equation is the prototypical example of a parabolic partial differential equation.
Using the Laplace operator, the heat equation can be generalized to
u_t = k\,\Delta u,
where the Laplace operator is taken in the spatial variables.
The heat equation governs heat diffusion, as well as other diffusive processes, such as particle diffusion. Although they are not diffusive in nature, some quantum mechanics problems are also governed by a mathematical analog of the heat equation (see below). It also can be used to model some processes in finance.
Solving the heat equation using Fourier series
The following solution technique for the heat equation was proposed by Joseph Fourier in his treatise Théorie analytique de la chaleur, published in 1822. Let us consider the heat equation for one space variable. This could be used to model heat conduction in a rod. The equation is
(1)\quad u_t = k\,u_{xx}
where u = u(t, x) is a function of two variables t and x. Here
• x is the space variable, so x ∈ [0,L], where L is the length of the rod.
• t is the time variable, so t ≥ 0.
We assume the initial condition
(2)\quad u(0,x) = f(x) \quad \forall x \in [0,L]
where the function f is given and the boundary conditions
(3)\quad u(t,0) = 0 = u(t,L) \quad \forall t > 0.
Let us attempt to find a solution of (1) which is not identically zero and satisfies the boundary conditions (3), but with the following additional property: u is a product in which the dependence of u on x and t is separated, that is:
(4)\quad u(t,x) = X(x)\,T(t).
This solution technique is called separation of variables. Substituting u back into equation (1),
\frac{T'(t)}{k\,T(t)} = \frac{X''(x)}{X(x)}.
Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value − λ. Thus:
(5)\quad T'(t) = -\lambda k\,T(t)
(6)\quad X''(x) = -\lambda X(x).
We will now show that nontrivial solutions of (6) cannot occur for values of λ ≤ 0:
1. Suppose that λ < 0. Then there exist real numbers B, C such that
X(x) = B e^{\sqrt{-\lambda}\,x} + C e^{-\sqrt{-\lambda}\,x}.
From (3) we get
X(0) = 0 = X(L),
and therefore B = 0 = C which implies u is identically 0.
2. Suppose that λ = 0. Then there exist real numbers B, C such that
X(x) = Bx + C.
From equation (3) we conclude in the same manner as in 1 that u is identically 0.
3. Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that
T(t) = A e^{-\lambda k t}
X(x) = B\sin(\sqrt{\lambda}\,x) + C\cos(\sqrt{\lambda}\,x).
From (3) we get C=0 and that for some positive integer n,
\sqrt{\lambda} = \frac{n\pi}{L}.
This solves the heat equation in the special case that the dependence of u has the special form (4).
In general, the sum of solutions to (1) which satisfy the boundary conditions (3) also satisfies (1) and (3). We can show that the solution to (1), (2) and (3) is given by
u(t,x) = \sum_{n=1}^{\infty} D_n \sin\!\left(\frac{n\pi x}{L}\right) e^{-\frac{n^2\pi^2 k t}{L^2}}
D_n = \frac{2}{L} \int_0^L f(x)\,\sin\frac{n\pi x}{L}\,dx.
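As a numerical illustration (a sketch added here, not part of the original article; the rod length, diffusivity, number of terms, and initial profile are arbitrary choices), the series solution can be evaluated by computing the coefficients D_n with quadrature:

    import numpy as np

    def heat_series_solution(f, L, k, x, t, n_terms=100):
        # u(t,x) = sum_n D_n sin(n pi x / L) exp(-n^2 pi^2 k t / L^2),
        # with D_n = (2/L) * integral_0^L f(s) sin(n pi s / L) ds.
        s = np.linspace(0.0, L, 2000)            # quadrature grid
        u = np.zeros_like(x, dtype=float)
        for n in range(1, n_terms + 1):
            D_n = (2.0 / L) * np.trapz(f(s) * np.sin(n * np.pi * s / L), s)
            u += D_n * np.sin(n * np.pi * x / L) * np.exp(-(n * np.pi / L) ** 2 * k * t)
        return u

    # Example: a rod of length 1 with a triangular initial temperature profile.
    L, k = 1.0, 0.01
    f = lambda s: np.minimum(s, L - s)
    x = np.linspace(0.0, L, 101)
    print(heat_series_solution(f, L, k, x, t=1.0)[:5])

At t = 0 the series reproduces the initial profile (up to truncation), and for growing t the solution decays toward zero, consistent with the smoothing behaviour described above.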
Generalizing the solution technique
The solution technique used above can be greatly extended to many other types of equations. The idea is that the operator u_{xx} with the zero boundary conditions can be represented in terms of its eigenvectors. This leads naturally to one of the basic ideas of the spectral theory of linear self-adjoint operators.
Consider the linear operator Δu = u_{xx}. The infinite sequence of functions
e_n(x) = \sqrt{\frac{2}{L}}\,\sin\frac{n\pi x}{L}
for n ≥ 1 are eigenvectors of Δ. Indeed
\Delta e_n = -\frac{n^2\pi^2}{L^2}\,e_n.
Moreover, any eigenvector f of Δ with the boundary conditions f(0) = f(L) = 0 is of the form e_n for some n ≥ 1. The functions e_n for n ≥ 1 form an orthonormal sequence with respect to a certain inner product on the space of real-valued functions on [0, L]. This means
\langle e_n, e_m \rangle = \int_0^L e_n(x)\,e_m(x)\,dx = \begin{cases} 0 & n \neq m \\ 1 & n = m \end{cases}
Finally, the sequence {e_n}_{n \in \mathbb{N}} spans a dense linear subspace of L^2(0, L). This shows that in effect we have diagonalized the operator Δ.
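The orthonormality relation can be checked numerically; the short sketch below (an added illustration with the arbitrary choice L = 1) approximates the inner products by quadrature:

    import numpy as np

    L = 1.0
    x = np.linspace(0.0, L, 4001)
    e = lambda n: np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

    # <e_n, e_m> should be 1 for n == m and 0 otherwise.
    for n, m in [(1, 1), (1, 2), (3, 3), (2, 5)]:
        print(n, m, round(np.trapz(e(n) * e(m), x), 6))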
Heat conduction in non-homogeneous anisotropic media
In general, the study of heat conduction is based on several principles. Heat flow is a form of energy flow, and as such it is meaningful to speak of the time rate of flow of heat into a region of space.
• The time rate of heat flow into a region V is given by a time-dependent quantity q_t(V). We assume q has a density, so that
q_t(V) = \int_V Q(t,x)\,dx
• Heat flow is a time-dependent vector function H(x) characterized as follows: the time rate of heat flowing through an infinitesimal surface element with area d S and with unit normal vector n is
\mathbf{H}(x) \cdot \mathbf{n}(x)\,dS
Thus the rate of heat flow into V is also given by the surface integral
q_t(V) = -\int_{\partial V} \mathbf{H}(x) \cdot \mathbf{n}(x)\,dS
where n(x) is the outward pointing normal vector at x.
• The Fourier law states that heat energy flow has the following linear dependence on the temperature gradient
\mathbf{H}(x) = -\mathbf{A}(x) \cdot [\operatorname{grad} u](x)
where A(x) is a 3 × 3 real matrix, which in fact is symmetric and non-negative.
By Green's theorem, the previous surface integral for heat flow into V can be transformed into the volume integral
q_t(V) = \int_{\partial V} \mathbf{A}(x)\,[\operatorname{grad} u](x) \cdot \mathbf{n}(x)\,dS = \int_V \sum_{i,j} \partial_{x_i}\!\left(a_{ij}(x)\,\partial_{x_j} u(t,x)\right) dx
• The time rate of temperature change at x is proportional to the heat flowing into an infinitesimal volume element, where the constant of proportionality depends on a constant κ:
\partial_t u(t,x) = \kappa(x)\,Q(t,x)
Putting these equations together gives the general equation of heat flow:
\partial_t u(t,x) = \kappa(x) \sum_{i,j} \partial_{x_i}\!\left(a_{ij}(x)\,\partial_{x_j} u(t,x)\right)
• The constant κ(x) is the inverse of the product of the specific heat of the substance at x and the density of the substance at x.
• In the case of an isotropic medium, the matrix A is a scalar matrix equal to thermal conductivity.
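The general equation of heat flow above can be discretized directly. The sketch below (an added illustration with arbitrary coefficients; it treats a 1D medium, so the matrix A(x) reduces to a scalar a(x)) advances ∂_t u = κ(x) ∂_x(a(x) ∂_x u) by explicit finite differences built from the same flux form used in the derivation:

    import numpy as np

    def step_heterogeneous(u, a, kappa, dx, dt):
        # One explicit Euler step of u_t = kappa(x) * d/dx( a(x) du/dx )
        # with fixed (Dirichlet) boundary values.
        a_mid = 0.5 * (a[1:] + a[:-1])            # a on staggered midpoints
        flux = -a_mid * np.diff(u) / dx           # H = -a du/dx
        u_new = u.copy()
        u_new[1:-1] -= dt * kappa[1:-1] * np.diff(flux) / dx
        return u_new

    # Toy medium whose conductivity doubles in the right half.
    n = 101
    x = np.linspace(0.0, 1.0, n)
    a = np.where(x < 0.5, 1.0, 2.0)
    kappa = np.ones(n)
    u = np.exp(-200.0 * (x - 0.5) ** 2)           # initial hot spot
    dx = x[1] - x[0]
    dt = 0.2 * dx ** 2 / a.max()                  # conservative stability bound
    for _ in range(500):
        u = step_heterogeneous(u, a, kappa, dx, dt)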
Particle diffusion
Particle diffusion equation
One can model particle diffusion by an equation involving either:
• the volumetric concentration of particles, denoted c, or
• the probability density function associated with the position of a single particle, denoted P.
In either case, one uses the heat equation
c_t = D\,\Delta c,
P_t = D\,\Delta P.
Both c and P are functions of position and time. D is the diffusion coefficient that controls the speed of the diffusive process, typically expressed in square meters per second.
The random trajectory of a single particle subject to the particle diffusion equation is a Brownian motion.
If a particle is placed at \vec R = \vec 0 at time t = 0, then the probability density function associated with the vector \vec R is the following:
P(\vec R, t) = G(\vec R, t) = \frac{1}{(4\pi D t)^{3/2}}\, e^{-\frac{\vec R^2}{4Dt}}
It is related to the probability density functions associated with each of its components R_x, R_y and R_z in the following way:
P(\vec R, t) = \frac{1}{(4\pi D t)^{3/2}}\, e^{-\frac{R_x^2 + R_y^2 + R_z^2}{4Dt}} = P(R_x, t)\,P(R_y, t)\,P(R_z, t)
The random variables R_x, R_y and R_z are distributed according to a normal distribution of mean 0 and variance 2Dt. In 3D, the random vector \vec R is distributed according to a normal distribution of mean \vec 0 and variance 6Dt.
At t = 0, the expression for P(\vec R, t) above is singular. The probability density function corresponding to the initial condition of a particle located at the known position \vec R = \vec 0 is the Dirac delta function, denoted \delta(\vec R) (the generalisation to 3D of the Dirac delta function is simply \delta(\vec R) = \delta(R_x)\,\delta(R_y)\,\delta(R_z)). The solution of the diffusion equation associated with this initial condition is also called a Green function.
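These statistics are easy to verify by sampling; in the sketch below (an added illustration with arbitrary D and t), each Cartesian component is drawn from a normal distribution of variance 2Dt, and the mean squared displacement comes out close to 6Dt:

    import numpy as np

    rng = np.random.default_rng(0)
    D, t, n_particles = 1.0, 2.0, 100_000

    # Each component of R is normal with mean 0 and variance 2*D*t,
    # so E[|R|^2] = 6*D*t in 3D.
    R = rng.normal(0.0, np.sqrt(2.0 * D * t), size=(n_particles, 3))
    print(np.mean(np.sum(R**2, axis=1)), "~", 6.0 * D * t)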
Historical origin of the diffusion equation
The particle diffusion equation was originally derived by Albert Einstein in 1905, who used it to model Brownian motion. The major article he published on this subject is the following:
• Einstein, A. "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen." Ann. Phys. 17, 549, 1905. [1]
Solving the diffusion equation through Green functions
Green functions are the solutions of the diffusion equation corresponding to the initial condition of a particle of known position. For another initial condition, the solution to the diffusion equation can be expressed as a decomposition on a set of Green Functions.
Say, for example, that at t=0 we have not only a particle located in a known position {\vec R}={\vec 0}, but instead a large number of particles, distributed according to a spatial concentration profile c({\vec R},t=0). Solving the diffusion equation will tell us how this profile will evolve with time.
Like any function, the initial concentration profile can be decomposed as an integral sum over Dirac delta functions:
c(\vec R, t=0) = \int c(\vec R^0, t=0)\,\delta(\vec R - \vec R^0)\,dR_x^0\,dR_y^0\,dR_z^0
At subsequent instants, given the linearity of the diffusion equation, the concentration profile becomes:
c(\vec R, t) = \int c(\vec R^0, t=0)\,G(\vec R - \vec R^0, t)\,dR_x^0\,dR_y^0\,dR_z^0, where G(\vec R - \vec R^0, t) is the Green function defined above.
Although it is most easily understood in the case of particle diffusion, where an initial condition corresponding to a Dirac delta function can be intuitively described as a particle located at a known position, such a decomposition of a solution into Green functions can be generalized to any diffusive process, like heat transfer or momentum diffusion, the phenomenon at the origin of viscosity in liquids.
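To make the decomposition concrete, the sketch below (an added illustration; it uses a 1D analogue with arbitrary parameters) propagates an initial box profile by discretely convolving it with the Gaussian Green function:

    import numpy as np
    from scipy.signal import fftconvolve

    # 1D analogue: c(x,t) = integral c(x0,0) G(x-x0,t) dx0, with the 1D
    # Gaussian kernel G(x,t) = exp(-x^2/(4 D t)) / sqrt(4 pi D t).
    D, t, dx = 0.5, 0.1, 0.01
    x = np.arange(-5.0, 5.0, dx)
    c0 = ((x > -1.0) & (x < 1.0)).astype(float)       # initial box profile
    G = np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)
    c = fftconvolve(c0, G, mode="same") * dx          # discrete convolution
    print(np.trapz(c, x), "~", np.trapz(c0, x))       # particle number is conserved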
Schrödinger equation for a free particle
With a simple division, the Schrödinger equation for a single particle of mass m in the absence of any applied force field can be rewritten in the following way:
\psi_t = \frac{i\hbar}{2m}\,\Delta\psi, where i is the imaginary unit, \hbar is Planck's constant divided by 2\pi, and \psi is the wavefunction of the particle.
This equation is a mathematical analogue of the particle diffusion equation, which one obtains through the following transformation:
c(\vec R, t) \to \psi(\vec R, t)
D \to \frac{i\hbar}{2m}
Applying this transformation to the expressions of the Green functions determined in the case of particle diffusion yields the Green functions of the Schrödinger equation, which in turn can be used to obtain the wavefunction at any time through an integral on the wavefunction at t=0:
\psi(\vec R, t) = \int \psi(\vec R^0, t=0)\,G(\vec R - \vec R^0, t)\,dR_x^0\,dR_y^0\,dR_z^0, with
G(\vec R, t) = \frac{m^{3/2}}{(2\pi i\hbar t)^{3/2}}\, e^{-\frac{m \vec R^2}{2 i\hbar t}}
Remark: this analogy between quantum mechanics and diffusion is a purely mathematical one. In physics, the evolution of the wavefunction according to Schrödinger equation is not a diffusive process.
Diffusion (of particles, heat, momentum...) describes the return to global thermodynamic equilibrium of an inhomogeneous system, and as such is a time-irreversible phenomenon, associated with an increase in the entropy of the universe: in the case of particle diffusion, if c(\vec R, t) is a solution of the diffusion equation, then c(\vec R, -t) is not. Intuitively we know that particle diffusion tends to smooth out spatial concentration inhomogeneities, never to amplify them.
As a generalization of classical mechanics, quantum mechanics involves only time-reversible phenomena: if \psi ({\vec R},t) is a solution of the Schrödinger equation, then the complex conjugate of \psi ({\vec R},-t) is also a solution. Note that the complex conjugate of a wavefunction has the exact same physical meaning as the wavefunction itself: the two react exactly in the same way to any series of quantum measurements.
It is the imaginary nature of the equivalent diffusion coefficient, i\hbar/(2m), that accounts for this difference in behavior between quantum and diffusive systems.
On a related note, it is interesting to observe that the imaginary exponentials appearing in the Green functions associated with the Schrödinger equation create interference between the various components of the decomposition of the wavefunction. This is a signature of the wavelike properties of quantum particles.
The heat equation arises in the modeling of a number of phenomena and is often used in financial mathematics in the modeling of options. The famous Black-Scholes option pricing model's differential equation can be transformed into the heat equation allowing relatively easy solutions from a familiar body of mathematics. Many of the extensions to the simple option models do not have closed form solutions and thus must be solved numerically to obtain a modeled option price. The heat equation can be efficiently solved numerically using the Crank-Nicolson method and this method can be extended to many of the models with no closed form solution. (Wilmott, 1995)
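For reference, here is a minimal sketch of one Crank-Nicolson time step for u_t = k u_{xx} with zero boundary values (an added illustration, not tied to any particular option model; the scheme is unconditionally stable and second-order accurate in time):

    import numpy as np
    from scipy.linalg import solve_banded

    def crank_nicolson_step(u, k, dx, dt):
        # Solve (I - r*Lap) u_new = (I + r*Lap) u_old on the interior points,
        # with u = 0 held at both ends.
        r = k * dt / (2.0 * dx**2)
        n = len(u) - 2                       # number of interior points
        ab = np.zeros((3, n))                # banded (tridiagonal) matrix
        ab[0, 1:] = -r                       # upper diagonal
        ab[1, :] = 1.0 + 2.0 * r             # main diagonal
        ab[2, :-1] = -r                      # lower diagonal
        rhs = r * u[:-2] + (1.0 - 2.0 * r) * u[1:-1] + r * u[2:]
        u_new = u.copy()
        u_new[1:-1] = solve_banded((1, 1), ab, rhs)
        return u_new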
An abstract form of heat equation on manifolds provides a major approach to the Atiyah-Singer index theorem, and has led to much further work on heat equations in Riemannian geometry.
References
• Wilmott, P., Howison, S., Dewynne, J. (1995) The Mathematics of Financial Derivatives: A Student Introduction. Cambridge University Press.
|
239a91de40f7362a | zbMATH — the first resource for mathematics
On homoclinic structure and numerically induced chaos for the nonlinear Schrödinger equation. (English) Zbl 0707.35141
Authors’ summary: It has recently been demonstrated that standard discretizations of the cubic nonlinear Schrödinger (NLS) equation may lead to spurious numerical behaviour. In particular, the origins of numerically induced chaos and the loss of spatial symmetry are related to the homoclinic structure associated with the NLS equation. In this paper, an analytic description of the homoclinic structure via soliton-type solutions is provided and some consequences for numerical computations are demonstrated. Differences between an integrable discretization and standard discretizations are highlighted.
Reviewer: A.D.Osborne
35Q55 NLS-like (nonlinear Schrödinger) equations
35Q51 Soliton-like equations
35B35 Stability of solutions of PDE
65Z05 Applications of numerical analysis to physics
Full Text: DOI |
e2fdfe0030fc96fc | Graphene is a simple honeycomb lattice of carbon that displays a number of remarkable physical properties. The essential reason for all the fascinating properties of graphene is that the low energy excitations behave as if governed not by the Schrödinger equation, but instead by an effective Dirac-Weyl equation. The latter equation describes particles of zero rest mass, which therefore travel at the speed of light.
Much of the strange physics of this equation transfers to the physics of graphene, producing several novel properties (Klein tunnelling, absence of backscattering). On the other hand, the more humble origin of this effective equation can itself still be seen in several places in the physics of graphene, which therefore consists of a mixture of exotic Dirac-Weyl physics and standard solid state physics.
Note that by an effective equation we simply mean that while the electrons are, of course, governed by the Schrödinger equation, they conspire in some way to produce low-energy behavior that appears as if they were governed by a quite different equation. Since we are dealing with an effective equation, the Dirac-Weyl equation that describes low-energy graphene excitations contains not the speed of light c, but an effective constant (which turns out to be just the Fermi velocity).
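To make this concrete, the short sketch below (added here, not from the original page; the hopping strength and lattice constant are illustrative values) evaluates the standard nearest-neighbour tight-binding dispersion E(k) = ±t|1 + e^{ik·a1} + e^{ik·a2}| near a Dirac point; the energy grows linearly with the momentum measured from that point, and the slope plays the role of the effective "speed of light" (the Fermi velocity):

    import numpy as np

    t, a = 2.7, 1.0                          # hopping (eV) and C-C distance (arb.)
    a1 = a * np.array([1.5, np.sqrt(3.0) / 2.0])
    a2 = a * np.array([1.5, -np.sqrt(3.0) / 2.0])
    K = np.array([2.0 * np.pi / (3.0 * a),   # a Dirac point, where E(K) = 0
                  2.0 * np.pi / (3.0 * np.sqrt(3.0) * a)])

    def energy(k):
        f = 1.0 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
        return t * abs(f)

    for dq in (0.01, 0.005, 0.0025):
        q = K + np.array([dq, 0.0])
        print(dq, energy(q) / dq)            # ratio ~ constant => linear cone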
The following link provides a derivation of the origin of the Dirac-Weyl equation in graphene physics, and allows one to see how various properties of the quasi-particles, such as their chirality, emerge.
Continuum approximation description of graphene
Possible projects that can be done within the group are briefly described here:
Reduced density-matrix functional theory
An abstract for possible diploma projects can be found here
DFT - Developments
DFT - Applications
|
d13b6535745f8c44 | Quantum Gravity and String Theory
Quantized Space-Time and Internal Structure of Elementary Particles: a New Model
Authors: Hamid Reza Karimi
In this paper we present a model in which time and length are considered quantized. We try to explain the internal structure of the elementary particles in a new way. In this model a super-dimension is defined to separate the beginning and the end of each time and length quantum from other time and length quanta. The beginning and the end of the dimension of the elementary particles are located in this super-dimension. This model can describe the basic concepts of inertial mass and internal energy of the elementary particles in a better way. By applying this model, some basic calculations mentioned below can be done in a new way: 1- The charge of elementary particles such as electrons and protons can be calculated theoretically; up to now this quantity has only been measured experimentally. 2- By using the equation for the particle charge obtained in this model, the energy of the different layers of atoms such as hydrogen and helium is calculated; this approach is simpler than using the Schrödinger equation. 3- A calculation of the maximum speed of particles such as electrons and positrons in accelerators is given.
Comments: 23 pages.
Download: PDF
Submission history
[v1] 16 Nov 2009
|
23e1e51578a74ebe | Challenger Disaster / Computational Science / Feynman Diagram / Manhattan Project / Mathematics / Nobel Prize / Path Integral Formulation / Physics / Quantum Electrodynamic Theory / Quantum Theory / Richard Feynman / Superfluid Helium / Weak Decay / Weak Force
Notable Names: Richard Feynman
What defines genius? Real genius, not just the smart kid in the back of the class with all the answers. People like Galileo, Da Vinci, Einstein. The brilliant minds that take standard concepts, turn them upside down, and show us exactly why they never quite made sense to us before. They take two-dimensional images, and show us three-dimensional truths.
Feynman, explaining something cool.
Or in the case of Richard Feynman, they take the most basic bits of the universe, and give us quantum electrodynamics. Feynman was a brilliant mathematician and physicist, and arguably one of the greatest science lecturers of all time. Let’s delve for a bit, via Feynman, into the wacky, weird world of energy: the stuff everything you have ever known or interacted with (including yourself, and this computer screen!) is composed of.
Now, I’m no physicist, but listening to Feynman’s lectures and interviews motivates me to learn more about the big majestic mystery of our physical universe. Born in 1918 in New York, Feynman was an intelligent student who had mastered differential and integral calculus by the time he was 15. He was turned away from Columbia University before being accepted at the famed MIT in Cambridge, Massachusetts. After completing his bachelor’s, he then went on to Princeton, excelling constantly in physics, mathematics, and computational sciences. Indeed, his reputation for unprecedented thinking, clarifying lectures, and charming genius was so great that Albert Einstein himself attended his first graduate lecture. He was on his way to revolutionizing the field of physics, generating theories that are still being studied as our technology advances enough to measure them in laboratories. Feynman’s reputation even led him to the Manhattan Project, at the tender age of 24.
If you’re not into atomic or war history, the Manhattan Project was a secret project developed by the American government, that led to the creation of the first atomic bomb. The Manhattan Project operated from 1942-1946 in Los Alamos, New Mexico, and Feynman was a major contributor in the theoretical and computational division. Feynman has said that his idea of assisting on the project with the purpose of defending the US against Germany and Japan (who were supposed to be racing to develop the bomb first), should have dissipated when the threat did. He continued on with the work, stating that he was driven by solving the problem, not thinking deeply about the moral complications. He was also present at the Trinity Bomb test – the first atomic explosion, and the official inception of the Atomic Age. Shortly after, and despite the pleading of Robert Oppenheimer (head of the Los Alamos lab) to stay and continue contributing, Feynman took a post at Cornell briefly. He claimed he was uninspired by the atmosphere and close to burning out intellectually there, so he took a post at Cal Tech, where he ended up doing some of his best research. This includes:
• a model of weak decay: The ‘weak’ interaction is one of the four fundamental forces of the universe, along with the strong nuclear, electromagnetic, and gravity. The interactions of these forces control all the little bits of our universe that cannot be broken down any further; the rules that regulate our most basic building blocks (that we know of). According to the Standard Model, these are known as quarks, leptons, gauge bosons and the Higgs boson (you may have heard about the Higgs boson, as it has been appearing quite frequently in the news; it is the only undiscovered particle of these, and scientists are quite close to finding it, thanks to the Large Hadron Collider’s incredible technology). While gravity is the most commonly known force to us regular folks, the weak force controls quarks and leptons – known collectively as ‘fermions’ because they are particles of matter, not light. The weak force controls both radioactive decay and hydrogen fusion – the force allowing the sun to shine, and all life to live. You may not think it’s that important, but without the weak force there is no you, because there would be no universe, no sun, no energy to get that tan in the summer! A classic example of weak decay is when a neutron breaks down into a proton, electron, and anti-neutrino. Feynman ultimately developed a new and succinctly described model for this decay, incorporating ideas that had been lacking before.
• physics of the superfluidity of supercooled liquid helium: Helium is the second most abundant element in the observable universe, and its behaviour is amongst the strangest of all. It also has the unique property of having some of the lowest boiling and melting points: -269°C and -272°C respectively. In liquid form, helium had been observed to behave rather bizarrely when it was cooled slightly below the boiling point (check out this excellent video for a visual representation). Feynman didn’t solve the whole problem, but applied the Schrödinger equation successfully to display the quantum mechanical behaviour on a macroscopic scale (I’ll try to briefly explain quantum mechanics in a moment).
• quantum electrodynamics: This is the work Feynman is best known for, and for which he won a joint Nobel Prize in 1965. The quantum world itself is a section of physics that deals in the tiniest part of matter we know about – atoms. It’s a bizarre world that breaks down all the other rules that govern our everyday life. The five main ideas behind quantum theory are:
A) Energy is not continuous, but moves in small, discrete bundles.
B) Elementary particles move like matter AND waves (excellent video explaining this crazy phenomenon here).
C) This movement is intrinsically random.
D) It is impossible to know the location and momentum of a particle at the same time – the more precisely one is known, the less precise the other measurement is.
E) The quantum world is absolutely nothing like the one we live in.
Feynman was one of the founding fathers of the Quantum Electrodynamic Theory. While complicated, it basically describes (through mathematics) all interactions of light with matter, and of charged particles (a subatomic particle or ion with an electric charge) with one another. It was important because it was the first theory to cohesively integrate Einstein’s special relativity theory into each equation, as well as satisfying the Schrödinger equation (a problem that Paul Dirac and Norman Wiener, two scientists who had developed the theory previously, were unable to solve).
The three main concepts of Feynman’s QED theory are that: A) a photon goes from a location and time to another location and time, B) an electron goes from a location and time to another location and time, and C) an electron emits or absorbs a photon at a certain place and time. OK – what does that mean? To help explain these, Feynman came up with the self-named Feynman diagrams.
Feynman Diagram Elements.
Feynman Diagram (simple).
The first image shows us the symbols of parts A, B, or C of his theory. The second shows us an example of a Feynman diagram – an ‘electron-positron annihilation’. Not to be mistaken for a Star Trek battle, this is when a negative electron (e−) and its opposite, a positive electron (a positron, e+), collide. This results in the annihilation of both, and photons are sent shooting out from the collision. Feynman’s theories and his well-known diagrams make ideas like this clearer, and more accessible visually to a large portion of the mathematically-disinclined population. Keep in mind, these diagrams are not set paths – just simplified suggestions representing potential quantum relationships symbolically.
It’s important to note that QED theory doesn’t tell you what will happen, but predicts the probability of what will happen. In quantum mechanics, this means that you add up the sum of all possibilities, to any given endpoint, and predict the probability of the end result based on this total sum. We can loosely think of this as taking a random walk. You’ve had a bad day at work and want to clear your mind. Without knowing your final destination, you decide to cross the road to the other side, which happens to be infinite. Your brain is (hopefully!) measuring where potholes in the road you may have to avoid are, and the probability of whether or not you will get hit by a car. Your brain then tells you when to finally move, and on what path. Your exact footsteps are not predictable, nor is where or when you will step onto the sidewalk, but your brain has calculated the possibilities. And if you were a quantum particle participating in the theory, you would end up with a path and endpoint that were the sum of all possibilities. This computational method was referred to by Feynman as the path integral formulation, and stands in contrast to previous theories that predicted a single, unique trajectory. This formula helps us to understand (or at least diversify) our understanding of the movement of the very tiny little building blocks of our universe.
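As a loose, purely classical illustration of "adding up possibilities" (a toy of mine, not Feynman’s: his sum is over complex amplitudes, not probabilities), here is a tiny Monte Carlo that estimates the chance a 10-step random walk ends two steps to the right by averaging over many random paths:

    import random

    random.seed(1)
    n_steps, target, n_paths = 10, 2, 200_000
    hits = sum(
        1 for _ in range(n_paths)
        if sum(random.choice((-1, 1)) for _ in range(n_steps)) == target
    )
    print(hits / n_paths)   # ~ C(10,6)/2**10 = 0.205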
Phew. If I have confused you, I’m sorry. I’m a bit confused myself at this point! Particles here, mathematics all over the chalkboard, what does that mean when I need to drag myself out of bed and go to work to feed the kids? The quantum world is difficult to grasp, and I would suspect that it’s still somewhat difficult even for the most brilliant of minds like Feynman. But that doesn’t mean its existence is irrelevant. It in fact informs everything about our lives, our composition, our beautiful planet tucked away here in this tiny corner of the universe. If our goal is to know ourselves, understanding the smallest bits is surely important, difficult as it may be. I’m sure this was one of Feynman’s motivating factors.
While working on all of these ideas and more, Feynman also dedicated a large portion of his career to teaching. While still at Cal Tech, he was asked to get the undergraduates really involved and appreciative of physics. After several years of work, this resulted in the extremely accessible, beautiful, and inspiring Feynman’s Lectures on Physics which I highly recommend if you have the remotest interest in physics. Perhaps it will clear up any confusion I may have left you floundering in today!
Now, I barely understand a percent of the incredible problems that Feynman naturally intuited, thought about deeply, and solved. However, the reason I appreciate him and his success as a physicist is due not only to his inherent genius, but also to his understanding of human nature. He was always open to new ideas and subjects, and constantly engaged his whole brain with love, academics, and artists – even creating some art himself under the pseudonym of ‘Ofey’. Watching his interviews and documentaries is always a pleasure, as he somehow manages to circumvent the common way of thinking, and present what have otherwise been very difficult concepts as clear and simple. Feynman has always managed to grasp the type of mind required to appreciate the universe – curious and humourous. As one of his colleagues best described, when you hear Feynman speak, you understand clearly the science behind physics. Once you leave the room however, you find yourself struggling to follow the same pathway that Feynman drew in your brain. I’d suspect it’s because few of us have ever taken that path before, and were so amazed by the beautiful things Feynman was showing us, that we forgot to remember the path. If we were to work hard enough though, we may be able to figure out the average probability to get back (A Feynman pun!).
Richard Feynman continued to revolutionize and bring physics to light (another pun!) for the rest of us. He served on the commission investigating the Challenger disaster of ’86, and raised awareness of the huge discrepancies between the NASA management teams and their poorly informed understanding of physics. In his rather stark review, he says quite truthfully, “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”
Feynman died from several rare forms of cancer at the age of 69, in Los Angeles. His last words, in true humourous form: “I’d hate to die twice. It’s so boring.”
In memory of true genius, Richard P. Feynman 1918-1988.
What is necessary “for the very existence of science” and so forth, and what the characteristics of nature are, are not to be determined by pompous preconditions. They are determined always by the material with which we work, by nature herself. We look, and we see what we find, and we cannot say ahead of time -successfully- what it is going to look like.
The most reasonable possibilities turn out often not to be the situation.
What is necessary for the very existence of science is just the ability to experiment, the honesty in reporting results -the results must be reported without somebody saying what they’d like the results to have had been rather than what they are- and finally -an important thing-, the intelligence to interpret the results but an important point about this intelligence is that it should not be sure ahead of time what must be.
8 thoughts on “Notable Names: Richard Feynman”
1. I think the truth of science is the fact that there is no truth. Science is the process of finding truth. The whole adage of journey, not destination.
It leaves it open for all possibilities.
2. Truly an incredible man, and you pay a reasoned tribute to his elegant life. On his involvement with the atomic bomb, I find it fascinating to hear his description of the joyful celebrations at Los Alamos the night the bombs dropped in Japan — that his mood was actually the mood and mental state of so many at that time, that the bomb had accomplished good. His later serious reflections on that, and analogies to the leveling of New York City, demonstrate his genius even more.
Feynman was able to apply his intellect inwardly to his own actions as well as outwardly to the natural world, and arrive logically at regret. Special man who understood his place in history, but SURE didn’t take it too seriously…! Bongos:
3. Thanks everyone! Physics are fascinating, and Feynman’s relationship with the rules of our universe are also pretty incredible. Makes you think twice about what you’re seeing, and why.
4. Thank you for your efforts to keep a great man’s memory alive, and for the food for thought. I have spent a lot of time sitting on the bookstore floor reading his lectures and have bought all I could find. 🙂
|
0054f1ad995e5d2e | Research ArticleOPTICS
Deconvolution of optical multidimensional coherent spectra
Science Advances 01 Jun 2018:
Vol. 4, no. 6, eaar7697
DOI: 10.1126/sciadv.aar7697
Optical coherent multidimensional spectroscopy is a powerful technique for unraveling complex and congested spectra by spreading them across multiple dimensions, removing the effects of inhomogeneity, and revealing underlying correlations. As the technique matures, the focus is shifting from understanding the technique itself to using it to probe the underlying dynamics in the system being studied. However, these dynamics can be difficult to discern because they are convolved with the nonlinear optical response of the system. Inspired by methods used to deblur images, we present a method for deconvolving the underlying dynamics from the optical response. To demonstrate the method, we extract the many-particle diffusion Green’s functions for excitons in a semiconductor quantum well from two-dimensional coherent spectra.
Optical coherent multidimensional spectroscopy (CMDS) has become an established tool for investigating material properties (1–12). It has been applied on a wide range of materials including photosynthetic complexes (1, 4), colloidal and epitaxial semiconductor quantum dots (6, 13–17), atomic vapors (18), semiconductor quantum wells (7, 11, 12, 19, 20), metal surfaces (21), and two-dimensional (2D) materials (22–24). Physical processes that are accessible through the additional spectral dimensions include energy transfer, coherent coupling, relaxation, and dipole-dipole interaction (4, 14, 18, 24, 25). Furthermore, the homogeneous and inhomogeneous linewidths can be determined separately (7, 23, 26, 27), giving access to microscopic dephasing and distributions.
Although CMDS has been successful, it remains a niche technique because of difficulty in understanding the rich spectra and obtaining insight into underlying material properties. Any insight is typically realized by comparison to theoretical results, which must incorporate both the elaborate theoretical tools needed to calculate the spectra (7, 28–30) and the particular materials and processes being studied. Furthermore, the information of interest is often obscured by spectral broadening such that interpretation of the spectra is sometimes described as “blobology.” This situation is further exacerbated by the lack of understanding about CMDS outside the spectroscopy community.
To address this challenge, we propose a new paradigm for analyzing coherent multidimensional spectra. Our approach is inspired by methods developed in imaging (31–34) to deconvolve the effects of the imaging instrument from the acquired image. In a similar fashion, the effects due to the spectroscopic method can be deconvolved from the acquired spectra to reveal underlying material properties and dynamics. The deconvolution requires a theoretical description of CMDS that accounts for the presence/absence and strength of coherences, incoherent processes, the nature of the eigenstates, optical selection rules, and functions that parametrize the material model. Details inaccessible to the spectroscopy are averaged out. On the basis of this theoretical description, algorithms can be developed that implement deconvolution techniques to extract the functions parametrizing the model for the material. A few general descriptions, and hence algorithms, should cover a wide range of CMDS methods and materials. In the future, when more algorithms are developed and the approach matures, it may be possible to produce standardized programs that can be used by experimentalists as part of the routine data processing. Extracting a material’s properties in this way is easier and more intuitive for a non-expert in CMDS to interpret and allows for direct comparison with theoretical models of the underlying physical phenomena.
The method that we present here as proof of principle does not assume a specific form for the Green’s function, which describes energy flow in the material, and thus is suitable for a continuum, noninvertible cases, and ill-posed, ambiguous cases including noise. A different approach has been proposed for the inversion of 2D spectra for a few coupled pigments (35). In this case, the population transfer matrix is uniquely determined, which assumes a specific form of the Green’s function formulated in terms of relaxation rates and line shape. This assumption restricts the applicability to a few discrete states. Our method is also extendable to the discrete case but is particularly well suited for congested states and even continua or when dark states are important. However, the presented formulas and algorithms will require modifications and extensions for applications/materials beyond the presented example material type, where the assumptions used in the derivation do not hold.
To demonstrate this paradigm for analyzing CMDS, we select the example of spectral diffusion of the exciton distribution in a disordered semiconductor quantum well. For this example, a theoretical model and experimental data already exist (25). Since incoherent exciton relaxation dominates the evolution of the spectra, only the spectral line shapes and relaxation Green’s functions enter the simplified model (see the Supplementary Materials), making it a good candidate for demonstrating this concept. The extracted line-shape function contains the inhomogeneous distribution and energy-dependent homogeneous line shape. Prior efforts to analyze CMDS peak shapes only extracted constants characterizing inhomogeneous and homogeneous broadening, which assumes certain functional forms (26, 27).
An exciton is an electron-hole pair bound by the Coulomb attraction but free to move as a unit. Confining them in a quantum well increases their oscillator strength, and hence their optical nonlinearity, which provides a strong signal in a CMDS experiment (5, 12, 19, 20). Real quantum wells always have some degree of disorder, primarily due to fluctuations in the well thickness, which results in localization of some states and a mobility edge marking the gradual transition from localized to delocalized states (25, 36). The corresponding variation in energy produces inhomogeneous broadening in the optical spectrum of the excitonic resonance. Spectral diffusion results from spatial migration of excitons among the states, often mediated by acoustic phonons, including across the mobility edge.
The specific 2D coherent spectra presented here are produced by exciting a sample with a sequence of three cocircularly polarized pulses with wave vectors k1, k2, and k3, as illustrated in Fig. 1A. Their interaction gives rise to a signal in the direction kI = −k1 + k2 + k3, which corresponds to a photon echo if the pulse with wave vector k1 arrives first. 2D spectra can be generated by measuring the amplitude and phase of the signal as a function of both τ, the time between pulses k1 and k2, and τ’, the time over which the signal is emitted, and by performing a 2D Fourier transform. During the delay T between pulses k2 and k3, exciton relaxation processes can occur. Spectral diffusion from the initial absorption energy to the final emission energy of the excitons can be tracked through the evolution of the 2D spectra as a function of T. Cocircularly polarized pulses are used to avoid the participation of bound biexcitons. Furthermore, coherences during T and exciton-exciton interactions do not significantly influence the spectra. Since the exciton resonance is spectrally narrow, we neglect effects of finite pulse bandwidth. Furthermore, the phonon system is assumed to be a bath with constant temperature.
Fig. 1 Pulse sequence of a 2D photon echo and visualization of optical transformation.
(A) The pulse sequence applied in the 2D photon echo spectroscopy. (B) Visualization of the transformation from an object O(x′, y′) to an image I(x, y) using the convolution with the PSF cf. Eq. 2.
Considering these assumptions and approximations, the 2D spectrum is given by Eq. 1, which, based on the sum-over-states approach (28), expresses the spectrum as a two-dimensional convolution of the relaxation Green’s function with the line-shape function; the frequencies Ω1 and Ω2 result from Fourier transformation with respect to τ and τ’ (see the Supplementary Materials for the derivation of Eq. 1 and eq. S33 for a detailed discussion of the validity range of Eq. 1).
The line-shape function, L(Δω, ω), depends on the exciton frequency ω and on Δω, the detuning from ω. The line-shape function describes the two-time correlation function of the absorption and emission processes (37). The relaxation Green’s function, G(ω1, ω2; T), is the probability that an excitation absorbed at ω1 is emitted at ω2 after time T. Extracting G(ω1, ω2; T) is the main goal because it captures the exciton redistribution dynamics, which give rise to the spectral diffusion. Equation 1 shows that G(ω1, ω2; T) is convolved in two dimensions with L(Δω, ω), so we must find a way to deconvolve them.
The problem of 2D deconvolution has been addressed in image processing. Specifically, the image of an object O(x′, y′) can be represented as
I(x, y) = ∫∫ PSF(x − x′, y − y′) O(x′, y′) dx′ dy′,   (2)
where the point spread function (PSF) describes the effect of the optical apparatus (see Fig. 1B). The PSF is often extracted by using the image of a point source to enable reconstruction of the original O(x′, y′) from the image.
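For context, a classic deblurring scheme of this kind is Richardson-Lucy iteration. The sketch below is a generic illustration added here, not the algorithm used in this work; the PSF is assumed non-negative and normalized to unit sum:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=50):
        # Alternate between blurring the current estimate with the PSF and
        # correcting it with the ratio of the data to the blurred estimate.
        estimate = np.full_like(image, image.mean())
        psf_flip = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)
            estimate *= fftconvolve(ratio, psf_flip, mode="same")
        return estimate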
The structural similarity between Eqs. 1 and 2 suggests that methods used to reconstruct images might be applicable to 2D spectra. However, there is no equivalent to a point source for a 2D spectrum. Thus, we need a different strategy for determining L(Δω, ω) from a spectrum. For zero waiting time T ≈ 0, G(ω1, ω2; T ≈ 0) = δ(ω1 − ω2)/D(ω1), where δ(x) is the Dirac delta distribution and D(ω) is the density of states, and thus the spectrum reduces to the diagonal form of Eq. 3, which depends only on a combined line-shape quantity. The corresponding spectrum is shown in Fig. 2A, with the inhomogeneous width along the diagonal and the homogeneous width in the cross-diagonal direction. Since Eq. 3 depends only on this combined quantity, we can extract it using an optimization algorithm, as described in the Supplementary Materials. If we can also extract D(ω), then L(Δω, ω) can be determined.
Fig. 2 Photon echo spectra and data extracted from T = 0 ps.
(A and B) Normalized experimental photon echo spectra (absolute value) for T = 0 ps and T = 20 ps at 5 K. (C) Reconstructed relaxation Green’s function for T = 0 ps at 5 K. (D) Absolute value of line-shape function L(Δω, ω) for 20 K. (E and F) Rescaled line-shape function L(Δω, ω)/|L(0, ω)| for 5 and 20 K, respectively, with the corresponding oscillator strength L(0, ω) and D(ω) given as inset. In (D), (E), and (F), the gray lines mark the area with low reconstruction error.
To extract D(ω), we need to use a different spectroscopic measurement that also depends on D(ω) in conjunction with the line-shape quantity. One possibility is the linear absorption spectrum (Eq. 4), from which we extract D(ω) using an optimization algorithm.
Examples of the input spectra and extracted functions used in the deconvolution are given in Fig. 2. Figure 2D shows a reconstructed line-shape function. The absolute error between the calculated spectrum and the experimental data is minimized. As a result, the quality of the extracted line-shape function is only good in areas with large signals. In areas with lower signal strength, the reconstructed line-shape function may have random phase jumps and oscillations, resulting later in artifacts in the reconstructed Green’s function. L(Δω, ω) includes the line shape along Δω and the oscillator strength distribution multiplied by the density of states along ω. In Fig. 2 (E and F), the line shape L(Δω, ω)/|L(0, ω)|, the oscillator strength L(0, ω)/D(ω), and density of states D(ω) are plotted separately for temperatures 5 and 20 K. We focus on high oscillator strength areas with low reconstruction error ranging from 1543.8 to 1545.2 meV, marked by the gray lines. For 5 K, the linewidth increases with increasing energy since an increased number of scattering states are reachable for higher energies due to the increasing D(ω). For 20 K, the linewidth is broader than for 5 K, as expected, and the width stays almost constant inside the trusted area. This broadening results from the higher bath temperature that opens most scattering channels for lower exciton states, which do not contribute at 5 K. All states inside the distribution have a similar lifetime. The oscillator strength distributions are very similar for both temperatures, as expected.
After extracting the line-shape function L(Δω, ω) and the density of states D(ω), we are now ready to extract the relaxation Green’s function, G(ω1, ω2; T), from Eq. 1 using an optimization algorithm. The parts of the Green’s function connected to bright states are successfully extracted, whereas those connected to dark states do not contribute to the signal. Thus, only part of the full Green’s function is successfully extracted, and the overall probability is not conserved since relaxation involving the dark states and exciton recombination occurs. In the following discussions, we focus on the area with sufficient oscillator strength for valid reconstruction (the area between 1543.8 and 1545.2 meV). For energies lower than 1543.8 meV, no excitons and therefore no oscillator strength exist in the quantum well, and thus, many spurious features appear in the Green’s functions in this spectral region. For energies higher than 1545.2 meV, a continuum of excitons with smoothly decreasing oscillator strength exists; therefore, distortion above 1545.2 meV is expected to be smaller but can still lead to false results. For T = 0 ps (see Fig. 2C), a perfect reconstruction would lead to a strict diagonal shape. The deviations from the expected diagonal shape will be used for T > 0 ps as an indicator of problems such as spurious features or noise.
In Fig. 3 (A and B), the Green’s function for the 1s exciton relaxation is shown for T = 10 and 20 ps at a temperature of 20 K. The reconstructed G(ω1, ω2; T) is compared to the simulated result using the theory from the study of Singh et al. (25). The details of the reconstruction of the Green’s function are ambiguous, so it is possible that multiple Green’s functions reproduce the experimental spectrum equally well. The ambiguity represents the resolution limit and is influenced by the width of the line shapes, as well as the discretization and resolution of the experimental data. This ambiguity causes visible (oscillatory) noise in the reconstructed Green’s functions (examples of the ambiguous reconstruction can be found in the Supplementary Materials). Starting at T = 10 ps, off-diagonal contributions (around the horizontal line) in the Green’s function show exciton redistribution (spectral diffusion), almost covering a weak diagonal contribution. After 10 ps, the excitons are broadly distributed over more localized states with larger oscillator strength and lower energy, closer to the initially excited energy and nonequilibrium temperature. After longer delay times, the maxima of the distributions move toward higher energy until the maxima converge at the same final energy for different initial energies (visible as a horizontal feature parallel to the abscissa), which reflects the quasi-equilibrium distribution at the lattice temperature.
Fig. 3 Reconstructed and simulated relaxation Green’s function at 20 K.
(A and B) Reconstructed relaxation Green’s functions G(ω1, ω2; T) for the initial energy ℏω1 and the final energy ℏω2 for T = 10 ps and T = 20 ps at 20 K. (C and D) Corresponding simulated relaxation Green’s functions. (The exciton–acoustic-phonon scattering in the second order Born-Markov approximation overestimates the relaxation times by a factor of 2 at 20 K (25); therefore, we use T = 5 and 10 ps from theory for the comparison.) In (A) and (B), the green contour line shows off-diagonal contribution at T = 0 ps; its presence indicates areas that may contain large artifacts and spurious features. Diagonal and horizontal lines provide visual guidance.
The simulated Green’s function shown in Fig. 3 (C and D) shows qualitatively the same behavior and prominent features, such as the disappearing diagonal and the horizontal off-diagonal contribution moving toward higher final energies. It includes scattering of exciton with acoustic phonons and radiative recombination in a disorder potential (25, 36). Since we know that the model with exciton–acoustic-phonon scattering in the second order Born-Markov approximation overestimates the relaxation times by a factor of 2 at 20 K (25), we use T = 5 and 10 ps from theory for the comparison. Overall, the Green’s function from simulation is much smoother than the reconstructed Green’s function.
At 5 K in Figs. 2C and 4 (A and B), the diagonal is more dominant than for 20 K. It is clearly visible in the Green’s function for T = 0 and T = 10 ps, since only a few excitons have recombined and scattered. It is much sharper in the Green’s function than in the spectra in Fig. 2 (A and B), highlighting the success of the deconvolution. The decay of the diagonal contribution is slower, as in the high temperature case, with no redistribution of the off-diagonal contribution to higher energies for longer delay times. Instead, the distribution moves toward lower temperatures for longer delay times compared to the higher initial excitation, reflecting the lower bath temperature. Limitations in reconstructing the off-diagonal distribution with lower amplitude near the high-amplitude diagonal appear in the plot. The high-amplitude diagonal masks part of the low-amplitude contribution and generates echoes of the diagonal along the off-diagonal, visible above and below the diagonal. Again, the simulated relaxation Green’s function in Fig. 4 (C and D) shows qualitatively the same behavior as the reconstructed one for 5 K; the quantitative agreement is also better than at 20 K, since the second order Born-Markov approximation is more suitable for lower temperatures. However, we observe a stronger contribution above the diagonal (relaxation toward higher energies) in the extracted data than in the simulated data. We believe that this is caused partially by the larger reconstruction error from low oscillator strength in the area, which is seen as false signal at T = 0 ps as well. Higher (hot)–phonon temperature in the experiment caused by the excitation may be another reason, but we believe that it is mainly caused by reconstruction errors.
Fig. 4 Reconstructed and simulated relaxation Green’s function at 5 K.
(A and B) Reconstructed relaxation Green’s functions G(ω1, ω2; T) for the initial energy ℏω1 and the final energy ℏω2 for T = 10 ps and T = 20 ps at 5 K. (C and D) Corresponding simulated relaxation Green’s functions. In (A) and (B), the green contour line shows off-diagonal contribution at T = 0 ps, and its presence indicates areas that may contain large artifacts and spurious features. Diagonal and horizontal lines provide visual guidance.
In conclusion, we have extracted Green’s functions from CMDS using deconvolution methods inspired by those developed for image processing. This approach allows a direct comparison between theoretical and simulated Green’s functions, which can be calculated by specialists in materials physics theory without requiring that they also become experts in the spectroscopic method. We illustrated this procedure for spectral diffusion inside the exciton manifold of a semiconductor quantum well, producing good agreement and enhanced insight into the processes occurring in the system. We were able to extract the energy-dependent homogeneous line shape and the oscillator strength.
The photon echo signal was generated by a sequence of three actively phase-stabilized cocircularly polarized excitation pulses with wave vectors k1, k2, and k3 (cf. Fig. 1A). The photon echo signal was collected in the direction kI = −k1 + k2 + k3. The signal was heterodyned with a reference pulse and detected through spectral interferometry to measure both amplitude and phase. The signal was recorded as delay τ was scanned, and delay T was kept constant. The signal was then Fourier-transformed with respect to τ. The sample was a four-period 10-nm-wide GaAs quantum well with 10-nm-wide Al0.3Ga0.7As barriers. The excitation was restricted to the heavy-hole exciton resonance with ~150-fs-long pulses. The experiment was carried out for different sample temperatures 5 K to 20 K using a sample-in-vapor helium flow cryostat [cf. study of Singh et al. (25) for more details].
The simulation used a sum-over-states treatment analogous to the study of Abramavicius et al. (28) for calculating the spectra. The exciton wave functions were obtained from a numerical solution of the 2D Schrödinger equations in relative and in center of mass coordinates. The calculation of the wave functions included Coulomb interaction and a random disorder potential caused by quantum well width fluctuations (36). The exciton wave function was used for calculating radiative and exciton-phonon scattering rates in the second-order Born-Markov approximation (36). The density matrix equations of motion were then solved numerically using the Portable, Extensible Toolkit for Scientific Computation (PETSc) library (38, 39) for obtaining the relaxation Green’s functions. In the end, the resulting quantities were averaged over several random realizations [cf. study of Singh et al. (25) for more details].
Extraction algorithms
For extracting a quantity f, such as the line-shape function or the Green’s function, first a cost function C(f) was defined. The major contribution to the cost function C(f) is the error between the quantity calculated from f (for example, a spectrum) and the experimental data. Other contributions to the cost function C(f) ensured specific features of f, such as a specific functional form, smoothness, etc.
The cost function was then minimized using the TAO (Toolkit for Advanced Optimization) package from PETSc (38–40), applying suitable constraints. More details about the used cost functions and the extraction procedure are provided in the Supplementary Materials.
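The structure of such a cost function can be sketched as follows (a hedged illustration added here: the authors use PETSc/TAO rather than SciPy, and model(), standing in for the forward calculation of a spectrum from the sought quantity f, is hypothetical):

    import numpy as np
    from scipy.optimize import minimize

    def make_cost(data, model, smooth_weight=1e-3):
        # C(f) = misfit against the experimental data plus a smoothness penalty.
        def cost(f):
            misfit = np.sum((model(f) - data) ** 2)
            smooth = np.sum(np.diff(f) ** 2)
            return misfit + smooth_weight * smooth
        return cost

    # usage: result = minimize(make_cost(data, model), f0, method="L-BFGS-B")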
Supplementary material for this article is available at
Supplementary Text: Reconstruction algorithms
Supplementary Text: Derivation of material model
fig. S1. Different reconstructed Green’s functions G(ω1, ω2; T) at 5 K and T = 10 ps.
fig. S2. Contributing pathways to the photon echo signal.
Acknowledgments: This work was inspired, in part, by the suggestions of R. Merlin (U. Michigan). Funding: The work at University of Michigan and JILA was primarily supported by the Chemical Sciences, Geosciences, and Energy Biosciences Division, Office of Basic Energy Science, Office of Science, U.S. Department of Energy under award no. DE-FG02-02ER15346 and no. DE-SC0015782. The work at Technische Universität Berlin was supported by the Deutsche Forschungsgemeinschaft through SFB 951 B12 and GRK 1558 A4. Author contributions: S.T.C. conceived the experimental concept. R.S. and M.S. ran the experiments. M.R. designed and calculated the simulation and the extraction algorithms. S.T.C. and M.R. wrote the manuscript. All authors discussed the results and commented on the manuscript. The discussions of all authors lead to the idea of the extraction algorithms. Competing interests: S.T.C. is an inventor on a patent application related to this work filed by the University of Michigan (no. 20180073856, 15 September 2017). All the other authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
|
07fa14040c884051 | The Full Wiki
Atomic number: Quiz
Question 1: The atomic number, Z, should not be confused with the mass number, A, which is the total number of protons and ________ in the nucleus of an atom.
Atomic nucleus / Neutron / Electron / Neutrino
Question 2: Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes), the ________ of an atom is roughly equal to A.
Atomic mass / Molar mass / Chemistry / Oxygen
Question 3: Among other things, Moseley demonstrated that the lanthanide series (from ________ to lutetium inclusive) must have 15 members — no fewer and no more — which was far from obvious from the chemistry at that time.
Question 4: This led to the conclusion (Moseley's law) that the atomic number does closely correspond (with an offset of one unit for K-lines, in Moseley's work) to the calculated ________ of the nucleus, i.e.
Magnetic field / Electric charge / Electric current / Electromagnetism
Question 5: Atoms having the same atomic number Z but different neutron number N, and hence different atomic mass, are known as ________.
Stable nuclide / Actinoid / Technetium / Isotope
Question 6: In an atom of neutral charge, the atomic number is also equal to the number of ________.
Question 7: Most naturally occurring elements exist as a mixture of isotopes, and the average atomic mass of this mixture determines the element's ________.
Boron / Atomic weight / Fluorine / Avogadro constant
Question 8: In general, the ________ becomes shorter as atomic number increases, though an "island of stability" may exist for undiscovered isotopes with certain numbers of protons and neutrons.
Half-life / Cosmic ray / Radioactive decay / Nuclear fission
Question 9: The configuration of these electrons follows from the principles of ________.
Quantum mechanics / Wave–particle duality / Schrödinger equation / Introduction to quantum mechanics
Question 10: In chemistry and physics, the atomic number (also known as the proton number) is the number of protons found in the nucleus of an ________ and therefore identical to the charge number of the nucleus.
|
1bdfaa90e056aa03 | quantum mechanics powell crasemann pdf
Summary
In this chapter, we have shown the limitations of classical mechanics and the success of quantum mechanics. The basic concepts and formalism of quantum mechanics have been presented, including the quantized nature of the electromagnetic field, wave-particle duality, the probability of presence of a particle, the wavefunction, and the Schrödinger equation. Simple quantum mechanical systems have been analyzed to understand these novel concepts, including an infinite and a finite potential well. Through these, the major aspects associated with quantum mechanics have been discussed, including the quantization of energy levels and momenta, and tunneling effects. (Kluwer Academic Publishers, 2002.)
Bibliography
Powell, J.L. and Crasemann, B., Quantum Mechanics, Addison-Wesley, Reading, Mass., 1961.
Cohen-Tannoudji, C., Diu, B., and Laloë, F., Quantum Mechanics, John Wiley & Sons, New York, 1977.
Liboff, R.L., Introductory Quantum Mechanics, Addison-Wesley, Reading, Mass., 1998.
Davydov, A.S., Quantum Mechanics, Pergamon, New York, 1965.
Ziman, J.M., Elements of Advanced Quantum Theory, Cambridge University Press, London, 1969.
McKelvey, J.P., Solid State and Semiconductor Physics, Harper and Row, New York, 1966.
Kittel, C., Introduction to Solid State Physics, John Wiley & Sons, New York, 1976.
Dalven, R., Introduction to Applied Solid State Physics: Topics in the Applications of Semiconductors, Superconductors, Ferromagnetism, and the Nonlinear Optical Properties of Solids, Plenum Press, New York, 1990.
Pierret, R.F., Advanced Semiconductor Fundamentals, Addison-Wesley, Reading, Mass., 1989.
Further reading
Bastard, G., Wave Mechanics Applied to Semiconductor Heterostructures, Halsted Press, New York, 1988. |
264b2f3c1178f116 | When I make martinis, the recipe I use is 2.5 fluid ounces1 of gin2 and 1/2 fluid ounce dry vermouth3, shaken or stirred4 with 7 ice cubes5, then strained into a cocktail glass (I mix until it's cold enough, and not for some specific length of time.)
My Observation
If I use less ice, the martini is more diluted; if I use more ice, the martini is stronger (too strong in fact—I prefer some dilution.)
My Hypothesis
With a lot of ice, the drink chills very fast, not giving the ice as much time to melt, and vice-versa.
My Friends
think I am crazy about the amount of ice[6]. They claim that (a) the ice has to melt to chill the drink so no matter how much ice I start with, the same amount is melted to reach the desired temperature, and (b) I had to retake Physics 2A in college[7], so what do I know anyway?
I pointed out that I could chill the drink with really cold rocks (a.k.a. whiskey stones) and it could get just as cold with no melting and maybe they are missing something.
The answers to these questions seem to support my friends' argument that the melting of the ice is the overwhelming contributor to the cooling:
However my experiment seems to demonstrate otherwise.
My Experiment
I tested this by making six martinis, two each with 4, 7, and 10 ice cubes. I used the same amount of gin and vermouth in each. I weighed[8] the ingredients before adding them to the mixing cup. I stirred until the desired temperature[9] was achieved. Then I weighed the amount after straining.
My Result
The 4-cube martinis gained more weight than the 7-cube martinis, which in turn gained more weight than the 10-cube martinis. I assume the weight gain was the melted ice. My friends happily drank all the martinis, but remained unconvinced of my Physics acumen, either theoretical or experimental[10].
My Question
Am I exhibiting confirmation bias and my stupid friends are right? Or is there an explanation for my hypothesis and observation and they should finally shut up about my having to retake Physics 2A because come on it was like 30 years ago already and besides, I'm not using Planck's constant to calculate how long it takes atoms to slow to a halt, I'm just making martinis!
Is the ratio of nearly 2:1 ice-to-liquid a factor? How about the constant mixing? Or is this dependent on the starting temperature of the ice and even though the warming of the ice contributes very little, it is enough to affect the outcome?
1. Apologies for the use of American measurements in this scholarly context, but that's what my utensils are labeled with.
2. No apologies for the insistence on gin. If you want to use vodka or make some other cocktail and call it a martini, we have nothing further to discuss.
3. I use a 5:1 ratio. Others may quibble. They are wrong.
4. Seriously, I do not care. Can I please continue?
5. For my ice trays, this is 5 fluid ounces of water, prior to freezing, in a plain old kitchen freezer.
6. Although they are all too happy to drink the martinis I make.
7. These are friends that I went to college with so I can't argue that point, but there were three intramural softball playoff games the weekend before finals, so when was I going to study? But I digress...
8. Postal scale, accurate to 1/10 (American again) ounce, sorry.
9. 28 degrees American Fahrenheit on my kitchen thermometer that measures to 1/10 degree, but accuracy unknown.
10. I am willing to rerun that experiment as long as necessary until I get it right!
I appreciate the answers and I am accepting the answer from @cyberx86 because (a) it independently supports my insistence that 7 ice cubes is The Right Number, and (b) What kind of physics site would we have if we didn't reward showing your work?
However no one really "put a bow on it" so I'll add
My Conclusions
Since the final temperature of the martini drops below 0° C and there is also ice melting, both melting (latent heat) and sub-zero warming of the ice (sensible heat) must be absorbing heat together during the mixing.
The measurable change in the amount of melting in this case comes from the added heat-absorption capacity of more ice, but only because the ice starts below 0° C.
• 11
$\begingroup$ I love physics (ex-scientist, PhD) and I cry over how physics is taught at schools. It is not a surprise that a lot of children do not like it because of just that: examples of logs slipping on a slope instead of real-life problems like this one. They end up with a learnt-by-heart Schrödinger equation and do not understand the kWh on their electricity bill. This is to say that the question is great on its own, but the reasoning and experimental effort behind is even greater. $\endgroup$ – WoJ Jul 28 '17 at 6:38
• 2
$\begingroup$ Wait, let me understand: when you say that the 4-cube martinis gained more weight you mean after removing what remained of the ice cubes? $\endgroup$ – valerio Jul 28 '17 at 9:09
• $\begingroup$ Everyone is leaving out a huge factor in how ice behaves in drinks: surface area. Larger ice cubes with smaller surface area will cool a drink more with less dilution than shaved ice, which will melt quite quickly by comparison. This is why high end bars go through the extra effort to make and offer spherical ice that just barely fits in a tumbler. Not practical for a martini glass, but the size and shape of the ice is a factor. $\endgroup$ – Todd Wilcox Jul 28 '17 at 12:33
• 1
$\begingroup$ @valerio92, yes the gain was after removing the ice by straining the martini into a cocktail glass, but before adding an olive. $\endgroup$ – bmb Jul 28 '17 at 15:20
In order for the martini to cool, heat is transferred from it into the ice. This results in the martini's temperature dropping and the ice's temperature rising. Once the ice reaches its melting point, its temperature will not rise; instead a state change occurs (this is the latent heat). It is only if the ice melts that water is added to your martini and dilutes it.
Here is an attempt at expressing this idea mathematically; there are lots of assumptions made.
$1 fl. oz = 0.0295735 L$
$c_{water} = 4184 J/kg^{\circ}C$
$D_{water} = 1 kg/L$
$c_{ice} = 2108 J/kg^{\circ}C$
$L_{ice} = 333550 J/kg$
$c_{ethanol} = 2460 J/kg^{\circ}C$
$D_{ethanol} = 0.789 kg/L$
Additionally, the following assumptions are made:
• Once melted, the water is ignored in the temperature calculations
• Gin is assumed to be 45% ethanol and 55% water, other components are ignored
• Vermouth is assumed to be 18% ethanol and 82% water, other components are ignored
• All specific heat capacities of mixtures are calculated as weighted averages
• Freezer temperature is assumed to be -18 degrees Celsius
• Room temperature is assumed to be 20 degrees Celsius
${T_{initial}}_{martini} = 20^{\circ}C$
${T_{final}}_{martini} = -2.22^{\circ}C$
${T_{initial}}_{ice} = -18^{\circ}C$
${T_{final}}_{ice} = 0^{\circ}C$
$V_{gin} = 2.5 fl. oz = 0.0739 L $
$D_{gin} = 0.905 kg/L $
$m_{gin} = 0.0669 kg $
$c_{gin} = 3408.2 J/kg^{\circ}C $
$V_{vermouth} = 0.5 fl. oz = 0.0148 L $
$D_{vermouth} = 0.962 kg/L $
$m_{vermouth} = 0.0142 kg $
$c_{vermouth} = 3873.68 J/kg^{\circ}C $
$V_{martini} = 3 fl. oz = 0.0887 L $
$D_{martini} = 0.915 kg/L $
All of this, gives us the following numbers for the martini:
$m_{martini} = 0.0811 kg $
$c_{martini} = 3485.78 J/kg^{\circ}C $
The energy lost by the martini to bring it to its final temperature is:
$Q_{martini} = m_{martini}c_{martini}({T_{final}}_{martini}-{T_{initial}}_{martini})$
$Q_{martini} = (0.0811 kg)(3485.78 J/kg^{\circ}C)(-2.22^{\circ}C-20^{\circ}C)$
$Q_{martini} = -6285 J $
This means that the ice needs to absorb 6285 J of energy. Part of this will occur by an increase in the temperature of the ice, any additional energy will go into melting the ice.
The mass of an ice cube is based on 5 fl. oz for 7 ice cubes:
$m_{icecube} = 0.02112 kg $
The amount of energy each ice cube can absorb as its temperature rises to 0 degrees Celsius is:
$Q_{ice} = m_{ice}c_{ice}({T_{final}}_{ice}-{T_{initial}}_{ice})$
$Q_{ice} = (0.02112 kg)(2108 J/kg^{\circ}C)(0^{\circ}C-(-18^{\circ}C))$
$Q_{ice} = 801.5 J/cube $
The amount of energy each ice cube would absorb if it completely melted is:
$Q_{melt} = m_{ice}L_{ice}$
$Q_{melt} = (0.02112 kg)(333550 J/kg)$
$Q_{melt} = 7046 J/cube $
This means, that if we have enough ice cubes, assuming all absorb heat from the martini equally, that more ice cubes will be able to remove more heat before needing to melt. Additionally, this means that after a certain point, additional ice cubes shouldn't make an appreciable difference. (Assuming that the ice cubes are removed once the target temperature is reached, and not allowed to melt).
| Ice Cubes | Energy from Melting (J) | Water Added (mL) | Estimated Final Mass (g) |
| --- | --- | --- | --- |
| 1 | -5483 | 16 | 97.6 |
| 2 | -4681 | 14 | 95.2 |
| 3 | -3880 | 12 | 92.8 |
| 4 | -3078 | 9.2 | 90.4 |
| 5 | -2277 | 6.8 | 88.0 |
| 6 | -1475 | 4.4 | 85.6 |
| 7 | -673.8 | 2.0 | 83.2 |
| 8 | 127.7 | 0 | 81.1 |
| 9 | 929.2 | 0 | 81.1 |
| 10 | 1731 | 0 | 81.1 |
You should be able to verify the basic trend (mass decreases with additional ice cubes until a point which it plateaus) with the experimental setup you used. You can compare your final masses to those estimated in the table above, although I would expect your measured values to be somewhat higher than these. These numbers do not include the energy needed to cool the mixing vessel.
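For anyone who wants to reproduce the trend, here is a minimal sketch in Python using the constants and assumptions stated above (mass-fraction weighting of the specific heats, ice starting at -18 °C, mixing vessel ignored); the printed meltwater masses should approximately match the "Water Added" column:

```python
FL_OZ_L = 0.0295735                  # litres per US fluid ounce

c_water, c_ice, c_eth = 4184.0, 2108.0, 2460.0   # specific heats, J/(kg*C)
L_ice = 333550.0                                  # latent heat of fusion, J/kg

m_gin = 2.5 * FL_OZ_L * 0.905        # kg, gin density 0.905 kg/L
m_ver = 0.5 * FL_OZ_L * 0.962        # kg, vermouth density 0.962 kg/L
c_gin = 0.45 * c_eth + 0.55 * c_water   # weighted average, 45% ethanol
c_ver = 0.18 * c_eth + 0.82 * c_water   # weighted average, 18% ethanol

m_mart = m_gin + m_ver
c_mart = (m_gin * c_gin + m_ver * c_ver) / m_mart

# Heat the ice must absorb to chill the drink from 20 C down to -2.22 C.
q_needed = m_mart * c_mart * (20.0 - (-2.22))

m_cube = 5 * FL_OZ_L * 1.0 / 7       # kg per cube (5 fl oz of water makes 7 cubes)
q_warm = m_cube * c_ice * 18.0       # J each cube absorbs warming -18 C -> 0 C

for n in range(1, 11):
    melt_kg = max(0.0, q_needed - n * q_warm) / L_ice   # remainder must come from melting
    print(f"{n:2d} cubes: {melt_kg*1000:5.1f} g of meltwater added")
```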
• 3
$\begingroup$ Nice work, but also notice that this calculation ignores that ice cubes might melt at the outside while still being well below the melting point at the inside. However this doesn't change the trend. $\endgroup$ – Anedar Jul 28 '17 at 8:53
• 1
$\begingroup$ That is true - would be interested to know how much the ice actually melts. I would guess that a thin layer of martini, surrounding the ice, reaches an equilibrium temperature with it, and then slowly sinks due to its increased density. It seems that you might get varying amounts of ice melting depending on how the drink is mixed. $\endgroup$ – cyberx86 Jul 28 '17 at 9:05
• $\begingroup$ So it seems those other questions apply if the ice is close to 0 C, but each degree below 0 C increases the energy a cube can absorb before melting by about 44.5 J. This is .6 % of the contribution, so at -18 C, 11% of the cooling is from energy absorption. This is not as negligible as described in those other questions. Excellent answer; thanks for showing your work! $\endgroup$ – bmb Jul 28 '17 at 17:38
• If the ice is at 0 degrees Celsius, so that the energy to chill the drink is absorbed by the latent heat of fusion, then it seems to me that your friends are right. Cooling the gin by a certain amount requires you to melt a certain amount of ice, independently of how many ice cubes you use.
• If the ice is so cold that a single cube can chill the entire drink without melting, then again it doesn't matter how many you use (because there will be no melting).
• In the intermediate case, which is probably the realistic one if you're getting your ice from a home freezer, the ice will first cool the drink until it reaches its melting point, and then begin to melt. More ice cubes will allow the first part of this process to account for more of the cooling, so the drink will indeed be less watery.
• You should probably just also keep your gin in the freezer if you like martinis enough to go through all this.
• $\begingroup$ Thanks, AGML. The key part of your answer, "the ice will first cool the drink until it reaches its melting point, and then begin to melt" seems to be merely restating my hypothesis, "with a lot of ice, the drink chills very fast, not giving the ice as much time to melt." My friends, and the links to other questions, want to refute that by claiming that the warming of the ice does not contribute enough. Why do you think it does? $\endgroup$ – bmb Jul 28 '17 at 1:22
• 3
$\begingroup$ It's not the same because it has nothing to do with time. It depends on how much energy is required to bring the ice to its melting point. Whether the warming of the ice contributes "enough" depends on the concentration of water you feel is significant, the temperatures of the ice and gin, the shape and volume of the ice cubes, and maybe the purity of the ice. Very roughly, heating a gram of ice by 1 degree will heat 0.75 grams of gin by the same. Supposing the ice starts at -20C that could make a difference. $\endgroup$ – AGML Jul 28 '17 at 1:54
• 1
$\begingroup$ If it's not about time, why write "...first cool the drink until it reaches its melting point, and then begin to melt" Are you suggesting the ice just sits there not melting until the entire cube reaches 0 degrees C? $\endgroup$ – bmb Jul 28 '17 at 4:32
• 1
$\begingroup$ Yes, the entire cube will reach zero, or very near, quickly. When you drop them in water the surface does not start flaking off. They crack right through, and very quickly. It does not just all go liquid because of the great heat of fusion. It takes a lot of energy for the water to change from solid to liquid, and it does so without changing temperature. So, it has to melt from the outside in as heat becomes available. Ice starts at -20. Warms to zero. Stays at zero until it melts. I agree with AGML. With ice from a typical freezer, more cubes means less melt. Cool some cubes with dry ice. $\endgroup$ – C. Towne Springer Jul 28 '17 at 6:56
• $\begingroup$ Arguably gin from the freezer is too cold for martinis. Unless you're putting in a frankly unreasonable quantity of room-temperature vermouth. Of course, if it's too cold you can sit there stirring it until it's drinkable, whereas if it's warm you're ruined. An alternative approach, when you're concerned about dilution by ice, is to use overproof gin. $\endgroup$ – Steve Jessop Jul 28 '17 at 8:53
|
3589499312fc61bb | 11/27 – Danah Zohar: Quantum Leadership
Fresh Perspective / August-November 2013
Russ Volckmann
In the late 1970s and into the 1990s it seemed that more and more people were being drawn to some study of physics, quantum mechanics, chaos and complexity theories. We were reading The Tao of Physics. This led us to David Bohm (with and without Krishnamurti), Niels Bohr, Heisenberg, Feynman, Gleick, Sheldrake and on and on. It seemed that every nuance of these new sciences held promise that we would find new ways of understanding, being and doing in our own development and that of society, culture, and organizations. Ralph Stacey and Jeff Goldstein were early contributors to this work. And they have continued with a focus on complexity theory with folks like Jim Hazy, Mary Uhl-Bien and others. The work of Danah (pronounced "Donna") Zohar was significant through her books like The Quantum Self, The Quantum Society, and the ones she co-authored with her husband Ian Marshall on Spiritual Intelligence and Spiritual Capital. Currently her work includes trainings in quantum leadership.
Danah Zohar
Russ Volckmann
Russ: Danah, I am delighted to have the chance to talk with you. I read your Quantum Self in the early 1990s and have been intrigued by some of the work you've done. I've not read your most recent work on leadership, but I look forward to learning about that.
Danah: The Spiritual Capital book would probably be your favorite. That’s really my best book — well there are two on leadership: Rewiring the Corporate Brain and Spiritual Capital. But they’re both good actually. The last one catches you up with my thinking today on that subject.
Russ: The Quantum Self was certainly an important book. One of the things that you close with in that book is the idea of a quantum world view. Could you say what you mean by that?
Danah: Yes, I mean a quantum paradigm. In other words, the whole of our cultural sense, the scientific revolution going back to the late 16th century focused on Newton's work in the 17th century, had a deep impact on thinking in every other field for the next 300 years right up to this day. So when we talk about Newtonian physics, that's just physics. But there's something that everybody is talking about now – the mechanistic paradigm, the Newtonian paradigm. That's the way of thinking about psychology, society, economics and management in terms of the same categories, concepts and idea structures that form Newtonian physics. Thus, Freud always wanted to be the Newton of psychology; John Stuart Mill, who helped to write liberal political philosophy and capitalism, said that he owed everything that he ever knew to the incomparable Mr. Newton.
Frederick Taylor, the management thinker, was consciously very influenced by Newton. Modern cognitive science is still very influenced by the Newtonian paradigm. Everything is just material; we are just mind machines.
Consciousness is an illusion because it doesn't fit into the paradigm. It's fundamental shifts in science that herald fundamental shifts in culture, though it may take 100 or 200 years to catch up. The fundamental shift in physics was at the beginning of the 20th century with the discovery of the atom, the splitting of the atom and eventually quantum physics, which was mathematically formulated in 1927. This actually describes the fundamental categories of causality, perception, and relationship, David Bohm's work on implicate order and explicate order, the wave function and our world and so on.
Fundamental shifts in the sciences influenced me in a very strong way as a teenager and are now beginning to seep into the general culture. There is a whole new way to think about psychology, society, economics, leadership, and spirituality from a quantum perspective. A quantum paradigm is emerging. It’s not quantum physics itself that addresses these things. It doesn’t! It talks about elementary particles. But Newton wasn’t talking about all these things, either. He was just talking about particles and forces.
What is important is the way these scientific thoughts seep into the general consciousness and form a whole cultural paradigm. That’s what I think is now happening with this shift from the Newtonian or mechanistic paradigm to what could easily be called a quantum paradigm.
Russ: What was the path that brought you to quantum physics? Are you a physicist by training?
Danah: Yes, I read physics and philosophy at MIT. As a child, I had two passions: God and astronomy. I had an astronomy club in one of my grandmother’s spare chicken coops in the Ohio countryside. I went to the local country Methodist church with my grandparents every Sunday and was seriously into Jesus. By the age of 11, I had really begun to lose my faith in Christianity. I was quite lost for two years. My grades fell; I just seemed to be drifting. Then I discovered the atom at 8th grade science class and never looked back.
The atom led to nuclear physics, which then led to quantum physics. By the age of 15, I was reading quantum physics textbooks. Being a teenager with all the teenager’s normal angst and questions, I found myself without realizing it framing life’s big question in terms of quantum ideas. I got to MIT when I won a scholarship in physics, because I was one of those American childhood gadget scientific whiz kids. We have the National Science Fair, which used to be called the Westinghouse Talent Search. It’s something else now. But I won all those prizes and got to meet President Kennedy.
I had an atomic accelerator, a cloud chamber and a bubble chamber in my bedroom and was smashing atoms night and day and all that. I was a real monster case. Then I went to MIT with a scholarship in physics. Within the first year at MIT I realized that what really interested me wasn't being a practicing physicist, but more working out the philosophical implications of what was going on in physics. While it was unheard of at that time, MIT allowed me to do a double degree in philosophy and physics. Then I went on to grad school at Harvard and did three years of PhD work in philosophy, religion and psychology.
Russ: When was that? What years?
Danah: I was at MIT 62 to 66 and Harvard 66 to 69.
Russ: While you were doing that, I was at Berkeley.
Danah: That was the place to be in the 60s. I envy you. Our student revolutions were very mild compared to yours. What did you read at Berkeley, Russ?
Russ: Political science. I was a South Asia area specialist.
Danah: Fascinating! That was foresightful of you because it’s all the thing now.
Russ: You did a dissertation at Harvard, yes?
Danah: I didn’t finish my PhD, so I didn’t do a dissertation. My books are kind of my dissertations.
Russ: Who were some of the major influences on you at Harvard?
Danah: Erik Erikson was a very strong influence. I still remain strongly under the influence of two of my MIT professors: Bert Dreyfus, who went to Berkeley, and Sam Todes, who went to Northwestern. Todes didn't have a big reputation because he never wrote his great book – but he was a brilliant philosopher. Both were in phenomenology and existentialism. I greatly preferred that to analytical philosophy.
At Harvard I worked with Stanley Cavell, who was the Wittgenstein and Heidegger man there. I got a failing grade in Christian Theology from Reinhold Niebuhr's son Richard for declaring that the only true way to become a Christian is to become a Jew. He felt that I should apply my wide-ranging imagination to something other than Christian theology and failed me in the course.
At that point I converted to Judaism, being true to my own beliefs, and went off to live in Israel. I was a research fellow graduate student at Hebrew University for a couple of years, but I didn’t pay much attention to my studies. I got preoccupied with left wing pro-Palestine politics and started writing and journalism while in Israel. It was a very good transition time for me. I never thought about quantum physics for years until I met my husband when I was 31. He was a psychiatrist with a very strong background in physics and mathematics from Oxford. He was babbling on for seven years about quantum physics and consciousness. I thought why doesn’t he write about psychotherapy? What is he on about? Who cares?
Russ: Your husband’s name?
Danah: Ian Marshall, I wrote all my books with him.
Russ: Was he teaching at Oxford at the time?
Danah: No, he was a practicing psychiatrist. He earned his living with psychiatry and psychotherapy. But he gave lectures at Oxford on quantum physics to various seminars and was fully respected as a member of the physics community in Oxford. He was a very brilliant man. He is dead now.
Russ: I’m sorry.
Danah: Anyway, I had to go into hospital for major surgery after my second child was born. I was under anesthetic for five hours. When I began to come out of the anesthetic, the thought occurred to me straight away that if Ian is right, that changes absolutely everything. I wrote nonstop the outline of the Quantum Self, a subject I had not consciously been thinking about up to that time. The only books around on quantum philosophy provided a few strands. There was, of course, Fritjof Capra's The Tao of Physics, which broke the mold because Capra was the very first to say something about how quantum physics relates to something outside the laboratory, in his case Eastern philosophies. David Bohm, the famous quantum physicist from the 20th century who worked with Einstein and Robert Oppenheimer and then lived here in Britain, was writing in his quantum physics textbook back in the early 50s that there were striking similarities between the way quantum systems behave and the way that human consciousness behaves. He said that there seems to be more than mere coincidence and that the basis for a connection should be pursued.
That was in my subconscious since I read his book at 15, but that experience on waking from the anesthetic brought it to the fore again. I suddenly understood what my husband had been babbling on about. The only precursors to Bohm were the founding fathers of quantum physics, themselves, particularly Wolfgang Pauli who had a very close working relationship with Jung. He felt that you really couldn’t complete physics without psychology and you couldn’t really have a good psychology without physics, and he meant quantum physics. Schrödinger, in his famous book, What is Life talks about life in terms of quantum physics. These are the only precursors to the Quantum Self that I know about.
I wished at first to relate it to psychology, a model of the self and the various things I wrote about in that book. It was a first for that kind of thing. There were then, after it was published, a whole industry of follow-on books. So now there is a whole literature – much of it is not very good – but there’s a whole literature now on quantum this, that and the other – everything from quantum soup to quantum sex. These days quantum has come to mean cool.
Russ: Then there were the people working around that time who were focusing on the application to organizations and development and change, like Jeff Goldstein, Glenda Eoyang, and Ralph Stacey. Goldstein later moved his work more towards complexity theory.
Danah: I’ve come to think it’s a blend of complexity theory and quantum thinking that is relevant to leadership ideas. But there is a very fascinating scientific bridge between complexity and quantum physics that I think is particularly applicable with organizations: This is what systems complexity biologists call complex adaptive systems. These are living systems; all living systems are complex adaptive systems. They can be thought of as living quantum systems.
Russ: Which is why people started to read authors like Rupert Sheldrake and others at that time, as well.
Danah: Sheldrake touched the fringe of this, but he didn't know about complexity theory yet. He concentrated more on the morphogenetic field idea. But Sheldrake is playing the same ball game intuitively and indeed he got his idea from a paper my husband published on resonance phenomena and consciousness back in 1960. Only Sheldrake thought Ian was dead. He was very surprised to find that he was alive when we published the Quantum Self!
He only acknowledged Ian on the last footnote on the last page, which caused some bitterness. But even my husband back in 1960 was thinking about things like the Sheldrake book of resonance phenomena and fields. A lot of that now makes sense in terms of modern quantum field theory and complex adaptive systems, but neither my husband nor Sheldrake knew complexity theory – it’s newer than that. Ian certainly knew about quantum science and was inspired by that. But I think it’s only been since the Quantum Self was published that general kind of movement began to understand consciousness as somehow possibly linked to quantum activity in the brain.
We now know there is quantum biology. Very serious people think there is some kind of link between quantum physics and consciousness, but it’s not clear what it is yet. But these complex adaptive systems are a bridge between the two and leadership, because after all organizations are living systems. They are not machine systems as Taylor thought; they are living organic systems. Complex adaptive systems bring the properties of quantum phenomena into living systems. So, as I said, they can be thought of as living quantum systems. They are called complex systems, so there is a link between the two. My later work has focused on this and I present these ideas in Spiritual Capital.
Russ: There are two things then that are particularly of interest here. One is how you evolved your work into the whole idea of spiritual intelligence – you started publishing about that in 2000 – and then how you brought that work into complex adaptive systems, particularly in relation to leadership. So spiritual intelligence is, these days, a more current term than was the case when you and Ian were writing about it.
Danah: Oh, yes. My book was the first on spiritual intelligence, but now it’s quite a big field of interest. There have been dozens of follow-on books by others SQ, as I call it, is now used quite a bit in management and leadership thinking.
Russ: In your work on spiritual intelligence, you and Ian had a dozen principles that you had developed. Cindy Wigglesworth has 21 practices around spiritual intelligence in her more recent work. But the thing that interests me is your discussion of the relationship between cognitive intelligence or IQ, emotional intelligence or EQ and spiritual intelligence or SQ. Could you talk about that a little bit?
Danah: I can talk both personally and intellectually about it and both are my interests. I told you when we started that I had two major passions as a child, God and science, I never dropped the God stuff; I felt I could find it in some religion. So while I continued with my philosophy of physics, I also serially trawled the world’s religions looking for my spiritual home. I became first a Quaker, then a Unitarian, then a Jew, then a Buddhist. At some point along this line I had children who were exposed to all these journeys. They turned to me one day and asked, “Mommy, which is true? The Christians say Jesus is God and the Jews say Jesus is not God and the Buddhist say there isn’t a God at all. We’re confused, Mommy. What’s true?” I always try to answer my children honestly and I found I couldn’t give a clear answer to this. I didn’t want to say, “Well the one we belong to now.”
This led me to start thinking about my own spiritual odyssey. Why was I trawling through all these great roads of religion? What was I looking for? Then Daniel Goleman's book came out on EQ. I know the real work was done by Damasio, but Goleman's book is what broke the mold and brought Damasio's research to public attention.
Russ: It is an excellent report.
Danah: It’s well written. It’s written at a level people can read. Demacio is probably a far more brilliant scientist, but he isn’t a very good writer. So the Goleman book really broke the mold and, of course, affected me deeply. I thought about it a lot. I thought, “Well, you know, he attributes some things to emotional intelligence that aren’t really just emotion. Other things that he doesn’t talk about in emotional intelligence do exist in our intelligence framework.” I was also very influenced by Victor Frankl in Man’s Search for Meaning, I felt that I didn’t find what you would call in its broadest terms a spiritual dimension. Then I realized there is a further intelligence, spiritual intelligence.
SQ is involved with our pursuit of meaning, our need for a sense of higher purpose, a need for an overall context to our lives – kind of an overarching myth, if you like, something that makes sense of it all and will hold it all together. Ian and I were talking a lot about three kinds of thinking at the time – rational thinking, emotional thinking and quantum thinking – what would these be like if we were to think with quantum concepts and quantum categories? I just suddenly had this flash of an idea that this quantum intelligence is a spiritual intelligence, in that when you're doing quantum work, you do have to think of why you're doing the experiment and the effect of this conscious intention on the outcome of the experiment.
Consciousness and our sense of purpose are closely bound in the fundamentals of the science. Quantum cosmology has given us this whole story of the universe, from the quantum vacuum to us. I hadn’t gone as far as I have right now; I will just jump ahead for one second and go back. I’m now writing a book called Finding God within Physics: A new quantum spiritual vision for our times in which I’m really developing more ideas on this. But to go back, even at that time, I thought spiritual intelligence is somehow quantum intelligence. So then I looked for what might be the dynamic transformation principles of spiritual intelligence. I found it natural to look for them in the properties of complex adaptive systems.
There are ten striking characteristics of complex adaptive systems, including dialogue with the environment, self-organizing, creative use of genetic mistakes (so called), being holistic, things like these. I worked out the conscious equivalent to those. I turned self-organizing into self-awareness and the use of genetic mistakes into positive use of diversity and so on. I derived ten of my twelve principles of SQ from those properties. That's the link with quantum. It's seeing complex adaptive systems as living quantum systems. Then I added humility and a sense of compassion, because they seemed to me absolutely necessary to an authentic spiritual life. That's where I got my twelve principles. They are, in fact, principles of quantum intelligence, in other terms.
Russ: Then there is the relationship between the spiritual, the intellectual and the emotional. You see the spiritual as foundational. Is that because it is at the heart of the meaning making system?
Danah: It is the heart of the meaning making system in the spiritual terms of SQ. Of course, then it’s the meaning of life, the meaning of what I’m doing. You know, existential meaning. But there are equivalents in neuroscience.
Russ: Neuroscience?
Danah: There are phenomena in neuroscience that tie into all of this. They’ve discovered the things that bind the brain together are oscillations across the brain going at 40 Hertz or 40 cycles per second. All conscious systems – interestingly enough even a piece of tissue, a living tissue put in a Petri dish – are oscillating at 40 Hertz. A lot of people speculate that these might be the oscillations of the quantum field across the brain. Like everything else about the brain at the moment, that’s speculative. But it is one of the major strands of thought. Those 40 Hertz are at the basis of just simple concept formation. We take the millions of sensory data coming into our brain every second and bind them into some of the objects, concepts, purposes and so on. Even our IQ, which of course deals with concepts and abstract ones at that, is drawing on this 40 Hertz, possibly quantum intelligence or spiritual intelligence.
I’d rather call it quantum where it applies to IQ and EQ and call it spiritual when it applies to the existential domain. But I think scientifically and neurologically it’s probably all the same phenomenon and that it is some fundamental basis to our consciousness, whether our consciousness is expressed as IQ, EQ or SQ.
Russ: At one point you suggested that spiritual intelligence is foundational.
Danah: Yes. Well, I think I may be confusing you because I'm still working this out. I think I want to say that quantum intelligence is foundational and its spiritual component is spiritual intelligence. Its intellectual component is IQ; its emotional component is EQ, because you see this binding together in the brain – and it all has to do with what has been called "the binding problem" in the brain. Fifteen years ago there were all kinds of wild theories, because neuroscience had no solid idea how it is that we could take these millions of data per second bombarding us and form it into an organized world and an organized system of thought. The brain is all interconnected, but nothing neurological connects the brain across the whole brain. Then, it was Wolf Singer who did the major research on this and he discovered that there is this oscillating field sweeping across the brain and that it unites all neuro activity in the brain.
This is proper science; this isn’t speculation. This has been tested and written about in hundreds of research papers. Now we do know that this 40 Hertz oscillatory field somehow unites our conscious field for us. That is what I have been proposing as a quantum field, and other people do, too. In which case you can say that quantum field is a kind of binding intelligence or pattern making intelligence, meaning making intelligence, that then expresses itself in all the brain’s capacities.
Russ: One of the things that really impressed me about your work is the fact that you included attention not just to the individual but also to the collective. You are not excluding them. You are seeing them both as critical variables. Could you say a little about that?
Danah: Yes, that is quite critical to this whole quantum view of things. In quantum physics you have probably heard that everything is both wave-like and particle-like.
Russ: Yes.
Danah: But the mainstream quantum physics has had massive trouble bridging the gap between the wave and the particle aspect that shows itself on our level of reality. Why does the Schrödinger wave function, which is a wave of potentialities, infinite potentiality, suddenly become one thing, a particle? It’s called the measurement problem or the collapse of the wave function problem. Nobody has got an answer to it.
There are five main theories of how it happens. One of the two most interesting is the so-called Many Worlds theory – that the wave function doesn't collapse every time there is a bifurcation or choice made within the Schrödinger wave function. Instead, it all happens, each possibility in its own universe. So there are infinitely many universes out there and every time you make a decision there is a different Russ Volckmann now in a different universe, because there is another Russ Volckmann who didn't make that decision, and so it keeps bifurcating.
I think this is a bit mad. It's certainly metaphysically mad; it just doesn't come to anything interesting. The other interesting theory, which is coming back into vogue because of new experimental evidence, is David Bohm's theory about the implicate and the explicate orders. Have you heard of that?
Russ: Yes, absolutely.
Danah: Okay. The implicate order is essentially the level of reality of the Schrödinger wave function. The explicate order is the order of our material world, particles as it were.
Now for Bohm, there isn’t some radical border. For him the wave function doesn’t collapse. Rather. An aspect of the wave function peaks in a peak of energy. So here at the top of the peak you have a particle or what passes for a particle, it has a lot of characteristics of the particles. But for Bohm, the implicate aspect of the wave function still exists as possibilities that are pregnant within the particle. The implicate order continues to exist within the explicate order, the wave in the particle.
Danah: So I am Danah Zohar sitting here right now talking to Russ Volckmann on an Apple Macintosh computer. That potential to be other people with other characteristics and other thoughts is there in the incarnate me. For Bohm, this explained the problem of so-called action at a distance, because you know in non-locality particles seem to be linked across space and time even though no signal could possibly have passed between them. Bohm saw this as easy: they are not separate photons.
Waves and Particles
The photon is the peak here and there is another photon that is the peak there. But there is a wave spanning out and the waves overlap and that’s why the particles are correlated.
That’s where I got my notion of the wave aspect of the self and the particle aspect of the self. I am developing that much further in the book that I am writing now, because I understand it better. It has implications for relationships, which I did see in the quantum self, but also implications for identity, immortality or plausible immortality, because the Schrödinger equation, the wave aspect of myself is immortal. It’s eternal, it’s outside time and yet the particle aspect of me is clearly in time and I will die.
When I die, I don’t just disappear. It’s just that that the wave spreads out again and then it will go along. Now this happens in physics laboratories. I am also suggesting it happens to us. It goes along for a while, implicate. and then it peaks again. So Danah Zohar who peaked over here and then died emerges again not as Danah Zohar, maybe as a man this time, maybe as another type of being or something – God knows what – but as another of my potentialities. But I, Danah Zohar from here, I am still in the wave of whatever becomes in the next life.
This does happen in a laboratory. Particles go back into the implicate order, come up again in the explicate order. But while in the explicate order, they are influencing you and you are influencing them, too. This is called backward causation. There is never any split between past, present and future really in terms of the being of particles. Since everything in the universe has a wave function, including the universe itself, you and I have wave functions. Why isn’t it possible that we peak, so called die, but really just become implicate again, peak again and so on.
Russ: With theories of reincarnation of one variation or another they seem to attach what I would have called a sense of ego to…
Danah: No, Russ. The ego self is just the particle aspect of the self. The wave aspect is the so called capital S self and it has no ego. It’s pure potentiality.
Russ: Excellent.
Danah: I only have an ego when I am here [at the peak-Ed.], incarnate in space and time and have projects, personal relationships, hang-ups and all the rest that constitutes me. The wave function doesn't have an ego. That's why people are going to be disappointed by my model. Roger Penrose and Stuart Hameroff have come up with the same model recently and published it (so I am not the only nutcase out there thinking about it). Some quite big physicists are thinking about it. But people who want to live in the next life will be very disappointed by this theory. And as my seven year old grandson said while I tried to explain it to him, "But Nana, I want to come back as me. I want to be Kai again." Now that's the ego saying look, let me hang on here. I will go to a next life if you like, but let it be me. Well, you won't be.
Russ: Do you have any sense of when your new book will be published?
Danah: It has to be finished in January 2015, which means it will be probably published in the autumn of 2015.
Russ: We will be looking forward to that. I want to ask about a couple of specific things before talking with you about leadership. When I was looking through your work, I was looking for evidence of interest in adult developmental psychology. I have been fascinated by the work of Loevinger, Cook-Greuter, Torbert, Perry, Clare Graves, Don Beck, Michael Commons and other models of adult development.
Danah: I haven’t heard of anyone of those people. I can’t read them all, but tell me the most important one or two to read.
Russ: Okay. I will give you one author and one book. Robert Kegan, he is at Harvard. One of his books is In Over Our Heads. And then the other book is by Beck and Cowan and it’s called Spiral Dynamics.
Danah: I know that book very well.
Russ: That’s the work of Graves. People like Don Beck and others associated with his approach are accomplishing remarkable thins in organizations, communities and socieities, internationally. If you know the book very well, how do you see it fitting in with the approach you are taking?
Danah: I haven’t been able to make much sense of Spiral Dynamics, frankly. Some people who read my work think there is a crossover. Some Spiral Dynamics people even got in touch with me. I forget his name now, but there is somebody – it’s not Beck himself but one of Beck’s proteges who works in leadership. He wanted me to do joint programs with him for the Young Presidents Association, YPA, on combining quantum thinking and Spiral Dynamics.
I looked at it for a long time because it was a big contract and I just couldn’t see it myself, Russ. I just can’t get into Spiral Dynamics.
Russ: Do you see value in looking at adult developmental processes?
Danah: I see developmental process.
Russ: Right, but you don’t see patterns in them…
Danah: I can’t link it to all these colors of consciousness they’ve got.
Russ: The other area I noticed was that in your early work you make reference to some of the work of Ken Wilber, particularly his holographic model. I know you have read Sex, Ecology and Spirituality. I am wondering if you have kept up at all…
Danah: I haven’t read that one.
Russ: It’s referenced in one of your books.
Danah: I probably picked something out of it that my husband referenced and I quoted it, but I haven’t read the book.
Russ: Okay. I was just curious if you had followed up on the work of Ken Wilber at all, because it’s another effort at putting together a Meta model.
Danah: Wilber’s work for the most part derives from Aurobindo.
Russ: Yes, among others.
Danah: I prefer to get that through Aurobindo himself and do my own stuff with it, to what Wilber does with it. I think Wilber is a very interesting author, but his foundation is Aurobindo and I read Aurobindo directly. I like the Synthesis of Yoga very much, which is this big book on synchronizing models and levels of consciousness. But yes, I am very into that.
Russ: Okay. Then let’s talk about this whole concept of leadership, because I know that your work currently involves quite a bit of activity with business and government working with people around ideas of leadership.
Danah: Yes, most of my work is speaking engagements and consulting work is concerned with that.
Russ: I want to offer a working hypothesis, before we get into that. That working hypothesis is that one of the big problems with the whole field of leadership – leadership studies, leadership development – is language. The terms are conflated – leadership, leader, leading – in such a way that people can talk about anything and claim it’s any of the other things and we have no distinctions among them.
Danah: I am afraid that’s a cultural problem across the board. Language is becoming meaningless.
Russ: In my work I try and use them, make distinctions among the three saying that we cannot get into definition. We have to stay with distinctions unless we are dealing with a particular context. So, leadership for me is the context of the individual and the collective, the implicate and the explicate if you will. Leader is a role, which is a set of expectations held by people who step into that role, as well as what other stakeholders have for people who step into that role. The constellation of expectations will depend on the context, including the sets of stakeholders. Leading is what people do when they step into that role. It can vary considerably – be effective in one sense, in one context and not another and so on. So I just want you to know that that's a framework in my mind when I talk about these things.
Danah: I agree, but I think it’s fair. I mean I haven’t used this term before, but what you say makes me think of the clear implication on my work about SQ and quantum leadership, The notion that you are calling the context could equally be described as a field of meaning. This is to say that the people working together in an organization and indeed the organization as a whole at some levels, share a field of meaning. Then that would be coming at the importance of spiritual intelligence or meaning intelligence.
The field of meaning can be dark and disturbing and chaotic; it can also be positive and inspiring and motivational. My work points very similarly to what you are saying, in the sense that the leader is, if you like, that Bohmian peak of the field of meaning that the whole organization is caught up in. So his or her own authenticity and sense of meaning and so on affect the whole organization.
That’s why when you get a very charismatic leader like Walsh or Jobs their personality just completely suffuses the organization. I think this happens less dramatically with less dramatically charismatic leaders in all organizations. You get a bad CEO and an organization can just fall apart, and that isn’t just because finances are going bad or things are piling up or something. It’s not just practical inefficiency. I think the practical problems in these organizations emerge from the fact that the whole field of meaning of the organization is somehow askew.
That’s a new thought, thanks to you. That’s why dialogue is so creative because friends feed creatively off each other.
Russ: When we are talking about managing of organization we are talking about dealing in those elements you were just referring to, but when we are talking about the leadership of an organization I would guess we are often seeing leadership show up in multiple peaks of the wave, if you will. There are multiple waves. So in any complex system you are going to have over time individual multiple occurrences of leading happen.
When we focus on developing leadership, we are talking about both the individual and the collective. When we are talking about developing individuals to step into leader roles, we are talking about developing individuals whether or not they step into leader roles. We cannot “train” a leader; we can only develop individuals.
Danah: I am completely with you and I completely agree. I say the same thing in different terms, Russ. You are making me see better ways to articulate my work, because I run this big annual SQ and Quantum Leadership course with 20 people every year and it's more about self development. It's not really a training course at all. It's about self development, because my basic feeling is that leadership is a deeply personal issue. If you haven't done the work on yourself, you can't possibly do it for your organization or your nation or whatever. For me leadership development is really getting these people to be more in touch with their own motivations, feelings, positive and negative qualities. I take them into some pretty deep raw stuff.
Russ: May I ask about that for a moment, because much of what passes as leadership development it seems to me is trying to train people into particular skill sets that they are suggesting are universally valid and useful. And yet it sounds like what you are doing and what the implication of the individual development piece is, is that you are introducing people to practices that they have to integrate into their lives as they evolve and develop. Is that accurate?
Danah: Absolutely. For instance I teach them all to meditate. Every morning of the course begins with mindfulness meditation. Most of the people, because they come from the business world, have never meditated before and they all will say what a powerful thing that has given them. Now meditation isn't just some silly practice; it's a process of self reflection and it helps to nurture a kind of reflective thinking in people. So when people come to my course, because I have mistakenly called it a training course, they expect the first thing you said – that I am going to give them some skills. They can learn to practice them; they can go away and teach them.
Now and then somebody is very disappointed with the course, because it doesn’t do that. It more puts them through an experience that I hope gives them inner skills, not just to transform themselves during five quick days, because you can’t change your life in five days, but skills of reflection, honesty, self awareness and so on that they have to hold and develop each day as they go on.
People have written back to me and said how powerfully this has worked in their lives and leadership, their family lives and their leadership roles, and where these skills have changed everything. The course has destroyed one marriage and saved another marriage, because the men who came on the course were so changed by it that in the one case the wife was delighted by the change, and in the other case the wife couldn’t live with the change.
People undergo some pretty deep changes. I tell them in the course that it was fun. But this is the beginning of a whole new style of living and cultivating it into your life, which makes you much better in relationships, empathy, teamwork, all these things that leaders need.
But you can’t teach that as skills. It’s deep inner development and it has to come with a sense of deep inner commitment.
Russ: You mentioned mindful meditation, could you share any other examples of practices that you teach or that you present?
Danah: I teach one that is my own. It may have been developed by others, but it's original to me, so I will say I developed it. I call it reflective meditation or reflective practice. I sit quietly at the end of the day; I have the fantastic option to sit for two or three hours at the end of the day, because of my lifestyle. I live alone and I am right at home and I don't have to get to work at 7:00 in the morning.
I just sit quietly for at least half an hour. First of all just sit and feel in your body where you are a bit tense or ill at ease or something. Focus on that and think, well, what happened in my day that’s made me feel a bit seized in the stomach or has made my chest a bit tight or given me a headache? Or positively, what in my day has absolutely thrilled me or surprised me?
Then you begin a process of why: why did that upset me? Then you get an answer to that and go one layer deeper. Well, okay, that upset me because it reminded me of the way my mother used to tell me off when she thought I was being a naughty child. Okay, well why did that upset me? What was I feeling? Go deeper and deeper and deeper with these whys, each time you get an answer. Say, "Okay, I get that, but why?"
If you are able to sit there long enough you do find that at some point you get to the bottom of the situation and you just go, “Sigh.” I personally can’t function without doing this every night at least for an hour.
When I have houseguests I make them go to bed, or up to the guestroom, an hour before me. My husband always went to bed at midnight, so midnight was my magic hour. That was my time to go into this questioning state. It digests your day; it clears up your day and clears up the issues that every one of us during the day is bothered by or surprised by or affected by.
Russ: Danah, thank you so much for your gifts. I am wondering if there is anything I haven’t asked about that you wish I had?
Danah: You have been pretty comprehensive. A good interview, I have enjoyed doing it, Russ. It’s given me some fresh ways to think of my own ideas.
Russ: Wonderful. I am glad.
Danah: It’s wonderful to be interviewed by somebody who actually is coming from somewhere, is familiar with my work and has his own point of view, because then it’s a creative dialogue and that’s always interesting.
Russ: Thank you.
Ground state
1D ground state has no nodes
In 1D the ground state of the Schrödinger equation has no nodes. This can be proved by considering the average energy in a state with a node at x=0, i.e. \psi(0)=0:

\left\langle\psi|H|\psi\right\rangle=\int dx\;\left(-\frac{\hbar^2}{2m}\psi^*\frac{d^2\psi}{dx^2}+V(x)|\psi(x)|^2\right),

where V(x) is the potential. Now consider a small interval around the node, x\in[-\epsilon,\epsilon], where we may assume \psi(x)\approx -cx. Define a new, nodeless trial function \psi'(x) which equals |\psi(x)| outside the interval and is constant inside it:

\psi'(x)=N\left\{\begin{array}{ll} |\psi(x)| & |x|>\epsilon\\ |c|\epsilon & |x|\le\epsilon \end{array}\right.

If \epsilon is small enough, \psi'(x) is continuous. The constant piece contributes 2|c|^2\epsilon^3 to the norm where the original piece contributed 2|c|^2\epsilon^3/3, so the normalization factor is N=\left(1+\frac{4}{3}|c|^2\epsilon^3\right)^{-1/2}. The kinetic energy strictly decreases: outside the interval the kinetic energy density is |d\psi'/dx|^2=N^2|d\psi/dx|^2\le|d\psi/dx|^2 because N\le 1, while inside the interval the derivative vanishes, removing a kinetic contribution \frac{\hbar^2}{m}|c|^2\epsilon of order \epsilon. Now consider the potential energy; for definiteness let us choose V(x)\ge 0. Outside the interval the potential energy density is smaller for \psi' because |\psi'|\le|\psi| there. Inside the interval,

{V^\epsilon_{avg}}'=\int_{-\epsilon}^{\epsilon}dx\;V(x)|\psi'(x)|^2\approx 2|c|^2\epsilon^3\,V(0)+\dots,

while in the original state

V^\epsilon_{avg}=\int_{-\epsilon}^{\epsilon}dx\;V(x)\,|cx|^2\approx \frac{2}{3}|c|^2\epsilon^3\,V(0)+\dots,

where \dots indicate higher-order corrections. Both are of order \epsilon^3, so the change in potential energy is negligible against the kinetic energy gained, which is of order \epsilon. Hence \left\langle\psi'|H|\psi'\right\rangle<\left\langle\psi|H|\psi\right\rangle for small \epsilon, and a state with a node cannot be the ground state. Repeating the construction at every node removes them all while lowering the energy, so the ground state has no nodes. This completes the proof.
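This conclusion is easy to verify numerically. The following is a minimal Python sketch of the construction above (my own illustration, using hbar = m = 1 and an infinite square well, whose first excited state has a single node at the center):

import numpy as np

# Units: hbar = m = 1; infinite square well on [0, L] with V = 0 inside,
# so <H> is purely kinetic. Grid size, L and eps are arbitrary choices.
L, n = 1.0, 2000
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

def energy(psi):
    """<psi|H|psi> for normalized psi, with H = -(1/2) d^2/dx^2."""
    psi = psi / np.sqrt(np.trapz(psi**2, x))
    dpsi = np.gradient(psi, dx)
    return np.trapz(0.5 * dpsi**2, x)

# First excited state: one node at x = L/2.
psi = np.sin(2 * np.pi * x / L)

# Nodeless trial state: |psi| outside [L/2 - eps, L/2 + eps],
# constant inside, exactly as in the construction above.
eps = 0.05
trial = np.abs(psi).copy()
inside = np.abs(x - L / 2) < eps
trial[inside] = abs(np.sin(2 * np.pi * (L / 2 - eps) / L))

print(energy(psi))    # ~19.74, i.e. 2*pi^2, the exact excited-state energy
print(energy(trial))  # noticeably lower: a state with a node cannot be the ground state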
• The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by \frac{h^2 n^2}{8 m L^2}, where h is the Planck constant, m is the mass of the particle, n is the quantum number (n = 1 corresponds to the ground state), and L is the width of the well.
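As a quick numerical check of this formula (a Python sketch; the electron and the 1 nm well width are my illustrative choices, not from the text):

# E_n = h^2 n^2 / (8 m L^2) for the ground state (n = 1) of an electron
# in a 1 nm wide one-dimensional well (illustrative values).
h = 6.626e-34       # Planck constant, J s
m = 9.109e-31       # electron mass, kg
L = 1e-9            # well width, m
E1 = h**2 / (8 * m * L**2)
print(E1)               # ~6.0e-20 J
print(E1 / 1.602e-19)   # ~0.38 eV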
1. ^ "Unit of time (second)". SI Brochure. |
Tuesday, 18 August 2009
Edited By:
Arip Nurahman
Department of Physics,
Faculty of Sciences and Mathematics
Indonesia University of Education
Follower of open courses at MIT and Harvard University, Cambridge, USA.
Moon
A moon just past full, as seen from Earth's northern hemisphere
Adjective: lunar
Orbital characteristics
Perigee: 362,570 km (0.0024 AU); range 356,400-370,400 km
Apogee: 405,410 km (0.0027 AU); range 404,000-406,700 km
Semi-major axis: 384,399 km (0.00257 AU)[1]
Eccentricity: 0.0549[1]
Average orbital speed: 1.022 km/s
Inclination: 5.145° to the ecliptic (between 18.29° and 28.58° to Earth's equator)[1]
Longitude of ascending node: regressing by one revolution in 18.6 years
Argument of perigee: progressing by one revolution in 8.85 years
Satellite of: Earth
Physical characteristics
Mean radius: 1,737.10 km (0.273 Earths)[1][2]
Equatorial radius: 1,738.14 km (0.273 Earths)[2]
Polar radius: 1,735.97 km (0.273 Earths)[2]
Flattening: 0.00125
Circumference: 10,921 km (equatorial)
Surface area: 3.793 x 10^7 km^2 (0.074 Earths)
Volume: 2.1958 x 10^10 km^3 (0.020 Earths)
Mass: 7.3477 x 10^22 kg (0.0123 Earths)[1]
Mean density: 3.3464 g/cm^3[1]
Equatorial surface gravity: 1.622 m/s^2 (0.1654 g)
Escape velocity: 2.38 km/s
Sidereal rotation period: 27.321582 d (synchronous)
Equatorial rotation velocity: 4.627 m/s
Axial tilt: 1.5424° (to ecliptic); 6.687° (to orbit plane)
Albedo: 0.136[3]
Surface temperature (min / mean / max): 100 K / 220 K / 390 K; 70 K / 130 K / 230 K
Apparent magnitude: −2.5 to −12.9[nb 1]; −12.74 (mean full Moon)[2]
Angular diameter: 29.3 to 34.1 arcminutes[2][nb 2]
Atmosphere[5][nb 3]
Surface pressure: 10^-7 Pa (day); 10^-10 Pa (night)
Composition: Ar, He, Na, K, H, Rn
The Moon is Earth's only known natural satellite,[nb 4][6] and the fifth largest satellite in the Solar System. It is the largest natural satellite of a planet in the Solar System relative to the size of its primary, having a quarter the diameter of Earth and 1/81 its mass.[nb 5] The Moon is the second densest satellite after Io, a satellite of Jupiter. It is in synchronous rotation with Earth, always showing the same face; the near side is marked with dark volcanic maria among the bright ancient crustal highlands and prominent impact craters. It is the brightest object in the sky after the Sun, although its surface is actually very dark, with a reflectance similar to that of coal. Its prominence in the sky and its regular cycle of phases have, since ancient times, made the Moon an important cultural influence on language, calendars, art and mythology. The Moon's gravitational influence produces the ocean tides and the minute lengthening of the day. The Moon's current orbital distance, about thirty times the diameter of the Earth, causes it to appear almost the same size in the sky as the Sun, allowing it to cover the Sun nearly precisely in total solar eclipses.
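A quick back-of-the-envelope check of that last claim (a Python sketch using rounded mean diameters and distances; the numbers are standard values, not taken from this article):

import math

# Apparent angular size = 2 * atan(radius / distance).
bodies = [
    ("Moon", 3_474, 384_400),           # diameter km, distance km
    ("Sun", 1_391_000, 149_600_000),
]
for name, diam, dist in bodies:
    deg = math.degrees(2 * math.atan((diam / 2) / dist))
    print(f"{name}: {deg:.2f} degrees across")
# Both come out near half a degree, which is why the moon can cover
# the sun almost exactly during a total solar eclipse.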
The moon's surface shows striking contrasts of light and dark. The light areas are rugged highlands. The dark zones were partly flooded by lava when volcanoes erupted billions of years ago. The lava froze to form smooth rock. Image credit: Lunar and Planetary Institute
The distance to the moon is measured to an accuracy of 5 centimeters by a laser beam sent from Earth. The beam bounces off a laser reflector placed on the moon by astronauts, and returns to Earth. Image credit: World Book diagram by Bensen Studios
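The arithmetic behind the measurement is simple: distance = speed of light x round-trip time / 2. A Python sketch (the 2.56-second round trip is an illustrative mean value, not a figure from the article):

c = 299_792_458                    # speed of light, m/s
round_trip = 2.56                  # seconds, illustrative mean value
print(c * round_trip / 2 / 1000)   # ~383,700 km

# A 5 cm range accuracy corresponds to timing the pulse to ~0.3 ns:
print(2 * 0.05 / c)                # ~3.3e-10 s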
In 1959, scientists began to explore the moon with robot spacecraft. In that year, the Soviet Union sent a spacecraft called Luna 3 around the side of the moon that faces away from Earth. Luna 3 took the first photographs of that side of the moon. The word luna is Latin for moon.
The first people on the moon were U.S. astronauts Neil A. Armstrong, who took this picture, and Buzz Aldrin, who is pictured next to a seismograph. A television camera and a United States flag are in the background. Their lunar module, Eagle, stands at the right. Image credit: NASA
On July 20, 1969, the U.S. Apollo 11 lunar module landed on the moon in the first of six Apollo landings. Astronaut Neil A. Armstrong became the first human being to set foot on the moon.
In the 1990's, two U.S. robot space probes, Clementine and Lunar Prospector, detected evidence of frozen water at both of the moon's poles. The ice came from comets that hit the moon over the last 2 billion to 3 billion years. The ice apparently has lasted in areas that are always in the shadows of crater rims. Because the ice is in the shade, where the temperature is about -400 degrees F (-240 degrees C), it has not melted and evaporated.
This article discusses the movements of the moon, the origin and evolution of the moon, the exosphere of the moon, the surface features of the moon, the interior of the moon, and the history of moon study.
The movements of the moon
The moon moves in a variety of ways. For example, it rotates on its axis, an imaginary line that connects its poles. The moon also orbits Earth. Different amounts of the moon's lighted side become visible in phases because of the moon's orbit around Earth. During events called eclipses, the moon is positioned in line with Earth and the sun. A slight motion called libration enables us to see about 59 percent of the moon's surface at different times.
Rotation and orbit
The moon rotates on its axis once every 27 1/3 days with respect to the stars. As seen from the lunar surface, however, the period from one sunrise to the next is about 29 1/2 days, and that span is known as a lunar day. By contrast, Earth takes only 24 hours for one rotation.
The moon's axis of rotation, like that of Earth, is tilted. Astronomers measure axial tilt relative to a line perpendicular to the ecliptic plane, an imaginary surface through Earth's orbit around the sun. The tilt of Earth's axis is about 23.5 degrees from the perpendicular and accounts for the seasons on Earth. But the tilt of the moon's axis is only about 1.5 degrees, so the moon has no seasons.
Another result of the smallness of the moon's tilt is that certain large peaks near the poles are always in sunlight. In addition, the floors of some craters -- particularly near the south pole -- are always in shadow.
The moon completes one orbit of Earth with respect to the stars about every 27 1/3 days, a period known as a sidereal month. But the moon revolves around Earth once with respect to the sun in about 29 1/2 days, a period known as a synodic month. A sidereal month is slightly shorter than a synodic month because, as the moon revolves around Earth, Earth is revolving around the sun. The moon needs some extra time to "catch up" with Earth. If the moon started on its orbit from a spot between Earth and the sun, it would return to almost the same place in about 29 1/2 days.
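The "catch-up" argument can be written as a rate equation: the moon's motion relative to the sun is its sidereal rate minus Earth's orbital rate, so 1/T_synodic = 1/T_sidereal - 1/T_year. A short Python check (the 365.25-day year is my assumption; the article does not state it):

T_sid = 27.321661    # sidereal month, days
T_year = 365.25      # days
T_syn = 1 / (1 / T_sid - 1 / T_year)
print(T_syn)              # ~29.53 days, the synodic month

# The same lag is why the moon rises later each day (the "about
# 50 minutes" figure mentioned below):
print(24 * 60 / T_syn)    # ~48.8 minutes per day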
A synodic month equals a lunar day. As a result, the moon shows the same hemisphere -- the near side -- to Earth at all times. The other hemisphere -- the far side -- is always turned away from Earth.
People sometimes mistakenly use the term dark side to refer to the far side. The moon does have a dark side -- it is the hemisphere that is turned away from the sun. The location of the dark side changes constantly, moving with the terminator, the dividing line between sunlight and dark.
The lunar orbit, like the orbit of Earth, is elliptical (oval-shaped), a slightly flattened circle. The distance between the center of Earth and the moon's center therefore varies throughout each orbit. At perigee (PEHR uh jee), when the moon is closest to Earth, that distance is 225,740 miles (363,300 kilometers). At apogee (AP uh jee), the farthest position, the distance is 251,970 miles (405,500 kilometers).
As the moon orbits Earth, an observer on Earth can see the moon appear to change shape. It seems to change from a crescent to a circle and back again. The shape looks different from one day to the next because the observer sees different parts of the moon's sunlit surface as the moon orbits Earth. The different appearances are known as the phases of the moon. The moon goes through a complete cycle of phases in a synodic month.
The moon has four phases: (1) new moon, (2) first quarter, (3) full moon, and (4) last quarter. When the moon is between the sun and Earth, its sunlit side is turned away from Earth. Astronomers call this darkened phase a new moon.
The next night after a new moon, a thin crescent of light appears along the moon's eastern edge. The remaining portion of the moon that faces Earth is faintly visible because of earthshine, sunlight reflected from Earth to the moon. Each night, an observer on Earth can see more of the sunlit side as the terminator, the line between sunlight and dark, moves westward. After about seven days, the observer can see half a full moon, commonly called a half moon. This phase is known as the first quarter because it occurs one quarter of the way through the synodic month. About seven days later, the moon is on the side of Earth opposite the sun. The entire sunlit side of the moon is now visible. This phase is called a full moon.
About seven days after a full moon, the observer again sees a half moon. This phase is the last quarter, or third quarter. After another seven days, the moon is between Earth and the sun, and another new moon occurs.
As the moon changes from new moon to full moon, and more and more of it becomes visible, it is said to be waxing. As it changes from full moon to new moon, and less and less of it can be seen, it is waning. When the moon appears smaller than a half moon, it is called crescent. When it looks larger than a half moon, but is not yet a full moon, it is called gibbous (GIHB uhs).
Like the sun, the moon rises in the east and sets in the west. As the moon progresses through its phases, it rises and sets at different times. In the new moon phase, it rises with the sun and travels close to the sun across the sky. Each successive day, the moon rises an average of about 50 minutes later.
Eclipses occur when Earth, the sun, and the moon are in a straight line, or nearly so. A lunar eclipse occurs when Earth gets directly -- or almost directly -- between the sun and the moon, and Earth's shadow falls on the moon. A lunar eclipse can occur only during a full moon. A solar eclipse occurs when the moon gets directly -- or almost directly -- between the sun and Earth, and the moon's shadow falls on Earth. A solar eclipse can occur only during a new moon.
During one part of each lunar orbit, Earth is between the sun and the moon; and, during another part of the orbit, the moon is between the sun and Earth. But in most cases, the astronomical bodies are not aligned directly enough to cause an eclipse. Instead, Earth casts its shadow into space above or below the moon, or the moon casts its shadow into space above or below Earth. The shadows extend into space in that way because the moon's orbit is tilted about 5 degrees relative to Earth's orbit around the sun.
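The geometry can be made concrete with rounded numbers (a Python sketch; the values are standard ones, not from the article):

import math

# Why no eclipse at most full moons: the moon can pass well above or
# below Earth's shadow cone (the umbra).
d_moon, d_sun = 384_400, 149_600_000    # km
r_earth, r_sun = 6_371, 696_000         # km

umbra_len = d_sun * r_earth / (r_sun - r_earth)   # length of the shadow cone
umbra_r = r_earth * (1 - d_moon / umbra_len)      # shadow radius at the moon
offset = d_moon * math.sin(math.radians(5.145))   # max height above the ecliptic

print(round(umbra_r), "km shadow radius vs", round(offset), "km possible offset")
# ~4,600 km vs ~34,500 km: the alignment must be nearly exact for an eclipse.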
People on Earth can sometimes see a small part of the far side of the moon. That part is visible because of lunar libration, a slight apparent rotation of the moon as viewed from Earth. There are three kinds of libration: (1) libration in longitude, (2) diurnal (daily) libration, and (3) libration in latitude. Because of libration, viewers on Earth can, over time, see about 59 percent of the moon's surface.
Libration in longitude occurs because the moon's orbit is elliptical. As the moon orbits Earth, its speed varies according to a law discovered in the 1600's by the German astronomer Johannes Kepler. When the moon is relatively close to Earth, the moon travels more rapidly than its average speed. When the moon is relatively far from Earth, the moon travels more slowly than average. But the moon always rotates about its own axis at the same rate. So when the moon is traveling more rapidly than average, its rotation is too slow to keep all of the near side facing Earth. And when the moon is traveling more slowly than average, its rotation is too rapid to keep all of the near side facing Earth.
Diurnal libration enables an observer on Earth to see around one edge of the moon, then the other, during a single night. It is caused by a daily change in the position of the observer relative to the moon. Consider an observer at Earth's equator when the moon is full. As Earth rotates from west to east, the observer first sees the moon when it rises at the eastern horizon and last sees it when it sets at the western horizon. During this time, the observer's viewpoint moves about 7,900 miles (12,700 kilometers) -- the diameter of Earth -- relative to the moon. As a result, the moon appears to rotate slightly to the west.
While the moon is rising in the east and climbing to its highest point in the sky, the observer can see around the western edge of the near side. As the moon descends to the western horizon, the observer can see around the eastern edge of the near side.
Libration in latitude occurs because the moon's axis of rotation is tilted about 6 1/2 degrees relative to a line perpendicular to the moon's orbit around Earth. Thus, during each lunar orbit, the moon's north pole tilts first toward Earth, then away from Earth. When the lunar north pole is tilted toward Earth, people on Earth can see farther than normal along the top of the moon. When that pole is tilted away from Earth, people on Earth can see farther than normal along the bottom of the moon.
Origin and evolution of the moon
Scientists believe that the moon formed as a result of a collision known as the Giant Impact or the "Big Whack." According to this idea, Earth collided with a planet-sized object 4.6 billion years ago. As a result of the impact, a cloud of vaporized rock shot off Earth's surface and went into orbit around Earth. The cloud cooled and condensed into a ring of small, solid bodies, which then gathered together, forming the moon.
The rapid joining together of the small bodies released much energy as heat. Consequently, the moon melted, creating an "ocean" of magma (melted rock).
The magma ocean slowly cooled and solidified. As it cooled, dense, iron-rich materials sank deep into the moon. Those materials also cooled and solidified, forming the mantle, the layer of rock beneath the crust.
As the crust formed, asteroids bombarded it heavily, shattering and churning it. The largest impacts may have stripped off the entire crust. Some collisions were so powerful that they almost split the moon into pieces. One such collision created the South Pole-Aitken Basin, one of the largest known impact craters in the solar system.
A basalt rock that astronauts brought to Earth from the moon formed from lava that erupted from a lunar volcano. Escaping gases created the holes before the lava solidified into rock. Image credit: Lunar and Planetary Institute
About 4 billion to 3 billion years ago, melting occurred in the mantle, probably caused by radioactive elements deep in the moon's interior. The resulting magma erupted as dark, iron-rich lava, partly flooding the heavily cratered surface. The lava cooled and solidified into rocks known as basalts (buh SAWLTS).
Small eruptions may have continued until as recently as 1 billion years ago. Since that time, only an occasional impact by an asteroid or comet has modified the surface. Because the moon has no atmosphere to burn up meteoroids, the bombardment continues to this day. However, it has become much less intense.
Impacts of large objects can create craters. Impacts of micrometeoroids (tiny meteoroids) grind the surface rocks into a fine, dusty powder known as the regolith (REHG uh lihth). Regolith overlies all the bedrock on the moon. Because regolith forms as a result of exposure to space, the longer a rock is exposed, the thicker the regolith that forms on it.
The exosphere of the moon
The lunar exosphere -- that is, the materials surrounding the moon that make up the lunar "atmosphere" -- consists mainly of gases that arrive as the solar wind. The solar wind is a continuous flow of gases from the sun -- mostly hydrogen and helium, along with some neon and argon.
The remainder of the gases in the exosphere form on the moon. A continual "rain" of micrometeoroids heats lunar rocks, melting and vaporizing their surface. The most common atoms in the vapor are atoms of sodium and potassium. Those elements are present in tiny amounts -- only a few hundred atoms of each per cubic centimeter of exosphere. In addition to vapors produced by impacts, the moon also releases some gases from its interior.
Most gases of the exosphere concentrate about halfway between the equator and the poles, and they are most plentiful just before sunrise. The solar wind continuously sweeps vapor into space, but the vapor is continuously replaced.
During the night, the pressure of gases at the lunar surface is about 3.9 x 10^-14 pound per square inch (2.7 x 10^-10 pascal). That is a stronger vacuum than laboratories on Earth can usually achieve. The exosphere is so tenuous -- that is, so low in density -- that the rocket exhaust released during each Apollo landing temporarily doubled the total mass of the entire exosphere.
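A one-line check of those pressure figures (the psi-to-pascal factor is the standard one, not given in the article):

print(2.7e-10 / 6894.757)    # 1 psi = 6894.757 Pa, giving ~3.9e-14 psi as quoted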
Surface features of the moon
The surface of the moon is covered with bowl-shaped holes called craters, shallow depressions called basins, and broad, flat plains known as maria. A powdery dust called the regolith overlies much of the surface of the moon.
Euler Crater has central peaks and slumped walls. The peaks almost certainly formed quickly after the impact that produced the crater compressed the ground. The ground rebounded upward, forming the peaks. The crater walls are slumped because the original walls were too steep to withstand the force of gravity. Material fell inward, away from the walls. This crater, in Mare Imbrium (Sea of Rains), is about 17 1/2 miles (28 kilometers) across. Image credit: Lunar and Planetary Institute
The vast majority of the moon's craters are formed by the impact of meteoroids, asteroids, and comets. Craters on the moon are named for famous scientists. For example, Copernicus Crater is named for Nicolaus Copernicus, a Polish astronomer who realized in the 1500's that the planets move about the sun. Archimedes Crater is named for the Greek mathematician Archimedes, who made many mathematical discoveries in the 200's B.C.
The shape of craters varies with their size. Small craters with diameters of less than 6 miles (10 kilometers) have relatively simple bowl shapes. Slightly larger craters cannot maintain a bowl shape because the crater wall is too steep. Material falls inward from the wall to the floor. As a result, the walls become scalloped and the floor becomes flat.
Still larger craters have terraced walls and central peaks. Terraces inside the rim descend like stairsteps to the floor. The same process that creates wall scalloping is responsible for terraces. The central peaks almost certainly form as did the central peaks of impact craters on Earth. Studies of the peaks on Earth show that they result from a deformation of the ground. The impact compresses the ground, which then rebounds, creating the peaks. Material in the central peaks of lunar craters may come from depths as great as 12 miles (19 kilometers).
Surrounding the craters is rough, mountainous material -- crushed and broken rocks that were ripped out of the crater cavity by shock pressure. This material, called the crater ejecta blanket, can extend about 60 miles (100 kilometers) from the crater.
Farther out are patches of debris and, in many cases, irregular secondary craters, also known as secondaries. Those craters come in a range of shapes and sizes, and they are often clustered in groups or aligned in rows. Secondaries form when material thrown out of the primary (original) crater strikes the surface. This material consists of large blocks, clumps of loosely joined rocks, and fine sprays of ground-up rock. The material may travel thousands of miles or kilometers.
Crater rays are light, wispy deposits of powder that can extend thousands of miles or kilometers from the crater. Rays slowly vanish as micrometeoroid bombardment mixes the powder into the upper surface layer. Thus, craters that still have visible rays must be among the youngest craters on the moon.
Craters larger than about 120 miles (200 kilometers) across tend to have central mountains. Some of them also have inner rings of peaks, in addition to the central peak. The appearance of a ring signals the next major transition in crater shape -- from crater to basin.
Basins are craters that are 190 miles (300 kilometers) or more across. The smaller basins have only a single inner ring of peaks, but the larger ones typically have multiple rings. The rings are concentric -- that is, they all have the same center, like the rings of a dartboard. The spectacular, multiple-ringed basin called the Eastern Sea (Mare Orientale) is almost 600 miles (1,000 kilometers) across. Other basins can be more than 1,200 miles (2,000 kilometers) in diameter -- as large as the entire western United States.
Basins occur equally on the near side and far side. Most basins have little or no fill of basalt, particularly those on the far side. The difference in filling may be related to variations in the thickness of the crust. The far side has a thicker crust, so it is more difficult for molten rock to reach the surface there.
In the highlands, the overlying ejecta blankets of the basins make up most of the upper few miles or kilometers of material. Much of this material is a large, thick layer of shattered and crushed rock known as breccia (BREHCH ee uh). Scientists can learn about the original crust by studying tiny fragments of breccia.
Maria, the dark areas on the surface of the moon, make up about 16 percent of the surface area. Some maria are named in Latin for weather terms -- for example, Mare Imbrium (Sea of Rains) and Mare Nubium (Sea of Clouds). Others are named for states of mind, as in Mare Serenitatis (Sea of Serenity) and Mare Tranquillitatis (Sea of Tranquility).
Landforms on the maria tend to be smaller than those of the highlands. The small size of mare features relates to the scale of the processes that formed them -- volcanic eruptions and crustal deformation, rather than large impacts. The chief landforms on the maria include wrinkle ridges and rilles and other volcanic features.
Wrinkle ridges are blisterlike humps that wind across the surface of almost all maria. The ridges are actually broad folds in the rocks, created by compression. Many wrinkle ridges are roughly circular, aligned with small peaks that stick up through the maria and outlining interior rings. Circular ridge systems also outline buried features, such as rims of craters that existed before the maria formed.
A lunar rover is parked near the edge of Hadley Rille, a long channel probably formed by lava 4 billion to 3 billion years ago. The slopes in the background are part of a formation called the Swann Hills. This photo was taken during the Apollo 15 mission in 1971. Astronaut David R. Scott is reaching under a seat to get a camera. Image credit: NASA
Rilles are snakelike depressions that wind across many areas of the maria. Scientists formerly thought the rilles might be ancient riverbeds. However, they now suspect that the rilles are channels formed by running lava. One piece of evidence favoring this view is the dryness of rock samples brought to Earth by Apollo astronauts; the samples have almost no water in their molecular structure. In addition, detailed photographs show that the rilles are shaped somewhat like channels created by flowing lava on Earth.
Volcanic features
Scattered throughout the maria are a variety of other features formed by volcanic eruptions. Within Mare Imbrium, scarps (lines of cliffs) wind their way across the surface. The scarps are lava flow fronts, places where lava solidified, enabling lava that was still molten to pile up behind them. The presence of the scarps is one piece of evidence indicating that the maria consist of solidified basaltic lava.
Small hills and domes with pits on top are probably little volcanoes. Both dome-shaped and cone-shaped volcanoes cluster together in many places, as on Earth. One of the largest concentrations of cones on the moon is the Marius Hills complex in Oceanus Procellarum (Ocean of Storms). Within this complex are numerous wrinkle ridges and rilles, and more than 50 volcanoes.
Large areas of maria and terrae are covered by dark material known as dark mantle deposits. Evidence collected by the Apollo missions confirmed that dark mantling is volcanic ash.
Much smaller dark mantles are associated with small craters that lie on the fractured floors of large craters. Those mantles may be cinder cones -- low, broad, cone-shaped hills formed by explosive volcanic eruptions.
The interior of the moon
The moon, like Earth, has three interior zones -- crust, mantle, and core. However, the composition, structure, and origin of the zones on the moon are much different from those on Earth.
Most of what scientists know about the interior of Earth and the moon has been learned by studying seismic events -- earthquakes and moonquakes, respectively. The data on moonquakes come from scientific equipment set up by Apollo astronauts from 1969 to 1972.
The average thickness of the lunar crust is about 43 miles (70 kilometers), compared with about 6 miles (10 kilometers) for Earth's crust. The outermost part of the moon's crust is broken, fractured, and jumbled as a result of the large impacts it has endured. This shattered zone gives way to intact material below a depth of about 6 miles. The bottom of the crust is defined by an abrupt increase in rock density at a depth of about 37 miles (60 kilometers) on the near side and about 50 miles (80 kilometers) on the far side.
The mantle of the moon consists of dense rocks that are rich in iron and magnesium. The mantle formed during the period of global melting. Low-density minerals floated to the outer layers of the moon, while dense minerals sank deeper into it.
Later, the mantle partly melted due to a build-up of heat in the deep interior. The source of the heat was probably the decay (breakup) of uranium and other radioactive elements. This melting produced basaltic magmas -- bodies of molten rock. The magmas later made their way to the surface and erupted as the mare lavas and ashes. Although mare volcanism occurred for more than 1 billion years -- from at least 4 billion years ago to less than 3 billion years ago -- much less than 1 percent of the volume of the mantle ever remelted.
Data gathered by Lunar Prospector confirmed that the moon has a core and enabled scientists to estimate its size. The core has a radius of only about 250 miles (400 kilometers). By contrast, the radius of Earth's core is about 2,200 miles (3,500 kilometers).
The lunar core has less than 1 percent of the mass of the moon. Scientists suspect that the core consists mostly of iron, and it may also contain large amounts of sulfur and other elements.
Earth's core is made mostly of molten iron and nickel. This rapidly rotating molten core is responsible for Earth's magnetic field. A magnetic field is an influence that a magnetic object creates in the region around it. If the core of a planet or a satellite is molten, motion within the core caused by the rotation of the planet or satellite makes the core magnetic. But the small, partly molten core of the moon cannot generate a global magnetic field. However, small regions on the lunar surface are magnetic. Scientists are not sure how these regions acquired magnetism. Perhaps the moon once had a larger, more molten core.
There is evidence that the lunar interior formerly contained gas, and that some gas may still be there. Basalt from the moon contains holes called vesicles that are created during a volcanic eruption. On Earth, gas that is dissolved in magma comes out of solution during an eruption, much as carbon dioxide comes out of a carbonated beverage when you shake the drink container. The presence of vesicles in lunar basalt indicates that the deep interior contained gases, probably carbon monoxide or gaseous sulfur. The existence of volcanic ash is further evidence of interior gas; on Earth, volcanic eruptions are largely driven by gas.
History of moon study
Ancient ideas
Some ancient peoples believed that the moon was a rotating bowl of fire. Others thought it was a mirror that reflected Earth's lands and seas. But philosophers in ancient Greece understood that the moon is a sphere in orbit around Earth. They also knew that moonlight is reflected sunlight.
Some Greek philosophers believed that the moon was a world much like Earth. In about A.D. 100, Plutarch even suggested that people lived on the moon. The Greeks also apparently believed that the dark areas of the moon were seas, while the bright regions were land.
In about A.D. 150, Ptolemy, a Greek astronomer who lived in Alexandria, Egypt, said that the moon was Earth's nearest neighbor in space. He thought that both the moon and the sun orbited Earth. Ptolemy's views survived for more than 1,300 years. But by the early 1500's, the Polish astronomer Nicolaus Copernicus had developed the correct view -- Earth and the other planets revolve about the sun, and the moon orbits Earth.
Early observations with telescopes
The Italian astronomer and physicist Galileo wrote the first scientific description of the moon based on observations with a telescope. In 1609, Galileo described a rough, mountainous surface. This description was quite different from what was commonly believed -- that the moon was smooth. Galileo noted that the light regions were rough and hilly and the dark regions were smoother plains.
The presence of high mountains on the moon fascinated Galileo. His detailed description of a large crater in the central highlands -- probably Albategnius -- began 350 years of controversy and debate about the origin of the "holes" on the moon.
Other astronomers of the 1600's mapped and cataloged every surface feature they could see. Increasingly powerful telescopes led to more detailed records. In 1645, the Dutch engineer and astronomer Michael Florent van Langren, also known as Langrenus, published a map that gave names to the surface features of the moon, mostly its craters. A map drawn by the Bohemian-born Italian astronomer Anton M. S. de Rheita in 1645 correctly depicted the bright ray systems of the craters Tycho and Copernicus. Another effort, by the Polish astronomer Johannes Hevelius in 1647, included the moon's libration zones.
By 1651, two Jesuit scholars from Italy, the astronomer Giovanni Battista Riccioli and the mathematician and physicist Francesco M. Grimaldi, had completed a map of the moon. That map established the naming system for lunar features that is still in use.
Determining the origin of craters
Until the late 1800's, most astronomers thought that volcanism formed the craters of the moon. However, in the 1870's, the English astronomer Richard A. Proctor proposed correctly that the craters result from the collision of solid objects with the moon. But at first, few scientists accepted Proctor's proposal. Most astronomers thought that the moon's craters must be volcanic in origin because no one had yet described a crater on Earth as an impact crater, but scientists had found dozens of obviously volcanic craters.
In 1892, the American geologist Grove Karl Gilbert argued that most lunar craters were impact craters. He based his arguments on the large size of some of the craters. Those included the basins, which he was the first to recognize as huge craters. Gilbert also noted that lunar craters have only the most general resemblance to calderas (large volcanic craters) on Earth. Both lunar craters and calderas are large circular pits, but their structural details do not resemble each other in any way.
In addition, Gilbert created small craters experimentally. He studied what happened when he dropped clay balls and shot bullets into clay and sand targets.
Gilbert was the first to recognize that the circular Mare Imbrium was the site of a gigantic impact. By examining photographs, Gilbert also determined which nearby craters formed before and after that event. For example, a crater that is partially covered by ejecta from the Imbrium impact formed before the impact. A crater within the mare formed after the impact.
Describing lunar evolution
Gilbert suggested that scientists could determine the relative age of surface features by studying the ejecta of the Imbrium impact. That suggestion was the key to unraveling the history of the moon. Gilbert recognized that the moon is a complex body that was built up by innumerable impacts over a long period.
In his book The Face of the Moon (1949), the American astronomer and physicist Ralph B. Baldwin further described lunar evolution. He noted the similarity in form between craters on the moon and bomb craters created during World War II (1939-1945) and concluded that lunar craters form by impact.
Baldwin did not say that every lunar feature originated with an impact. He stated correctly that the maria are solidified flows of basalt lava, similar to flood lava plateaus on Earth. Finally, independently of Gilbert, he concluded that all circular maria are actually huge impact craters that later filled with lava.
In the 1950's, the American chemist Harold C. Urey offered a contrasting view of lunar history. Urey said that, because the moon appears to be cold and rigid, it has always been so. He then stated -- correctly -- that craters are of impact origin. However, he concluded falsely that the maria are blankets of debris scattered by the impacts that created the basins. And he was mistaken in concluding that the moon never melted to any significant extent. Urey had won the 1934 Nobel Prize in chemistry and had an outstanding scientific reputation, so many people took his views seriously. Urey strongly favored making the moon a focus of scientific study. Although some of his ideas were mistaken, his support of moon study was a major factor in making the moon an early goal of the U.S. space program.
In 1961, the U.S. geologist Eugene M. Shoemaker founded the Branch of Astrogeology of the U.S. Geological Survey (USGS). Astrogeology is the study of celestial objects other than Earth. Shoemaker showed that the moon's surface could be studied from a geological perspective by recognizing a sequence of relative ages of rock units near the crater Copernicus on the near side. Shoemaker also studied the Meteor Crater in Arizona and documented the impact origin of this feature. In preparation for the Apollo missions to the moon, the USGS began to map the geology of the moon using telescopes and pictures. This work gave scientists their basic understanding of lunar evolution.
Apollo missions
Beginning in 1959, the Soviet Union and the United States sent a series of robot spacecraft to examine the moon in detail. Their ultimate goal was to land people safely on the moon. The United States finally reached that goal in 1969 with the landing of the Apollo 11 lunar module. The United States conducted six more Apollo missions, including five landings. The last of those was Apollo 17, in December 1972.
The Apollo missions revolutionized the understanding of the moon. Much of the knowledge gained about the moon also applies to Earth and the other inner planets -- Mercury, Venus, and Mars. Scientists learned, for example, that impact is a fundamental geological process operating on the planets and their satellites.
After the Apollo missions, the Soviets sent four Luna robot craft to the moon. The last, Luna 24, returned samples of lunar soil to Earth in August 1976.
Recent exploration
The Clementine orbiter used radar signals to find evidence of a large deposit of frozen water on the moon. The orbiter sent radar signals to various target points on the lunar surface. The targets reflected some of the signals to Earth, where they were received by large antennas and analyzed. Image credit: Lunar and Planetary Institute
No more spacecraft went to the moon until January 1994, when the United States sent the orbiter Clementine. From February to May of that year, Clementine's four cameras took more than 2 million pictures of the moon. A laser device measured the height and depth of mountains, craters, and other features. Radar signals that Clementine bounced off the moon provided evidence of a large deposit of frozen water. The ice appeared to be inside craters at the south pole.
The U.S. probe Lunar Prospector orbited the moon from January 1998 to July 1999. The craft mapped the concentrations of chemical elements in the moon, surveyed the moon's magnetic fields, and found strong evidence of ice at both poles. Small particles of ice are apparently part of the regolith at the poles.
The SMART-1 spacecraft, launched by the European Space Agency in 2003, went into orbit around the moon in 2004. The craft's instruments were designed to investigate the moon's origin and conduct a detailed survey of the chemical elements on the lunar surface.
Contributor: Paul D. Spudis, Ph.D., Deputy Director and Staff Scientist, Lunar and Planetary Institute.
How to cite this article: World Book recommends the following format: Spudis, Paul D. "Moon." World Book Online Reference Center. 2004. World Book, Inc.
Monday, 17 August 2009
Star Explorers from Indonesia
Source: Pak Eko Laksono
(Imperium Indonesia)
In 1905, in his famous theory of Special Relativity, Albert Einstein stated that the speed of light (c) is always constant and, unlike anything else, is not relative. Einstein also stated that nothing in the universe can exceed the speed of light.
Not possible. Impossible.
This makes space travel rather inefficient. But in 1994 a scientist showed that exceeding the speed of light is theoretically possible. The way to do it is with a Warp Drive Engine, an engine capable of "curving space-time", a space-time bubble. That scientist's name is Miguel Alcubierre.
(The Alcubierre Drive: "The warp drive: hyper-fast travel within general relativity", 1994, in the journal Classical and Quantum Gravity)
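For reference, the line element Alcubierre proposed has the following form (quoted here in its standard presentation, so treat the conventions as my assumption): a "bubble" centered at x_s(t), moving with velocity v_s(t) = dx_s/dt, where f(r_s) is a smooth function equal to 1 inside the bubble and 0 far outside it:

ds^2=-c^2\,dt^2+\left[dx-v_s\,f(r_s)\,dt\right]^2+dy^2+dz^2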
If this ever becomes reality, journeys to the stars will be possible, and even the most distant and alien planets of the solar system could be reached in a matter of minutes.
And humanity's dream of thousands of years, to know what is out there in space, will come true. That dream began to grow popular around the world when a science-fiction epic first aired in 1966: the adventures of the starship of Star Trek. This is its story through the ages.
1. USS Enterprise NX-01
This is the early ship of the Star Trek prequel. Launched in April 2151, the NX-01 was Earth's first spacecraft able to exceed the speed of light, reaching Warp 5, so that humans could at last begin exploring the stars. Jupiter could be reached from Earth in only 10 minutes.
Warp speed = (warp factor)^3 x c (the speed of light):
Warp 1 = 1 x c
Warp 2 = 8 x c
Warp 3 = 27 x c
Warp 5 = 125 x c
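As code, the cube rule looks like this (a small Python sketch; the function name and the printed table are my own illustration, not the blog's):

C = 299_792_458  # speed of light, m/s

def warp_speed(factor: float) -> float:
    """Speed for a given warp factor, using the cube rule quoted above."""
    return factor ** 3 * C

for w in (1, 2, 3, 5):
    print(f"Warp {w}: {warp_speed(w) / C:.0f} x c")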
2. USS Enterprise NCC-1701
"to boldly go where no one has gone before."
This is where the legend of Star Trek begins: the Starfleet ship NCC-1701, led by its heroic commander, Captain James Tiberius Kirk. The Enterprise explored outer space, visiting planets and stars far from Earth in search of new knowledge and new challenges.
Launched in 2245 from the San Francisco orbital shipyard above Earth, the ship went on to encounter many civilizations beyond Earth, such as the Klingons and the Romulans, and met many of the oddities and wonders of life in outer space.
3. USS Enterprise NCC-1701-A
The NCC-1701-A became Captain Kirk's new ship in the Starfleet fleet after the previous one was destroyed in battle with the Klingons (in "Star Trek III: The Search for Spock"). It entered service in 2286.
4. USS Enterprise NCC-1701-B
Launched in 2293, commanded by Captain John Harriman (seen at the beginning of the film "Star Trek Generations").
5. USS Enterprise NCC-1701-C
In service from 2332, this ship played a major role in the peace between the Federation (Earth and Vulcan) and the Klingons. It was heroically destroyed defending a Klingon planet from attacking Romulan ships (TNG episode "Yesterday's Enterprise").
6. USS Enterprise NCC-1701-D
USS Enterprise D, Earth orbit
This is the legendary Star Trek ship commanded by Captain Jean-Luc Picard. It is nearly twice the size of Captain Kirk's Enterprise-A and has a maximum speed of Warp 9.6.
In its voyages across the far reaches of the universe, this Federation ship met far greater challenges. Besides the ever-growing power of the Romulans, it faced formidable new adversaries, such as the Borg civilization and an omnipotent being named "Q".
"Q" is a being in human form with seemingly almighty powers, who delights in testing the crew of the Enterprise, and especially Captain Picard. The Borg are half-biological, half-machine creatures with a single goal: to forcibly assimilate the strengths of every living being in the universe, making themselves, little by little, ever more Perfect.
By this time the Federation realized that the might of its starships meant nothing against the Borg's vastly more advanced technology, and that the Borg could assimilate all of Earth without meaningful resistance.
7. USS Enterprise NCC-1701-E
Under the threat of a Borg invasion of planet Earth, the Federation launched top-secret projects (Projects Sovereign, Defiant, and Prometheus) to accelerate the power and resilience of its ships. The NCC-1701-E, launched in 2373, became the first ship to carry twelve of the new Type-X Phaser Arrays, in addition to photon and quantum torpedoes.
(This ship can be seen in Star Trek: First Contact.)
It is the most advanced starship in the Starfleet fleet.
8. USS Enterprise NCC-1701-J
This ship exists far in the future of Star Trek. The giant vessel is more than 1.5 times the size of the Enterprise-E. Little information is available about its technology, but it is certainly far more advanced.
A diagram comparing the sizes of the Enterprise variants
Images from the newest Star Trek film, Star Trek XI. The voyage into space is about to begin again...
Star Trek, Wikipedia
Faster Than Light, Wikipedia
Miguel Alcubierre, Wikipedia
Warp Factor, Memory Alpha
Starship Enterprise, Utopia Planitia Yards
Friday, 14 August 2009
How Indonesian People Will Win the Nobel Prize in the Future
Center for Research and Development for Winning the
Nobel Prize in Physics in Indonesia
Nobel Fisika Indonesia (the Indonesian Physics Nobel)
"Ilmuwan hanya menetapkan dua hal, yaitu kebenaran dan ketulusan, mereka menetapkan atas dirinya dan atas para ilmuwan lain."
~Erwin S.~
"Tuhan Menggunakan Matematika yang Indah dalam Menciptakan Dunia"
~Paul A.M. Dirac~
The Nobel Prize in Physics 1933
"for the discovery of new productive forms of atomic theory"
Erwin Schrödinger: 1/2 of the prize. Austria. Berlin University, Berlin, Germany. Born 1887, died 1961.
Paul Adrien Maurice Dirac: 1/2 of the prize. United Kingdom. University of Cambridge, Cambridge, United Kingdom. Born 1902, died 1984.
Nobel Lecture by Erwin Schrödinger, December 12, 1933: The Fundamental Idea of Wave Mechanics
Nobel Lecture by Paul A. M. Dirac, December 12, 1933: Theory of Electrons and Positrons
Erwin Schrödinger
Born: Erwin Rudolf Josef Alexander Schrödinger, 12 August 1887, Erdberg, Vienna, Austria-Hungary
Died: 4 January 1961, Vienna, Austria
Citizenship: Austria, Germany, Ireland
Nationality: Austrian, later Irish
Fields: Physics
Institutions: University of Breslau; University of Zürich; Humboldt University of Berlin; University of Oxford; University of Graz; Dublin Institute for Advanced Studies; Ghent University
Alma mater: University of Vienna
Doctoral advisor: Friedrich Hasenöhrl
Other academic advisors: Franz S. Exner; Friedrich Hasenöhrl
Notable students: Linus Pauling; Felix Bloch; Brendan Scaife
Known for: Schrödinger equation; Schrödinger's cat; Schrödinger method; Schrödinger functional; Schrödinger picture; Schrödinger-Newton equations; Schrödinger field; Rayleigh-Schrödinger perturbation theory; Schrödinger logics; cat states
Notable awards: Nobel Prize in Physics (1933)
Spouse: Annemarie Bertel (m. 1920-1965)
Erwin Rudolf Josef Alexander Schrödinger (German pronunciation: [ˈɛʁviːn ˈʃʁøːdɪŋɐ]; 12 August 1887 – 4 January 1961) was a physicist and theoretical biologist who was one of the fathers of quantum mechanics, and is famed for a number of important contributions to physics, especially the Schrödinger equation, for which he received the Nobel Prize in Physics in 1933. In 1935, after extensive correspondence with personal friend Albert Einstein, he proposed the Schrödinger's cat thought experiment.
Paul Adrien Maurice Dirac
Born: Paul Adrien Maurice Dirac, 8 August 1902, Bristol, England
Died: 20 October 1984 (aged 82), Tallahassee, Florida, USA
Nationality: Switzerland (1902–1919); United Kingdom (1919–1984)
Fields: Physics (theoretical)
Institutions: University of Cambridge; Florida State University
Alma mater: University of Bristol; University of Cambridge
Doctoral advisor: Ralph Fowler
Doctoral students: Homi Bhabha; Harish-Chandra; Dennis Sciama; Behram Kurşunoğlu; John Polkinghorne
Known for: Dirac equation; Dirac comb; Dirac delta function; Fermi-Dirac statistics; Dirac sea; Dirac spinor; Dirac measure; bra-ket notation; Dirac adjoint; Dirac large numbers hypothesis; Dirac fermion; Dirac string; Dirac algebra; Dirac operator; Abraham-Lorentz-Dirac force; Dirac bracket; Fermi-Dirac integral; negative probability; Dirac picture; Dirac-Coulomb-Breit equation
Notable awards: Nobel Prize in Physics (1933); Copley Medal (1952); Max Planck Medal (1952)
He is the stepfather of Gabriel Andrew Dirac.
Paul Adrien Maurice Dirac, OM, FRS (/dɪˈræk/; 8 August 1902 – 20 October 1984) was an English theoretical physicist who made fundamental contributions to the early development of both quantum mechanics and quantum electrodynamics. He held the Lucasian Chair of Mathematics at the University of Cambridge and spent the last fourteen years of his life at Florida State University.
Among other discoveries, he formulated the Dirac equation, which describes the behaviour of fermions, and predicted the existence of antimatter.
Dirac shared the Nobel Prize in physics for 1933 with Erwin Schrödinger, "for the discovery of new productive forms of atomic theory."[1]
Dirac noticed an analogy between the Poisson brackets of classical mechanics and the recently proposed quantization rules in Werner Heisenberg's matrix formulation of quantum mechanics. This observation allowed Dirac to obtain the quantization rules in a novel and more illuminating manner. For this work, published in 1926, he received a Ph.D. from Cambridge.
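The correspondence can be stated in one line (the standard textbook form, not a quotation from Dirac's 1926 paper): a classical Poisson bracket goes over to a commutator divided by i\hbar,

\{A,B\}_{\mathrm{PB}}\;\longrightarrow\;\frac{1}{i\hbar}\left[\hat A,\hat B\right], \qquad\text{so that}\quad \{x,p\}_{\mathrm{PB}}=1\;\longrightarrow\;[\hat x,\hat p]=i\hbar.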
In 1928, building on 2x2 spin matrices which he discovered independently of Wolfgang Pauli's work on non-relativistic spin systems (Abraham Pais quoted Dirac as saying, "I believe I got these [matrices] independently of Pauli and possibly Pauli got these independently of me"),[19] he proposed the Dirac equation as a relativistic equation of motion for the wavefunction of the electron.[20] This work led Dirac to predict the existence of the positron, the electron's antiparticle, which he interpreted in terms of what came to be called the Dirac sea.[21] The positron was observed by Carl Anderson in 1932. Dirac's equation also contributed to explaining the origin of quantum spin as a relativistic phenomenon.
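In modern covariant notation the equation takes the compact standard form below (written in natural units with \hbar = c = 1; this presentation is a textbook convention, not a quotation from Dirac's paper):

\left(i\gamma^\mu\partial_\mu - m\right)\psi = 0

Here the \gamma^\mu are the 4x4 Dirac matrices and \psi is the four-component spinor wavefunction.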
The necessity of fermions, that is, of matter being created and destroyed, as in Enrico Fermi's 1934 theory of beta decay, led to a reinterpretation of Dirac's equation as a "classical" field equation for any point particle of spin ħ/2, itself subject to quantization conditions involving anti-commutators. Thus reinterpreted by Werner Heisenberg in 1934 as a quantum field equation accurately describing all elementary matter particles (today, quarks and leptons), this Dirac field equation is as central to theoretical physics as the Maxwell, Yang-Mills and Einstein field equations. Dirac is regarded as the founder of quantum electrodynamics, being the first to use that term. He also introduced the idea of vacuum polarization in the early 1930s. This work was key to the development of quantum mechanics by the next generation of theorists, in particular Schwinger, Feynman, Sin-Itiro Tomonaga and Dyson in their formulation of quantum electrodynamics.
Dirac's Principles of Quantum Mechanics, published in 1930, is a landmark in the history of science. It quickly became one of the standard textbooks on the subject and is still used today. In that book, Dirac incorporated the previous work of Werner Heisenberg on matrix mechanics and of Erwin Schrödinger on wave mechanics into a single mathematical formalism that associates measurable quantities to operators acting on the Hilbert space of vectors that describe the state of a physical system. The book also introduced the delta function. Following his 1939 article,[22] he also included the bra-ket notation in the third edition of his book,[23] thereby contributing to its universal use nowadays.
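Two ingredients mentioned here can be written out in the notation the book made standard (these are generic textbook formulas, chosen by me as illustrations): the delta function expresses the orthogonality of continuum basis states, and a state is expanded over a discrete basis as

\langle x|x'\rangle=\delta(x-x'),\qquad |\psi\rangle=\sum_n c_n|n\rangle\quad\text{with}\quad c_n=\langle n|\psi\rangle.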
In 1933, following his 1931 paper on magnetic monopoles, Dirac showed that the existence of a single magnetic monopole in the universe would suffice to explain the observed quantization of electrical charge. In 1975,[24] 1982,[25] and 2009[26][27][28] intriguing results suggested the possible detection of magnetic monopoles, but there is, to date, no direct evidence for their existence.
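Dirac's quantization condition is usually written as below (Gaussian units; the notation is the modern standard one rather than that of the 1931 paper): for an electric charge e and a magnetic charge g,

\frac{e\,g}{\hbar c}=\frac{n}{2},\qquad n\in\mathbb{Z},

so the existence of even a single monopole g would force every electric charge to be an integer multiple of \hbar c/(2g).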
Dirac's quantum electrodynamics made predictions that were – more often than not – infinite and therefore unacceptable. A workaround known as renormalization was developed, but Dirac never accepted it. "I must say that I am very dissatisfied with the situation," he said in 1975, "because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!"[29] His refusal to accept renormalization meant that his work on the subject moved increasingly out of the mainstream. Nevertheless, working from his once-rejected notes, he set out to put quantum electrodynamics on "logical foundations" based on the Hamiltonian formalism that he had formulated. In 1963 he found a rather novel way of deriving the anomalous magnetic moment (the "Schwinger term") and the Lamb shift afresh, using the Heisenberg picture and without using the joining method employed by Weisskopf and French, two of the pioneers of modern QED, and by Schwinger and Feynman. That was two years before the Tomonaga-Schwinger-Feynman QED was given formal recognition by the award of the Nobel Prize in physics. Weisskopf and French were the first to obtain the correct result for the Lamb shift and the anomalous magnetic moment of the electron; at first their results did not agree with the incorrect but independent results of Feynman and Schwinger (Schweber, S. S., 1994, QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga, Princeton: Princeton University Press). The lectures Dirac gave on quantum field theory at Yeshiva University in 1963-1964 were published in 1966 as Belfer Graduate School of Science Monograph Series Number 3. After relocating to Florida in order to be near his elder daughter, Mary, Dirac spent his last fourteen years, of both life and physics research, at the University of Miami in Coral Gables, Florida, and at Florida State University in Tallahassee, Florida.
In the 1950s, in his search for a better QED, Paul Dirac developed the Hamiltonian theory of constraints (Canad. J. Math. 1950, vol. 2, 129; 1951, vol. 3, 1), based on lectures he delivered at the 1949 International Mathematical Congress in Canada. Dirac (1951, "The Hamiltonian Form of Field Dynamics," Canad. J. Math., vol. 3, 1) had also solved the problem of putting the Tomonaga-Schwinger equation into the Schrödinger representation (see Phillips, R. J. N. 1987. Tributes to Dirac, p. 31. London: Adam Hilger) and given explicit expressions for the scalar meson field (spin-zero pion or pseudoscalar meson), the vector meson field (spin-one rho meson), and the electromagnetic field (spin-one massless boson, the photon).
The Hamiltonian theory of constrained systems is one of Dirac's many masterpieces. It is a powerful generalization of Hamiltonian theory that remains valid for curved spacetime. The equations for the Hamiltonian involve only six degrees of freedom, described by $g_{rs}$, $p^{rs}$, for each point of the surface on which the state is considered. The $g_{m0}$ ($m = 0, 1, 2, 3$) appear in the theory only through the variables $g^{r0}$ and $(-g^{00})^{-1/2}$, which occur as arbitrary coefficients in the equations of motion:

$$H = \int d^3x \left[ (-g^{00})^{-1/2} H_L - \frac{g^{r0}}{g^{00}} H_r \right]$$

There are four constraints, or weak equations, for each point of the surface $x^0 = \text{const}$. Three of them, the $H_r$, form a vector density in the surface; the fourth, $H_L$, is a 3-dimensional scalar density in the surface:

$$H_L \approx 0; \qquad H_r \approx 0 \quad (r = 1, 2, 3)$$
In the late 1950s he applied the Hamiltonian methods he had developed to cast Einstein's general relativity in Hamiltonian form (Proc. Roy. Soc. A 1958, vol. 246, 333; Phys. Rev. 1959, vol. 114, 924), bringing the quantization problem of gravitation to a technical completion and, according to Salam and DeWitt, bringing it closer to the rest of physics. Also in 1959 he gave an invited talk on "Energy of the Gravitational Field" at the New York meeting of the American Physical Society, later published as Phys. Rev. Lett. 2, 368 (1959). In 1964 he published his Lectures on Quantum Mechanics (London: Academic), which deals with the constrained dynamics of nonlinear dynamical systems, including the quantization of curved spacetime. He also published a paper entitled "Quantization of the Gravitational Field" in the 1967 ICTP/IAEA Trieste Symposium on Contemporary Physics.
If one considers waves moving in the direction $x^3$, resolved into the corresponding Fourier components ($r, s = 1, 2, 3$), the variables in the degrees of freedom 13, 23, 33 are affected by changes in the coordinate system, whereas those in the degrees of freedom 12 and (11−22) remain invariant under such changes. The expression for the energy splits up into terms, each associated with one of these six degrees of freedom, without any cross terms associated with two of them. The degrees of freedom 13, 23, 33 do not appear at all in the expression for the energy of gravitational waves in the direction $x^3$. The two degrees of freedom 12 and (11−22) each contribute a positive definite amount representing the energy of gravitational waves. These two degrees of freedom correspond, in the language of quantum theory, to gravitational photons (gravitons) with spin +2 or −2 along their direction of motion. The degree of freedom (11+22) gives rise to the Newtonian potential energy term, showing that the gravitational force between two positive masses is attractive and that the self-energy of every mass is negative.
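In modern transverse-traceless notation this correspondence can be made explicit. The identification below of the two radiative degrees of freedom with the standard polarization amplitudes is a sketch consistent with the discussion above, not a quotation from Dirac's papers. For a wave propagating along $x^3$, the two invariant combinations are the usual plus and cross polarizations,

$$h_+ = \tfrac{1}{2}\left(h_{11} - h_{22}\right), \qquad h_\times = h_{12},$$

and the circular combinations $h_+ \pm i\,h_\times$ carry helicity $\pm 2$, i.e., they correspond to gravitons of spin +2 or −2 along the direction of motion.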
Amongst his many students was John Polkinghorne, who recalls that Dirac "was once asked what was his fundamental belief. He strode to a blackboard and wrote that the laws of nature should be expressed in beautiful equations."[30]
Sources:
1. Wikipedia
2. Nobel Prize Org.
Acknowledgments:
1. DEPDIKNAS Republik Indonesia
2. Kementrian Riset dan Teknologi Indonesia
3. Lembaga Ilmu Pengetahuan Indonesia (LIPI)
4. Akademi Ilmu Pengetahuan Indonesia
5. Tim Olimpiade Fisika Indonesia
Recompiled by:
Arip Nurahman
Pendidikan Fisika, FPMIPA, Universitas Pendidikan Indonesia
Follower of OpenCourseWare at MIT and Harvard University, USA.
Hope this is useful, and thank you.
Search Results: 1 - 10 of 5502 matches for " Bernard Schaeffer "
Pairing Effect on the Binding Energy Curve of N = Z Atomic Nuclei [PDF]
Bernard Schaeffer
World Journal of Nuclear Science and Technology (WJNST), 2013, DOI: 10.4236/wjnst.2013.33013
The saw-tooth pattern on the binding energy curve of N = Z nuclei is due to the low binding energy between the α-particles. Gamow suspected this binding to be of van der Waals type; it is found here to consist of deuteron bonds. The binding energy per nucleon, in absolute value, of an α-particle is larger than that of any other combination of 4 nucleons. Therefore, the binding energy per nucleon is low for odd-odd N = Z nuclei and maximal for even-even N = Z nuclei. The assumption that N = Z nuclei are an assembly of α-particles and deuteron bonds predicts the binding energies of the first 32 N = Z nuclei with an rms deviation of 0.25 MeV.
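To make the counting concrete, here is a minimal sketch of the alpha-cluster-plus-deuteron-bond picture described in the abstract. The bond counts and the per-bond energy value are illustrative assumptions, not the parameters actually fitted in the paper.

```python
# Minimal sketch of an alpha-cluster + deuteron-bond binding-energy model.
# The per-bond energy E_BOND and the bond counts are illustrative guesses,
# not the fitted values from Schaeffer's paper.
B_ALPHA = 28.3  # experimental binding energy of one alpha particle, MeV
E_BOND = 2.2    # assumed energy per deuteron-like bond, MeV (hypothetical)

def binding_energy(n_alpha: int, n_bonds: int) -> float:
    """Binding energy of an N = Z nucleus modeled as n_alpha alpha
    particles held together by n_bonds deuteron-like bonds."""
    return n_alpha * B_ALPHA + n_bonds * E_BOND

# 8Be as two alphas with one bond, 12C as three alphas with three bonds:
for name, (n_alpha, n_bonds, A) in {"8Be": (2, 1, 8), "12C": (3, 3, 12)}.items():
    B = binding_energy(n_alpha, n_bonds)
    print(f"{name}: B = {B:.1f} MeV, B/A = {B / A:.2f} MeV per nucleon")
```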
IUPAC Periodic Table Quantum Mechanics Consistent [PDF]
Bernard Schaeffer
Journal of Modern Physics (JMP), 2014, DOI: 10.4236/jmp.2014.53020
Most periodic tables of the chemical elements are between 96% and 100% in accord with quantum mechanics. Only three elements do not fit correctly into the official tables, in disagreement with the spherical harmonics and the Pauli exclusion principle. Helium, belonging to the s-block, should be placed beside hydrogen in the s-block instead of in the p-block. Lutetium and lawrencium, belonging to the d-block of the transition metals, should not be in the f-block of the lanthanides or the actinoids. With these slight modifications, the IUPAC table becomes consistent with quantum mechanics.
Electromagnetic Schrödinger Equation of the Deuteron 2H (Heavy Hydrogen) [PDF]
Bernard Schaeffer
World Journal of Nuclear Science and Technology (WJNST), 2014, DOI: 10.4236/wjnst.2014.44029
Abstract: The binding energy of the deuteron is calculated electromagnetically with the Schrödinger equation. In mainstream nuclear physics, the only recognized Coulomb force is the repulsion between protons, which is absent in the deuteron. It is ignored that a proton attracts a neutron, which contains electric charges with no net charge, and that the magnetic moments of the nucleons interact significantly. A static equilibrium exists in the deuteron between the electrostatic attraction and the magnetic repulsion. The Heitler equation of the hydrogen atom has been adapted to its nucleus, with the centrifugal force replaced by the repulsive magnetic force, and solved graphically, by trial and error, without fitting to experiment. One obtains, at the lowest horizontal inflection point, the experimental value of the deuteron binding energy to within a few percent. This success, never obtained elsewhere, proves the purely static and electromagnetic nature of the nuclear energy.
Anomalous Rutherford Scattering Solved Magnetically [PDF]
Bernard Schaeffer
World Journal of Nuclear Science and Technology (WJNST), 2016, DOI: 10.4236/wjnst.2016.62010
Abstract: After one century of nuclear physics, the anomalous Rutherford scattering remains a puzzle: its underlying fundamental laws are still missing. The only presently recognized electromagnetic interaction in a nucleus is the so-called Coulomb electric force, in 1/r, only positive and thus repulsive in official nuclear physics, which explains Rutherford scattering at low kinetic energy of the impacting alpha particles. At high kinetic energy the Rutherford scattering formula doesn't work, hence the name "anomalous scattering". I have discovered that, to solve the problem, one needs only to replace, at high kinetic energy, the repulsive Coulomb electric potential in 1/r by the also repulsive magnetic Poisson potential in 1/r³. In log-log coordinates, one observes two straight lines of slopes −2 and −6, respectively. They correspond to the −1 and −3 exponents of the only repulsive electric and magnetic interactions, multiplied by 2 due to the cross-sections. Both normal and anomalous Rutherford scattering have been calculated electromagnetically. No attractive force is needed.
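For reference, the low-energy half of that scaling claim matches the standard result: the Rutherford cross-section for a pure 1/r Coulomb potential falls off as the inverse square of the kinetic energy, giving slope −2 in log-log coordinates. The extension to slope −6 for a 1/r³ potential is the author's claim, quoted here only as the stated pattern:

$$\frac{d\sigma}{d\Omega} = \left( \frac{Z_1 Z_2 e^2}{4E} \right)^{2} \frac{1}{\sin^4(\theta/2)} \;\propto\; E^{-2},$$

so a plot of $\log(d\sigma/d\Omega)$ against $\log E$ has slope −2; by the same doubling-of-exponents pattern, a $1/r^3$ potential would give slope −6.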
Electromagnetic Theory of the Nuclear Interaction [PDF]
Bernard Schaeffer
Abstract: After one century of nuclear physics, its underlying fundamental laws remain a puzzle. Rutherford scattering is well known to be electric at low kinetic energy. Nobody noticed that the Rutherford scattering formula also works at high kinetic energy, needing only the replacement of the repulsive electric −2 exponent by the also repulsive magnetic −6 exponent. A proton attracts a not-so-neutral neutron as amber attracts dust. The nucleons have magnetic moments that interact as magnets, statically equilibrating the electric attraction between a proton and a not-so-neutral neutron. In this paper, the electromagnetic potential energies of the deuteron 2H and the α particle 4He have been calculated statically, using only electromagnetic fundamental laws and constants. Nuclear scattering and binding energy are both electromagnetic.
Ab Initio Calculation of 2H and 4He Binding Energies [PDF]
B. Schaeffer
Journal of Modern Physics (JMP), 2012, DOI: 10.4236/jmp.2012.311210
Abstract: The binding energies of all hydrogen isotopes have been calculated successfully for the first time in a previous paper [J Fusion Energy, 30 (2011) 377], using only the electric and magnetic Coulomb’s laws, without using the hypothetical shell model of the nucleus and its mysterious strong force. In this paper, an elementary calculation gives the order of magnitude of the nuclear interaction. The binding energies of the deuteron and the alpha particle are then calculated by taking into account the proton induced electric dipole in the neutron. The large binding energy per nucleon of 4He, as compared to that of 2H, has been explained by a larger electric attraction combined with a lower magnetic repulsion. The binding energies have been calculated without fitting, using only fundamental laws and constants, proving that the nuclear interaction is only electromagnetic.
The effects of incivility on nursing education [PDF]
Amy Schaeffer
Open Journal of Nursing (OJN), 2013, DOI: 10.4236/ojn.2013.32023
Incivility in the population has become of great interest within the past decade, particularly in the wake of the school massacre in Columbine and the recent movie theatre mass murder in Aurora, Colorado. While citizens struggle to make sense of these violent behaviors, higher education officials are perhaps most vested in exploring the causes, displays, and solutions to uncivil behavior among both faculty and students. The effects of incivility, whether classified as minor disruptions or major violence, may affect the student nurse and impede his or her progress and ability to become an empathic nurse, which is a goal of nursing education. Academic incivility may contribute to bullying in the workplace, which has been identified as a cause of attrition and contributes to the national nursing shortage. This article describes the effects of uncivil behavior on nursing faculty and students and the effect this may have on the nursing workforce.
Electromagnetic Nature of Nuclear Energy: Application to H and He Isotopes [PDF]
B. Schaeffer
The one-million-fold ratio between nuclear and chemical energies is generally attributed to a mysterious strong force, still unknown after one century of nuclear physics. It is now time to reconsider from the beginning the assumptions used, mainly the uncharged neutron and the orbital motion of the nucleons. Except for the long-range Coulomb repulsion, the electric and magnetic Coulomb forces between adjoining nucleons are generally assumed by nuclear specialists to be negligible in the atomic nucleus. The Schrödinger equation with a centrifugal force, as in the Bohr model of the atom, is unable to predict the binding energy of a nucleus. In contrast, the attractive electric and repulsive magnetic Coulomb forces alone explain quantitatively the binding energies of the hydrogen and helium isotopes. For the first time, with analytical formulas, the precision varies between 1 and 30 percent without fitting, adjustment, correction or estimation, proving the electromagnetic nature of the nuclear energy.
Evaluation of Contact Pressure in Bending under Tension Test by a Pressure Sensitive Film [PDF]
Luis Fernando Folle, Lirio Schaeffer
Journal of Surface Engineered Materials and Advanced Technology (JSEMAT), 2016, DOI: 10.4236/jsemat.2016.64018
Abstract: The contact pressure acting on the sheet/tool interface has been studied because of growing concern about tool wear. Recent studies use numerical simulation software to evaluate this pressure and correlate it with the friction and wear generated. Since many studies determine the coefficient of friction in sheet metal forming by the bending under tension (BUT) test, the contact pressure between the pin and the sheet was measured here using a film that records the applied pressure. The vertical force applied to the pin was also measured. The results indicate that the measured vertical force sets the contact pressure more accurately than predetermined equations do. It was also observed that the contact area between the sheet and the pin is always smaller than the area calculated geometrically. The friction coefficient was determined for the BUT test through several equations proposed by various authors, in order to check how much the results vary between them. The friction coefficient showed little variation from one equation to another, so any of them can be used. The material used was commercially pure aluminum, alloy Al1100.
Equiurídeos da Ilha Grande (Estado do Rio de Janeiro, Brasil)
Schaeffer, Yara;
Boletim do Instituto Oceanográfico, 1972, DOI: 10.1590/S0373-55241972000100004
Abstract: The present paper presents observations on the ecology and systematics of the Echiurida from Ilha Grande, Rio de Janeiro State, Brazil. Some hydrographic data, such as salinity, temperature, oxygen content and mean sediment grain size, are correlated with the occurrence of the Echiurida. A new species, Thalassema liliae, is described. Lissomyema exilii (F. Müller) is recorded for the first time in this area. The zoogeographic distribution of the family Echiurida is also given.
Determinism and Indeterminism
Determinism is a rich and varied concept. At an abstract level of analysis, Jordan Howard Sobel (1998) identifies at least ninety varieties of what determinism could be like. When it comes to thinking about what deterministic laws and theories in physical sciences might be like, the situation is much clearer. There is a criterion by which to judge whether a law, expressed as some form of equation, is deterministic. A theory would then be deterministic just in case all its laws taken as a whole were deterministic. In contrast, if a law fails this criterion, then it is indeterministic and any theory whose laws taken as a whole fail this criterion must also be indeterministic. Although it is widely believed that classical physics is deterministic and quantum mechanics is indeterministic, application of this criterion yields some surprises for these standard judgments.
Framework for Physical Theories
Laws and theories in physics are formulated in terms of dynamical or evolution equations. These equations are taken to describe the change in time of the relevant variables characterizing the system in question. Additionally, a complete specification of the initial state, referred to as the initial conditions for the system, and/or a characterization of the boundaries for the system, known as the boundary conditions, must also be given. A state is taken to be a description of the values of the variables characterizing the system at some time t. As a simple example of a classical model, consider a cannon firing a ball. The initial conditions would be the initial position and velocity of the ball as it left the mouth of the cannon. The evolution equation plus these initial conditions would then describe the path of the ball.
Much of the analysis of physical systems takes place in what is called state space, an abstract mathematical space composed of the variables required to fully specify the state of a system. Each point in this space then represents a possible state of the system at a particular time t through the values these variables take on at t. For example, in many typical dynamical models, constructed to satisfy the laws of a given theory, the position and momentum serve as the coordinates, so the model can be studied in state space by following its trajectory from the initial state $(q_0, p_0)$ to some final state $(q_f, p_f)$. The evolution equations govern the path, the history of state transitions, of the system in state space.
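As a concrete illustration of this framework, here is a minimal sketch in Python of the cannonball model mentioned above: the state is the position-momentum pair, the evolution equation is Newton's second law under constant gravity, and the trajectory in state space is fixed once the initial conditions are given. The step size and initial values are arbitrary illustrative choices, not anything from this entry.

```python
# Minimal sketch: a state-space trajectory for a projectile under constant
# gravity. The state is (q, p) = (position, momentum); the evolution
# equations plus the initial conditions fix the entire history.
G = 9.81   # gravitational acceleration, m/s^2
M = 1.0    # projectile mass, kg
DT = 0.01  # integration step, s (arbitrary illustrative choice)

def evolve(q, p, steps):
    """Integrate dq/dt = p/M, dp/dt = -M*G and return the state history."""
    history = [(q, p)]
    for _ in range(steps):
        p = p - M * G * DT    # momentum update from the force law
        q = q + (p / M) * DT  # position update from the new momentum
        history.append((q, p))
    return history

# The same initial state always yields the same history of state transitions:
run1 = evolve(q=0.0, p=20.0, steps=100)
run2 = evolve(q=0.0, p=20.0, steps=100)
assert run1 == run2  # deterministic: identical initial data, identical path
```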
However, note that there are important assumptions being made here. Namely, that a state of a system is characterized by the values of the crucial variables and that a physical state corresponds to a point in state space through these values. This cluster of assumptions can be called the faithful model assumption. This assumption allows one to develop mathematical models for the evolution of these points in state space and such models are taken to represent (perhaps through a complicated relation) the physical systems of interest. In other words, one assumes that one's mathematical models are faithful representations of physical systems and that the state space is a faithful representation of the space of physically genuine possibilities for the system in question. Hence, one has the connection between physical systems and their laws and models, provided the latter are faithful. It then remains to determine whether these laws and models are deterministic or not.
Laplacean Determinism
Clocks, cannon balls fired from cannons, and the solar system are taken to be paradigm examples of deterministic systems in classical physics. In the practice of physics one is able to give a general and precise description of deterministic systems. For definiteness the focus here is on classical particle mechanics, the inspiration for Pierre Simon Laplace's famous description:
We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world; to it nothing would be uncertain, the future as well as the past would be present to its eyes. (Translation from Nagel 1961, pp. 281-282)
Given all the forces acting on the particles composing the universe along with their exact positions and momenta, then the future behavior of these particles is, in principle, completely determined.
Two historical remarks are in order here. First, Laplace's primary aim in this famous passage was to contrast the concepts of probability and certainty. Second, Gottfried Wilhelm Leibniz (1924, p. 129) articulated this same notion of inevitability in terms of particle dynamics long before Laplace. Nevertheless, it was the vision that Laplace articulated that has become a paradigm example for determinism in physical theories.
This vision may be articulated in the modern framework as follows. Suppose that the physical state of a system is characterized by the values of the positions and momenta of all the particles composing the system at some time t. Furthermore, suppose that a physical state corresponds to a point in state space (invoking the faithful model assumption). One can then develop deterministic mathematical models for the evolution of these points in state space. Some have thought that the key feature characterizing this determinism was that given a specification of the initial state of a system and the evolution equations governing its states, in principle it should be possible to predict the behavior of the system for any time (recall Laplace's contrast between certainty and probability). Although prima facie plausible, such a condition is neither necessary nor sufficient for a deterministic law because the relationship of predictability to determinism is far too weak and subtle.
Rather, the core feature of determinism is the following condition: "Unique evolution: A given state is always followed (and preceded) by the same history of state transitions." This condition expresses the Laplacean belief that systems described by classical particle mechanics will repeat their behaviors exactly if the same initial and boundary conditions are specified. For example, the equations of motion for a frictionless pendulum will produce the same solution for the motion as long as the same initial velocity and initial position are chosen. Roughly speaking, the idea is that every time one returns the mathematical model to the same initial state (or any state in the history of state transitions), it will undergo the same history of transitions from state to state and likewise for the target system. In other words, the evolution will be unique given a specification of initial and boundary conditions. Note that as formulated, unique evolution expresses state transitions in both directions (future and past). It can easily be recast to allow for unidirectional state transitions (future only or past only) if desired.
Unique Evolution
Unique evolution is the core of the Laplacean vision for determinism (it lies at the core of Leibniz's statement as well). Although a strong requirement, it is important if determinism is to be meaningfully applied to laws and theories. Imagine a typical physical system s as a film. Satisfying unique evolution means that if the film is started over and over at the same frame (returning the system to the same initial state), then s will repeat every detail of its total history over and over again and identical copies of the film would produce the same sequence of pictures. So if one always starts Jurassic Park at the beginning frame, it plays the same. The tyrannosaurus as antihero always saves the day. No new frames are added to the movie. Furthermore, if one were to start with a different frame, say a frame at the middle of the movie, there is still a unique sequence of frames.
By way of contrast, suppose that returning s to the same initial state produced a different sequence of state transitions on some of the runs. Consider a system s to be like a device that spontaneously generates a different sequence of pictures on some occasions when starting from the same initial picture. Imagine further that such a system has the property that simply by choosing to start with any picture normally appearing in the sequence, sometimes the chosen picture is not followed by the usual sequence of pictures. Or imagine that some pictures often do not appear in the sequence, or that new ones are added from time to time. Such a system would fail to satisfy unique evolution and would not qualify as deterministic.
More formally, one can define unique evolution in the following way. Let $S$ stand for the collection of all systems sharing the same set $L$ of physical laws, and suppose that $P$ is the set of relevant physical properties for specifying the time evolution of a system described by $L$: a system $s \in S$ exhibits unique evolution if and only if every system $s' \in S$ that is isomorphic to $s$ with respect to $P$ undergoes the same evolution as $s$.
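The film analogy can be made concrete with a toy model. The following sketch, whose transition rule is an arbitrary choice purely for illustration, contrasts a deterministic rule, which replays the same history from the same initial state, with a noisy rule that violates unique evolution.

```python
import random

# Toy state-transition rules on a one-dimensional state x.
def deterministic_step(x):
    return 3.5 * x * (1.0 - x)  # a fixed rule: same state, same successor

def noisy_step(x):
    # The same rule plus a small chance element: same state, varying successor.
    return 3.5 * x * (1.0 - x) + random.gauss(0.0, 1e-6)

def history(step, x0, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1]))
    return xs

# Deterministic rule: restarting from the same frame replays the same film.
assert history(deterministic_step, 0.3) == history(deterministic_step, 0.3)

# Noisy rule: the same initial state is followed by different histories
# (with overwhelming probability), so unique evolution fails.
assert history(noisy_step, 0.3) != history(noisy_step, 0.3)
```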
Two Construals of Unique Evolution
Abstracting from the context of physical theories for the moment, unique evolution can be given two construals. The first construal is as a statement of causal determinism, that every event is causally determined by an event taking place at some antecedent time or times. This reading of unique evolution fits nicely with how a number of philosophers conceive of metaphysical, physical, and psychological determinism as theses about the determination of events in causal chains, where there is a flow from cause to effect that may be continuous or have gaps. The second construal of unique evolution is as a statement of difference determinism characterized by William James as "[t]he whole is in each and every part, and welds it with the rest into an absolute unity, an iron block, in which there can be no equivocation or shadow of turning" (1956, p. 150). This reading of unique evolution maintains that a difference at any time requires a difference at every time.
These two construals of unique evolution are different. For example, consider a fast-starting series of causally linked states (Sobel 1998) where every state in the series has an earlier determining cause, but the series itself has no antecedent deterministic cause (its beginning, the first state, is undetermined by prior events or may have a probabilistic cause) and no state in the series occurs before a specified time. The principle that every event has an earlier cause would fail for a fast-starting series as a whole though it would hold for the events within such a series. This would be an example where causal determinism failed, but where difference determinism would still hold.
However, the causal construal of unique evolution is unsatisfactory. Concepts like event or causation are vague and controversial. One might suggest explicating causal determinism in terms of the laws L and properties P, but concepts like event and cause are not used in most physical theories (at least not univocally). In contrast, unique evolution fits the idea of difference determinism: any difference between s and s is reflected by different histories of state transitions. This latter construal of unique evolution only requires the normal machinery of the theoretical framework sketched earlier to cash out these differences and so avoids controversies associated with causal determinism.
Determinism in Classical Mechanics
Most philosophers take classical mechanics to be the archetype of a deterministic theory. Prima facie Newton's laws satisfy unique evolution. After all, these are ordinary differential equations and one has uniqueness and existence proofs for them. Furthermore, there is at least some empirical evidence that macroscopic objects behave approximately as these laws describe. Still, there are some surprises and controversy regarding the judgment that classical mechanics is a deterministic theory.
For example, as Keith Hutchinson (1993) notes, if the force function varies as the square root of the velocity, then a specification of the initial position and velocity of a particle does not fix a unique evolution of the particle in state space (indeed, the particle can sit stationary for an arbitrary length of time and then spontaneously begin to move). Hence, such a force law is not deterministic. There are a number of such force functions consistent with Newton's laws, but that fail to satisfy unique evolution. Therefore, the judgment that classical mechanics is a deterministic theory is false.
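A worked instance of this failure, a standard textbook example consistent with Hutchinson's point rather than his exact equations, shows why uniqueness breaks down: a square-root force law is not Lipschitz continuous at zero velocity. For $m\,\dot{v} = k\sqrt{v}$ with $v(0) = 0$, both

$$v(t) = 0 \qquad \text{and} \qquad v(t) = \left(\frac{k\,t}{2m}\right)^{2}$$

solve the equation; more generally, the particle may remain at rest until an arbitrary time $t_0$ and then move with $v(t) = \left(k\,(t - t_0)/2m\right)^{2}$. The initial state therefore fails to fix a unique history of state transitions.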
Newtonian Gravity
One might think that the set of force functions leading to violations of unique evolution represents an unrealistic set so that all force laws of classical mechanics really are deterministic. However, worries for determinism await one even in the case of point-particles interacting under Isaac Newton's force of gravity, the paradigm case of determinism that Laplace had in mind.
In 1897 the French mathematician Paul Painlevé conjectured that a system of point-particles interacting only under Newton's force of gravity could all accelerate to spatial infinity within a finite time interval. (The source of the energy needed for this acceleration is the infinite potential well associated with the inverse-square law of gravitation.) If particles could disappear to "spatial infinity," then unique evolution would break down because solutions to the equations of motion no longer would be guaranteed to exist. Painlevé's conjecture was proven by Zhihong Xia (1992) for a system of five point-masses.
Though provocative, these results are not without controversy. For example, there are two interesting possibilities for interpreting the status of these particles that have flown off to spatial infinity. On the one hand, one could say the particles have left the universe and now have some indefinite properties. On the other hand, one could say that the particles no longer exist. Newton's mechanics is silent on this interpretive question. Furthermore, are events such as leaving the universe to be taken as predictions of Newton's gravitational theory of point-particles, or as indications that the theory is breaking down because particle position becomes undefined? Perhaps such behavior is an artifact of a spatially infinite universe. If the universe is finite, particle positions are always bounded and such violations of unique evolution are not possible.
Other failures of unique evolution in classical mechanics can be found in John Earman's (1986) survey. What is one to say, then, about the uniqueness and existence theorems for the equations of motion, the theorems that appear so suggestive of unique evolution? The root problem of these failures to satisfy unique evolution can be traced back to the fact that one's mathematical theorems only guarantee existence and uniqueness locally in time. This means that the equations of motion only have unique solutions for some interval of time. This interval might be short and, as time goes on, the interval of time for which such solutions exist might get shorter or even shrink to zero in such a way that after some period solutions cease to exist. So determinism might hold locally, but this does not guarantee determinism must hold globally.
Determinism in Special and General Relativity
Special relativity provides a much more hospitable environment for determinism. This is primarily due to two features of the theory: (1) no process or signal can travel faster than the speed of light, and (2) the space-time structure is static. The first feature rules out unbounded-velocity systems, while the second guarantees there are no singularities in space-time. Given these two features, global existence and uniqueness theorems can be proven for cases like source-free electromagnetic fields so that unique evolution is not violated when appropriate initial data are specified on a space-like hypersurface. Unfortunately, when electromagnetic sources or gravitationally interacting particles are added to the picture, the status of unique evolution becomes much less clear.
In contrast, general relativity presents problems for guaranteeing unique evolution. For example, there are space-times for which there are no appropriate specifications of initial data on space-like hypersurfaces yielding global existence and uniqueness theorems. In such space-times, unique evolution is easily violated. Furthermore, problems for unique evolution arise from the possibility of naked singularities (singularities not hidden behind an event horizon). One way a singularity might form is from gravitational collapse. The usual model for such a process involves the formation of an event horizon (i.e., a black hole). Although a black hole has a singularity inside the event horizon, outside the horizon at least determinism is okay, provided the space-time supports appropriate specifications of initial data compatible with unique evolution. In contrast, a naked singularity has no event horizon. The problem here is that anything at all could pop out of a naked singularity, violating unique evolution. To date, no general, convincing forms of hypotheses ruling out such singularities have been proven (so-called cosmic censorship hypotheses).
Determinism in Quantum Mechanics
In contrast to classical mechanics philosophers often take quantum mechanics to be an indeterministic theory. Nevertheless, so-called pilot-wave theories pioneered by Louis de Broglie and David Bohm are explicitly deterministic while still agreeing with experiments. Roughly speaking, this family of theories treats a quantum system as consisting of both a wave and a particle. The wave evolves deterministically over time according to the Schrödinger equation and determines the motion of the particle. Hence, the particle's motion satisfies unique evolution. This is a perfectly coherent view of quantum mechanics and contrasts strongly with the more orthodox interpretation. The latter takes the wave to evolve deterministically according to Schrödinger's equation and treats particle-like phenomena indeterministically in a measurement process (such processes typically violate unique evolution because the particle system can be in the same state before measurement, but still yield many different outcomes after measurement). Pilot-wave theories show that quantum mechanics need not be indeterministic.
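The deterministic core of the pilot-wave picture can be stated in one line. Writing the wave function in polar form is standard in presentations of Bohmian mechanics, though the notation below is one common convention rather than anything specific to this entry. With $\psi = R\,e^{iS/\hbar}$ evolving under the Schrödinger equation, the particle position $Q$ follows the guidance equation

$$\frac{dQ}{dt} = \frac{\nabla S(Q, t)}{m},$$

so specifying $\psi$ at an initial time together with the initial position $Q(0)$ fixes a unique trajectory, satisfying unique evolution.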
Deterministic Chaos
Some philosophers have thought that the phenomenon of deterministic chaos, the extreme sensitivity of a variety of classical mechanics systems such that, roughly, even the smallest change in initial conditions can lead to vastly different evolutions in state space, might actually show that classical mechanics is not deterministic. However, there is no real challenge to unique evolution here, as each history of state transitions in state space is still unique to each slightly different initial condition.
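A minimal sketch of this point, using the logistic map as a stand-in for a chaotic classical system (an illustrative choice, not an example from this entry): two nearby initial conditions diverge rapidly, yet each trajectory is exactly reproducible from its own initial state.

```python
def logistic(x, r=4.0):
    return r * x * (1.0 - x)  # a standard chaotic map at r = 4

def trajectory(x0, n=40):
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.300000)
b = trajectory(0.300001)          # a tiny change in the initial condition

print(abs(a[-1] - b[-1]))         # the two histories end up far apart
assert a == trajectory(0.300000)  # yet each history is uniquely reproducible
```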
Of course, classical chaotic systems are typically considered as if there is no such thing as quantum mechanics. But suppose one considers a combined system such that quantum mechanics is the source of the small changes in initial conditions for one's classical chaotic system? Would such a system fail to satisfy unique evolution? The worry here is that, since there is no known lower limit to the sensitivity of classical chaotic systems, nothing can prevent the possibility of such systems amplifying a slight change in initial conditions due to a quantum event so that the evolution of the classical chaotic system is dramatically different than if the quantum event had not taken place. Indeed, some philosophers argue that unique evolution must fail in such circumstances.
However, such sensitivity arguments depend crucially on how quantum mechanics itself and measurements are interpreted, as well as on where the cut is made distinguishing between what is observed and what is doing the observing (e.g., is the classical chaotic system serving as the measuring device for the quantum change in initial conditions?). Although, considered abstractly, sensitivity arguments do correctly lead to the conclusion that quantum effects can be amplified by classical chaotic systems, they do not automatically render one's classical-plus-quantum system indeterministic. Furthermore, applying such arguments to concrete physical systems shows that the amplification process may be severely constrained. For example, investigating the role of quantum effects in the process of chaos in the friction of sliding surfaces indicates that quantum effects might be amplified by chaos to produce a difference in macroscopic behavior only if the fluctuations are large enough to break molecular bonds and are amplified quickly enough.
Broader Implications
Finally, what of broader implications of determinism and indeterminism in physical theories? Debates about free will and determinism are one place where the considerations in this entry might be relevant. One of the most discussed topics in this regard is the consequence argument, which may be put informally as follows: If determinism is true, then our acts are consequences of laws and events in the remote past. But what went on before we were born is not up to us and neither are the laws up to us. Therefore, the consequences of these laws and events, including our present acts, are not up to us. Whether or not the relevant laws satisfy unique evolution is one factor in the evaluation of this argument.
What of broader philosophical thinking about psychological determinism or the thesis that the universe is deterministic? For the former, it looks difficult to make any connection at all. One simply does not have any theories in the behavioral sciences that are amenable to analysis under the criterion of unique evolution. Indeed, attempts to apply the criterion in psychology do not lead to clarification of the crucial issues (Bishop 2002).
With regards to the universe, it has been common practice since the seventeenth century for philosophers to look to their best scientific theories as guides to the truth of determinism. As one has seen, the current best theories in physics are remarkably unclear about the truth of determinism in the physical sciences, so the current guides do not appear to be so helpful. Even if the best theories were clear on the matter of determinism in their province, there is a further problem awaiting their application to metaphysical questions about the universe as a whole. Recall the crucial faithful model assumption. In many contexts this assumption is fairly unproblematic. However, if the system in question is nonlinear, that is to say, has the property that a small change in the state or conditions of the system is not guaranteed to result in a small change in the system's behavior, this assumption faces serious difficulties (indeed, a strongly idealized version of the assumption, the perfect model scenario, is needed but also runs into difficulties regarding drawing conclusions about the systems one is modeling). Since the universe is populated with such systems, and is indeed likely to be nonlinear itself, one's purchase on applying the best laws and theories to such systems or to the universe as a whole to answer the large metaphysical question about determinism is problematic.
See also Determinism, A Historical Survey; Determinism in History; Philosophy of Physics; Quantum Mechanics.
Relevant Historical Material on Determinism:
James, William. "The Dilemma of Determinism." In The Will to Believe and Other Essays in Popular Philosophy, and Human Immortality. New York: Dover Publications, 1956.
Laplace, Pierre Simon de. A Philosophical Essay on Probabilities. Translated by Frederick Wilson Truscott and Frederick Lincoln Emory. New York: Dover Publications, 1951.
Leibniz, Gottfried Wilhelm. "Von dem Verhängnisse." In Hauptschriften zur Grundlegung der Philosophie. Vol. 2, edited by Ernst Cassirer and Artur Buchenau. Leipzig, Germany: Meiner, 1924.
Nagel, Ernst. The Structure of Science: Problems in the Logic of Scientific Explanation. New York: Harcourt, Brace, and World, 1961.
Sobel, Jordan Howard. Puzzles for the Will: Fatalism, Newcomb and Samarra, Determinism and Omniscience. Toronto: University of Toronto Press, 1998.
Laplace's vision expressed in the modern framework of physical theories, as well as discussions of chaos, prediction, and determinism, may be found in:
Bishop, Robert C. "On Separating Predictability and Determinism." Erkenntnis 58 (2) (2003): 169-188.
Bishop, Robert C., and Frederick M. Kronz. "Is Chaos Indeterministic?" In Language, Quantum, Music: Selected Contributed Papers of the Tenth International Congress of Logic, Methodology, and Philosophy of Science, Florence, August 1995, edited by Maria Luisa Dalla Chiara, Roberto Giuntini, and Federico Laudisa. Boston: Kluwer Academic, 1999.
Hobbs, Jesse. "Chaos and Indeterminism." Canadian Journal of Philosophy 21 (1991): 141-164.
Stone, M. A. "Chaos, Prediction, and Laplacean Determinism." American Philosophical Quarterly 26 (1989): 123-131.
There are a number of able discussions of problems for determinism in physical theories. The following all discuss classical physics; see Earman (1986, 2004) for discussions of determinism in relativistic physics:
Earman, John. "Determinism: What We Have Learned and What We Still Don't Know." In Freedom and Determinism, edited by Joseph Keim Campbell, Michael O'Rourke, and David Shier, 21-46. Cambridge, MA: MIT Press, 2004.
Earman, John. A Primer on Determinism. Dordrecht, Netherlands: D. Reidel, 1986.
Hutchinson, Keith. "Is Classical Mechanics Really Time-Reversible and Deterministic?" British Journal for the Philosophy of Science 44 (1993): 307-323.
Xia, Zhihong. "The Existence of Noncollision Singularities in Newtonian Systems." Annals of Mathematics 135 (3) (1992): 411-468.
Uniqueness and existence proofs for differential equations are discussed by:
Arnold, V. I. Geometrical Methods in the Theory of Ordinary Differential Equations. 2nd ed. Translated by Joseph Szücs; edited by Mark Levi. New York: Springer-Verlag, 1988.
For a discussion of deterministic versions of quantum mechanics, see:
Bohm, David. Causality and Chance in Modern Physics. London: Routledge and Paul, 1957.
Cushing, James T. Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony. Chicago: University of Chicago Press, 1994.
Possible consequences of determinism for free will in terms of the consequence argument may be found in:
Kane, Robert, ed. The Oxford Handbook of Free Will. New York: Oxford University Press, 2001.
Van Inwagen, Peter. An Essay on Free Will. Oxford, U.K.: Clarendon Press, 1983.
For a discussion of difficulties in applying determinism as unique evolution to psychology, see:
Bishop, Robert C. "Deterministic and Indeterministic Descriptions." In Between Chance and Choice: Interdisciplinary Perspectives on Determinism, edited by Harald Atmanspacher and Robert C. Bishop. Thorverton, U.K.: Imprint Academic, 2002.
Elements of the faithful model assumption have received some scrutiny in recent physics literature. In particular, there is evidence that perfect models are not guaranteed to describe system behavior in nonlinear contexts:
Judd, Kevin, and Leonard A. Smith. "Indistinguishable States I: Perfect Model Scenario." Physica D 151 (2001): 125-141.
Judd, Kevin, and Leonard A. Smith. "Indistinguishable States II: Imperfect Model Scenarios." Physica D 196 (2004): 224-242.
Smith, Leonard A. "Disentangling Uncertainty and Error: On the Predictability of Nonlinear Systems." In Nonlinear Dynamics and Statistics, edited by Alistair I. Mees. Boston: Birkhäuser, 2001.
Robert C. Bishop (2005)
Sunday, August 9, 2015
The integral form of the Schrödinger equation for discrete matter is:
Sunday, May 17, 2015
Augmenting Relativity
The overwhelming success of general relativity for mainstream science's macroscopic reality of continuous space and time cannot be overstated. Likewise, quantum mechanics represents an even more successful understanding of our microscopic reality of amplitude and phase. All of relativity's reported successes, though, are really due to two key notions: mass-energy equivalence (MEE) and the gravity time delay of light. Lorentz invariance, the constancy of the speed of light irrespective of velocity, follows directly from MEE and means that the gravity deflection of light follows from both light's gravity MEE and the extra time delay of light.
Likewise, it is the quantum coherence of microscopic matter as amplitude and phase that is largely responsible for quantum's microscopic success stories. The quantum story is built upon space and motion, just as the GR story is, but while GR's space and motion do not apply everywhere in the universe, quantum amplitude and phase apply everywhere. In GR, or mainstream science, velocity and acceleration in empty space make up frames of reference from which emerge changes in inertial matter and time delays of light. Gravity affects light once as light's MEE mass and then again as gravity's acceleration and redshift, and so the gravity deflection of light is twice that of light's MEE gravity deflection alone.
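The factor of two invoked here matches the standard textbook comparison. The following expressions for the deflection of a light ray grazing a mass $M$ at impact parameter $b$ are the conventional results, not anything derived in this post. A "Newtonian" or MEE-only treatment of light gives

$$\theta_{\text{MEE}} = \frac{2GM}{c^2 b},$$

while full general relativity gives twice this,

$$\theta_{\text{GR}} = \frac{4GM}{c^2 b} \approx 1.75''$$

for light grazing the Sun.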
Augmenting continuous space, motion, and time with the more general notions of discrete matter and time extends the validity of gravity to all of the universe. In a sense, this means that space and motion actually lie within the domain of discrete changes in inertial matter and the time delay of light by gravity, not the other way around. In other words, augmenting continuous space and time means that the basic principles of MEE and gravity time delays still apply to that part of the universe. However, the spatio-temporal tensors of GR do not apply outside the limits of continuous space and time, and so a change in inertial matter emerges as motion in spatial frames of reference, and it is from changes in gravity time delay that space emerges. Thus, space and motion are both within the domain of changes in inertial matter and time delays, and not the other way around, as shown in the figure below. The total time delay for light due to gravity is, after all, a factor of two greater than that of light's gravity MEE time delay alone.
Any model of the universe with both gravity MEE and time delay will also be consistent with the observed gravity light deflections, but there are further notions of relativity that do not necessarily follow from gravity MEE and time delay. For example, GR lacks an absolute frame of reference even though the CMB seems to be an absolute frame of reference; granting an absolute CMB frame simply limits the scale for GR tensor algebra.
Also, the determinate geodesic paths of GR objects in a 4D spacetime are inconsistent with the microscopic probabilistic quantum paths of the very successful quantum action. In fact, the determinate GR geodesics in effect do away with the quantum notion of time since time becomes just a GR displacement and it is the 4D geodesic paths that then determine the futures of all objects from the initial conditions of the universe.
In contrast, quantum mechanics shows by many different measurements that there are no determinate geodesic paths for quantum objects. In fact, there is a fundamental lack of knowledge of certain quantum paths, and a fundamental uncertainty principle limits all quantum paths. Yet despite the limitations of GR, the predictions of MEE and gravity time delay corrections allow our GPS satellites to work, and they explain the deflection of starlight and the time delays of quasar radio sources by the gravity of the sun, as well as the lensing of galaxies by other galaxies. All of these measurements are consistent with gravity MEE and time delays, and so any theory that incorporates MEE and gravity time delays will also be consistent with all of these observations.
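The GPS example can be made quantitative with the usual back-of-the-envelope numbers. The figures below are the standard textbook estimates for a satellite at roughly 20,200 km altitude, not calculations specific to this post. Gravitational blueshift and velocity time dilation give fractional clock-rate offsets of roughly

$$\frac{\Delta t_{\text{grav}}}{t} \approx \frac{GM_\oplus}{c^2}\left(\frac{1}{R_\oplus} - \frac{1}{r}\right) \approx +5.3\times10^{-10}, \qquad \frac{\Delta t_{\text{vel}}}{t} \approx -\frac{v^2}{2c^2} \approx -0.8\times10^{-10},$$

for a net drift of about +38 μs per day; uncorrected, this would accumulate to kilometers of ranging error each day.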
The further notions of GR's lack of an absolute frame of reference and of GR's determinate geodesics are then both open to question, and neither has been verified by measurement. The CMB does seem to represent an absolute frame of reference that necessarily closes all motion in the universe, and the well-demonstrated quantum uncertainty does seem to rule out any determinate GR geodesics. Thus there are still notable limitations embedded within general relativity despite GR's notable successes with gravity MEE and time delays. Furthermore, as science better understands the universe, the limitations of GR become even more apparent.
Black hole singularities are inconsistent with quantum action
Probably the most famous of all of general relativity's limitations is the notion of a black hole singularity. Given enough mass, light's gravity time delay will eventually be sufficient to capture light into a singularity and therefore stop atomic time at an event horizon, two well-worn predictions that simply cannot be the whole story.
Black hole event horizons are inconsistent with quantum action
A particle of matter that encounters the event horizon of a black hole is subject to two quite different predictions: gravity and quantum. According to much of the historical black hole modeling, such a particle simply becomes part of the mass accretion and loses all information about its past.
More recent calculations find that, prior to reaching the event horizon, a particle is ripped into successively smaller pieces, down to the very, very small Planck limit. Those tiny pieces of matter begin collapsing before they accrete and therefore never actually become part of the primary black hole. These eternally collapsing objects (ECOs) take the place of the primary black hole, but they do not really resolve the quantum paradox.
Quantum calculations predict something quite different for a particle of matter at an event horizon: a tearing into matter and antimatter particles, resulting in so-called Hawking radiation. The black hole event horizon turns into a quantum firewall and, just as with the ECO, accretion stops near the event horizon. There simply cannot be two very different fates for the same neutral particle.
Proper time is inconsistent with quantum time
Proper time is a key notion of GR, and that proper time becomes the fourth displacement of 4D spacetime. Ironically, time as a GR spatial displacement in effect does away with the uncertainty of time. Because all motion in GR occurs as a result of gravity along determinate geodesic paths, the future is completely determined by the past.
Quantum time, on the other hand, is both reversible and uncertain and there is no stopping quantum time at a GR event horizon or anywhere else in the universe. However, time is simply a quantum progress variable and there is therefore no quantum expectation value for a time duration or delay.
It is clear that the future of a given object cannot be both deterministic by the principles of GR and probabilistic by the principles of QM, and it is likely that both GR time and quantum time will therefore need some kind of augmentation.
Dark matter and dark energy not explained
Dark matter is an extra gravity correction that explains the stability of galaxies and galaxy clusters, while dark energy is yet another gravity correction needed to hold the universe together as the CMB. The absence of any sign of these gravity corrections in GR is a little disconcerting, and simply inventing matter and energy objects seems like a major flaw of GR.
Determinate geodesics inconsistent with quantum action
One of the basic assumptions of GR is that gravity action distorts or curves 4D spacetime and that objects simply follow predetermined geodesics as minimum-energy paths. Quantum action, of course, not only does not distort 4D spacetime; it also results in likely but not determinate futures. In quantum gravity, there will very likely be a number of possible futures instead of a determinate one.
Lack of amplitude, phase coherence, interference, and entanglement
Our quantum reality depends on both the phase as well as the amplitude of matter. However, gravity force in GR only deals with the norms of quadrupole matter and time and so there is no role for phase coherence or interference or entanglement with gravity. Since all of these notions of amplitude and phase figure prominently in quantum action, it is a major flaw in GR that there is no corresponding quantum monopole or quadrupole gravity to complete our quantum reality of dipole charge.
Planck limit inconsistent with quantum uncertainty principle
Once a particle gets small enough, its own gravity will collapse it into a microscopic event horizon where time stops and quantum action does not apply. But quantum action functions everywhere in the universe, even inside of black holes and there is no stopping quantum time. Quantum action limits the divisibility of matter and space to the uncertainty principle and to the quark, but there is still something wrong with quantum time.
No absolute frame in GR
The basic relativistic tensor math of GR depends on the absence of an absolute frame of reference within continuous space and time. However, the CMB seems to represent just such an absolute frame of reference for everywhere in the universe. In GR, the lack of an absolute frame means that we only see light in the universe within our event horizon or light cone and that there are past events that are now beyond that event horizon. For example, the universe expansion means that the CMB will eventually move beyond our event horizon in about one billion years or so.
It would seem much more likely that the CMB represents an absolute frame of reference that all can see and that necessarily closes the universe. We would not then be in an expanding universe at all, and the CMB would still be a CMB in one billion years, albeit somewhat evolved.
Quantum time is not consistent with proper time of GR
A determinate time in GR is incompatible with the uncertainty of quantum time. Quantum atomic clocks tick very precisely, but their precision is limited by the uncertainty principle. Moreover, gravity clocks that tick like millisecond pulsars are also very precise, and yet ms pulsar gravity clocks all decay. While that decay can be largely due to gravity and/or EM radiation, there is an average intrinsic decay as well, of 0.255 ppb/yr. That intrinsic decay means that ms pulsars tell two distinct times: their pulse periods and their average decay.
It is therefore likely that quantum time also has both atomic pulse periods and the same slow decay of atomic time as ms pulsars, 0.255 ppb/yr. This means that time actually has two dimensions, an atomic time period and a gravity decay period, and that two-dimensional quantum time would then be consistent with the two-dimensional gravity time of ms pulsars.
Quantum space and motion are inconsistent with GR space and motion
Empty space and motion in empty space are both infinitely divisible notions that deeply underlie much of mainstream science. But while quantum space and motion are both quantized, GR space and motion are both continuous, and it is clear that the notions of space and motion are simply fundamentally incompatible between QM and GR.
Many very smart people have worked very hard for nearly a century to make space and motion consistent between gravity and quantum, but to no avail. In fact, the notions of infinite divisibility for both space and motion have been problematic since the time of Zeno of Elea, the Greek philosopher of the fifth century BCE.
The continuum of sensation of objects that fills time contrasts with the void of sensation that we presume exists as space
Unlike the void of empty space, for which we have no sensation, time is filled with a continuum of waves of sensations. There are no empty voids of time since all of light, sound, touch, smell, and taste shine continuously onto us and our senses with a continuum of sensory information about objects and their backgrounds. Our sensation of object changes and time delays result in neural packets of aware matter from which consciousness extracts information useful for prediction of action.
It is from this continuum of sensation that our consciousness imagines objects and also ignores or renormalizes any background time delays. Even though there are no voids of sensation in time, our minds assign differences between object and background time delays to the lonely nothing of empty space. Space emerges to keep object sensations different from background sensations.
Objects that we sense have a different time delay from the backgrounds that we sense along with those objects. Our minds use space and motion to represent the difference in time delays as an absolute time or Cartesian distance that separates objects from other objects and their backgrounds. Space and motion, in this sense, simply emerge as whatever they need to be in order to properly represent the object changes and time delays of sensation, but space and motion do not exist in the same way that matter and time exist.
Therefore, the lonely nothing of empty space and motion within that space are notions that emerge from a more primitive reality of object changes and time delays. The nothing that we imagine as space and the motion of objects in that nothing of space are both simply very useful representations of consciousness. Notions of space and motion help consciousness keep track of objects and make predictions about the futures of those objects.
Wednesday, April 29, 2015
Cartesian Space and Time Emerge from Quantum Aether
Space is an infinitely divisible empty void that makes up most of the universe according to common understanding. In other words, space is nothing; we do not sense space, and it is only the something of objects that we do sense as part of our outer lives. We sense objects and their backgrounds with different time delays, and from the difference in those object time delays emerges the nothing of empty voids of time and space between those objects. Although discrete sensations of objects continuously bombard us, we all believe very fervently that a continuum of time and empty space exists as a container for objects. Even though we do not sense space, we are never without sensation of the objects of our outer lives. We sense objects from a continuum of discrete sensations at different time delays or perspectives, and from those discrete neural sensations emerges the singular nothing of space; we simply believe a singular space exists because that is the way we believe the universe of our outer life is.
Although the notions of space and motion are very useful for making sense out of our reality and predicting action, continuous space and motion do have their limitations, both at very large and very small scales. Space and motion are also very different between macroscopic general relativity and microscopic quantum action and that difference is a source of endless confusion. There is a more primitive reality of discrete matter and time delay from which continuous space and motion emerge and for very large and very small scales, this primitive reality closes the universe.
The figure below shows how the three orders of consciousness represent our perception of the universe. Our first order Cartesian reality represents objects outside of our mind, in our outer life, on trajectories in an otherwise empty void of space. This is how we view most of reality, and that first order consciousness has been very successful for life in general. Gravity is the main force acting in our outer lives, and the outer life is called objective, things in and of themselves, Descartes' body, or Kant's phenomenon; this outer life is what the inner life of our brains works with. We learn a Cartesian consciousness of an outer life even before we learn to speak, and with our inner life we simply come to believe the outer life just as it is.
The second order is a relational reality in which objects are made up of pieces and parts held together by all of gravity, charge, strong force, weak force, and even the very weak bonds of neural synapses. Our second order reality represents the world of ideas or subjects inside of our mind and of discrete sensation that never stops. We learn these more elaborate stories about how the world works mainly in school, but also on our own and from our parents and friends. This more precise view of reality helps us be part of the cooperative civilization that we now are part of. Although gravity force still determines much of action, it is the amplitude and phase of charge force that is really what holds microscopic matter together.
This relational order is what is called our subjective reality of our inner life where each person has a separate and unique experience with objects as ideas. A relational inner life is Descartes’ mind or Kant’s noumenon and is how reason works. With the reason of our inner lives, we can imagine many more possible futures in the superposition states of the aware matter of our minds. Our minds interact with other aware matter and form bonds and conflicts through sensation and the resultant cooperation and conflicts among people allows us to reach futures that other sentients cannot even imagine.
There is a further spectral order of consciousness, a level of consciousness that most people do not experience. With spectral order, a matter spectrum represents each object as amplitudes of matter, just like a time spectrum represents each object as a pulse of matter in time. The peaks in a matter spectrum are amplitudes with either plus or minus phase and represent all of the interactions or relations of all of the pieces of matter that make up that object's matter spectrum. Thus even the EEG spectrum of neural aware matter represents all of the bonds and conflicts of aware matter in the brain, now as a power spectrum of consciousness. There is also a great deal of phase information embedded in neural aware matter, but the typical EEG does not measure the phase information of neural aware matter.
Our reality is determined by a continuum of never ending discrete sensations and the actions of sensation always involve the norms or squares of amplitude. Although neural phase coherence does affect our reality, we mainly see those effects with light and electricity. We do not normally sense phase as distinct from the intensity of matter objects and their time delays. Thus spectral consciousness is a level of awareness that is beyond just the typical Cartesian and relational realities that we experience every day.
Although the notions of space and motion are extremely useful in many contexts, space and motion often confuse our notions of matter and time and that confusion has thus far precluded any unification of gravity and charge forces for mainstream science. In order to unify forces, science must first resolve the confusion of space and time by realizing the limits of space and motion. By setting aside the more intuitive conjugates of space and momentum that are such an integral part of our Cartesian and relational consciousness, science might then use our more primitive spectral reality.
In order to build a quantum reality, science must first recognize the limits of space and motion compared with the alternative conjugates of matter and time for the same quantum reality. The conjugates of matter and time nicely unite gravity and charge forces by aligning the concept of a two dimensional time between gravity and charge as a unified quantum force. The notions of space and motion then emerge from the actions of matter in time and we see space for what it is; a convenient white board for keeping track of objects and action.
In fact, it is ironic that we seem to be more certain about the existence of the absolute nothingness of space as an empty object than we are certain of any object that we actually do perceive. After all, it is certainly true that there is something that separates objects from each other in time. So it is quite natural to conclude that there are large amounts of the absolutely static nothing in between objects within the infinitely divisible void of static space.
Even when we sense an object, it is not always apparent whether or not that object actually exists. Our senses are bombarded with a continuum of light, sound, touch, smell, and taste and our minds use only a small fraction of that sensation to represent an object. The object could still be an illusion or it could be a mirror reflection or it could be a picture of an object or even a hologram projection of an object. And yet even though we do not see or sense the nothing that is space, we always sense the something of objects and we invariably conclude that the different time delays of objects means that an empty void exists between objects that we call space. However, there is never an absence of object sensation in the continuum of experience even while we never sense space.
Continuous time is then a primal belief that we have as part of the foundation for understanding the universe, and it appears that the empty void of an infinitely divisible space, as well as motion in space, both emerge from the actions of matter. That is, the infinitely divisible nothing in which we all fervently and intuitively believe really just emerges from a simpler primitive reality of just matter and time.
It should not be too surprising that the three dimensions of Cartesian space and motion emerge from a simpler primitive spectral reality. After all, a belief in space as an infinitely divisible void of nothing is a kind of oxymoron. To believe in the existence of an object like a tree is one thing; but to believe in the existence of the nothing of empty space is quite another thing…literally a belief in nothing as something. We sense objects at different time delays or perspectives and suppose therefore that space exists as a nothing that separates those objects. But what then fills space? There was a persistent belief up until the last century that an aether filled space, and so gravity and charge forces were transmitted by means of that aether.
However, once mainstream science became comfortable with the magic of action at a distance for the force fields of gravity and charge in an absolute vacuum, the possibility of the aether of Newton faded into the uncertainty of time.
So why do we continue to believe so fervently in something that is really nothing at all? Space and motion emerge in proportion to time and matter to order our reality, and we effortlessly sense the motion of objects and actions through the empty spaces outside of our minds. This is the Cartesian reality of the Figure. We imagine those objects on various time trajectories in this object of space even though we never sense the space between objects. Rather, we sense objects and their motion at different time delays, and from those different perspectives emerges the empty void of space that separates objects from each other. Empty space then seems to provide a way for those objects to move about.
In fact, it is time delay and matter change and action that separates objects, not really space. In other words, space and motion emerge from a continuum of matter and action that fills all time and it is rather the conjugates of time and matter that are the true axioms of a primitive quantum action. When we imagine action, we first begin with empty space and then imagine an object moving in that empty space and so time simply becomes equivalent to motion in space.
If instead we first imagine time delay as a primal dimension, object matter changes by exchange of matter with us and other objects in order to bond and conflict in a never ending continuum of sensation that involves exchanges of matter. Our minds extract certain changes in the matter of objects over time as action from which emerges our notion of object motion through Cartesian space. Just like science often uses time as a distance in measuring the cosmos, we also use time delay in many common descriptions of distances on earth.
And yet we continue to believe very fervently in the empty void of a continuum of space that defines the time delays of our journeys in life. If time is a primal dimension that truly separates objects, then it certainly also seems reasonable to suppose that Cartesian space and motion simply emerge from time delays and matter change. All of the spatial dimensions of forwards and backwards, left and right, up and down, seem so intuitive that we forget how complex and difficult it was as children to learn a Cartesian consciousness.
We fully realize that as children we learn to speak and understand language, a likewise difficult and complex skill, but we do not seem to realize that we must first learn about objects and motion well before language would even make sense. We and other objects move so effortlessly through the emptiness of space that existence seems impossible without both an empty and continuous space and time and mainstream science calls its paradigm spacetime for this very reason.
There does not seem to be any science or any Western philosophy that supposes space emerges from the changes of objects in time embedded within a continuum of sensation over time. There is, however, much Eastern thought that teaches about the illusion of reality and it does turn out that our notions of Cartesian space do end up distorting and therefore limiting our understanding of the true primitive natures of the axioms of time and matter.
Although we think of time as a continuous single dimension with a past, present, and future, this makes time just another dimension of Cartesian space. Instead of time being a distance that is always connected to a determined future, there is also a second time dimension. In fact, the past is not really a part of time; the past is only the fossil memories and objects that we use to predict the future.
Eastern philosophy does reveal the illusion of our sensory reality and Hindu Vedic beliefs emphasize the illusion of reality, the Maya. It is only with a lifetime of ritualistic meditation that one can ever hope to understand this illusion. Buddhism likewise teaches that sensation misleads us about reality and it is only by a highly prescribed ritual meditation that we can hope to understand the illusion of reality. It is only by quieting the maelstrom of the aware matter of our mind that we lose self and thereby achieve a better understanding of the world. However, we can never really step out of the continuum of sensation over time since we are embedded into the universe.
A much more straightforward explanation for these intuitive notions of an illusory reality is that Cartesian space and object motion through space emerges from a simpler reality. Space and motion emerge from the time delays and exchanges of matter among objects, which is the action of matter time that is our primitive reality. The neural packets of aware matter that make up conscious thought come from the mimes of sensation. Mimes are the brain matter structures that mime or replicate the sensation of an object and then allow us to make sense out of sensation. The irony of reality is that our consciousness is really also just matter changes in time and so in a very real sense, space and consciousness both emerge from the primitive characteristics of time delay and matter in our brain.
A finite line in Cartesian space nevertheless has an infinity of points and we associate similar infinities of points with all space and time. On a line, there is a current position as a point as well as preceding and following points and time then emerges as a similar line that has a present that connects past and future. In contrast, a Cartesian line that emerges from time delay is not infinitely divisible but instead is made up of moments since time delays are moments. A series of moments would be a memory of the past, but there is no action to replay this memory and the past is not therefore part of time’s dimension. We imagine a set of future moments as possibilities and so the present is a moment of memory and action while the past is only memory stored as brain matter, a fossil of the past. There is not just one determined future since the present moment is only one of many possible futures, but our sensations represent a continuum of discrete moments of time.
Neither a straight Cartesian line nor even a single connected line emerges from time delays. We can predict the future perfectly well with only the time delays and changes of matter in time, which is action and we do not really need the a priori notion of motion in Cartesian space. However, Cartesian space and motion are still extremely useful and only misleading for predicting action at very large or very small scales.
So a mathematical representation of a quantum reality can predict action equally well with the conjugates of space and motion or with matter changes and time. In fact, our minds fill in most of what we perceive as motion in Cartesian space from just a few sensations that we extract from the continuum over time, and that is the reason a quantum reality without space and motion is so difficult to imagine. The very powerful Cartesian notion that evolution has given our minds simplifies the complex time-ordered continuum of sensations of matter changes for objects in time that our minds process. The mimes of sensation then result in our feelings about objects in our primitive minds, and those feelings result in both conscious and unconscious actions.
It is important that there are two dimensions for time and not just one: a moment of atomic action and the decay of those moments as memory, or intrinsic decay. What we think of as the past is just a memory of action as experience and not a dimension of time; time is not just a memory, and yet our past is only such a decaying fossil memory of action. Time is always both a decay and an action, and since we cannot journey into a past memory, it does not make any sense to journey to a past event.
Unlike a return journey in Cartesian space, the past is merely a fossil memory of actions, nothing more and nothing less, even though memory is an intrinsic part of time along with action. As we approach an object, the time distance we journey is the memory of our stride or the turns of a wheel or the clicks of an odometer as well as the action of our stride, wheel turns, or odometer clicks.
Matter changes are a part of what time is, and those matter changes can be our own memory or they can be the hands of a clock or the sand of an hourglass or the geological layers of sedimentary rock or the spin of the earth or the pulsar timekeepers of the cosmos. The memory of time can also be in the calendar of the year, in the relics of civilizations, or in the fossil record of life. The matter changes that we call the past are different from the action and memory that we call the present, and that is different from the superposition of possible futures, and so time is not a linear dimension as past, present, and future.
What we call past and present are both simply a part of the time dimension as memories of events, either our own memories or the fossil record of action of a clock or calendar. What we call the present is then the two dimensions of decay and action, which is what time actually is. What we call the future are the many possibilities of action that we imagine, and there is no determinate future.
A principle in science known as the microscopic reversibility of time seems to show that time is reversible. At a microscopic scale, the scientist/philosopher Poincaré supposed that the collisions of atoms or subatomic particles in space are completely symmetric in space and time and therefore completely reversible. In fact, Poincaré showed mathematically that there is a finite probability that any configuration of particles will return arbitrarily close to itself over time.
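For reference, Poincaré's recurrence theorem in its standard form (an editorial addition, stated loosely): in a volume-preserving dynamics confined to a bounded region of phase space, almost every state returns arbitrarily close to itself. Formally, for a measure-preserving map T and any set A of positive measure,

    $$\mu\{x \in A : T^n x \in A \ \text{for infinitely many}\ n\} = \mu(A)$$

so recurrence is a statement about returning arbitrarily close, not about exact repetition.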
In the quantum atomic world, there is also a strong principle of time reversal symmetry, but that is simply a characteristic of one time dimension, atomic time, and the principle does not consider the universe decay time as a second time dimension. Once science recognizes that we live in a universe with a second time dimension as matter decay, matter decay introduces a very slight asymmetry in time as well as determining the nature of all force. Thus, even at the microscopic level, matter exchange among objects and therefore also matter action has the well-defined time arrow of matter decay.
Even at the subatomic scale, time is only a memory of action even as Cartesian distance emerges from the time between sensations of objects. The emergence of Cartesian space simplifies the complexity of the continuum of time-ordered sensation and helps us do what we really need to do…predict action. What we really need to do with sensation is predict what is likely out of all the possible futures to where we might journey by our chosen actions. In fact, consciousness itself is really just another representation of time since consciousness is our memory of the actions that are the neural impulses of our minds’ aware matter.
We imagine futures with our minds and then select a desirable future based on the singular feeling of our primitive minds and choose actions to journey to those futures. We never actually reach the exact future that we imagine, both because of the imprecision and uncertainty of action but also because of the imprecision and uncertainty of feeling. During a journey, our feelings evolve, others’ feelings evolve, and the world around us evolves. By the time we reach the future that we desired, the world has changed and along with it, our feelings and the future we imagined have also changed.
The mathematics of science called quantum mechanics can predict action with just the representation of matter, time, and phase. Quantum mechanics and its wavefunctions only depend on a conjugate pair of operators and those operators do not have to be the typical choices of Cartesian space and momentum. In fact, avoiding the empty void of space resolves many quantum conundrums, and that includes the conundrum of quantum gravity.
Coherent quantum states can persist across the time of the universe and coherence is a common feature of quantum action that results in something known as entanglement. But Cartesian space and motion do not permit the coherence of two events across the universe since coherence seems to imply coincidence and instantaneous action by the strictures of relativity.
Our intuition demands that, with increasing separation of an empty void of space between objects, objects become increasingly independent of each other. All effects by this logic must have local causes by local objects and therefore causes and effects are always limited by the speed of light in space. But quantum coherency seems to violate a local causal principle since quantum states can be instantaneously coherent across the universe. Yet this quantum coherence of states is always tied to a single common source and therefore a single local cause. Therefore the fact of coherence with a source is indeed limited by the speed of light in space.
It is the emergence of Cartesian space from matter time that intrudes into our interpretation of motion through space as time and velocity. What about the speed of light? The speed of light actually emerges from the decay rate of the universe matter in this epoch and the radius of the hydrogen atom. We project a gaekron action of time as the void of Cartesian space and the speed of light in this epoch then emerges from the three constants of matter time.
Universal matter, matter exchange, and decay are the sources of all time and gaekron decays more or less uniformly throughout the universe. So gaekron matter and decay together also define space while the objects of observable matter are gaekron matter condensates that are only a portion of the basic gaekron matter of the universe.
The presence of coherent matter across the universe is not just an anomaly of microscopic quantum mechanics; in matter time, coherency and interference are causal features of all reality. Every time we observe an object, what we sense is still just one of many possible futures for that object. From a whole series of sensations, we deduce the reality of that object and are then usually very good at predicting the future of an object's action. Once we sense an object reality, the other possible futures decay away very quickly, even if those possibilities existed on the other side of the universe.
However, we are not always correct about the reality of an object and we can be mistaken. But our very survival often depends on how well we predict object action, so that survival naturally favors a consciousness that better predicts action. Now those objects can be inanimate like cars and houses or they can be people or animals or they can be galaxies or galaxy clusters.
The existence of coherent states across the universe is linked to a coherence of matter amplitude phase, not matter intensity or proper time. In other words, coherent matter amplitudes can evolve as two or more different possibilities from the same precursor source event. The time distances as well as the matter amplitudes between those two possibilities differ only in phase coherence and as long as there is phase coherence, the fates are linked. Normally we do not think of phase as a causal agent, but there are any number of phase effects that exceed the speed of light, so-called superluminality.
Phase coherence can occur over what emerges as a very great Cartesian distance, but those coherent states are linked by the same time distance from a common precursor source event. Thus the time distance to the precursor event necessarily limits to the speed of light any communication of phase by either observer. Even though we imagine that a particle observed on path A instantaneously precludes its observation on path B, that is only one of many possible futures for that particle.
Observer A cannot know of any other possibilities without more knowledge, and that extra knowledge is necessarily limited by the time action of light from the source. If an observer sees a particle on path A, it is reasonable to assume that that particle was always on a journey from the event along path A. But it is equally likely that another observer on path B will see the same particle, and if that event occurs, the particle will not then appear on path A.
What gives? Which path was the particle on? How can a single particle seem to be simultaneously on both Cartesian paths A and B? Furthermore, observation of the particle on path A seems to instantaneously preclude its observation on path B. How can this cause be instantaneous? This piecemeal reality appears to spread the possibilities of a particle over the wide expanse of the cosmos.
Instead of the speed of light in space, time action is limited by the matter decay rate of the universe. Since all force is due to the exchange and decay of gaekron, all action in the universe is in some sense always coherent with all other action and always limited by that universe decay rate. The appearance of an object simply means that there is constructive interference of gaekron in time while the absence of an object in time means that there is destructive interference of matter, which is what we call space and is the absence of matter in the time between the objects. The absence of objects is due to destructive interference and simply represents dephasing of gaekron matter.
However, gaekron matter does not fill space, but rather space emerges as a convenient and simple representation in our minds for both gaekron matter and its changes in objects over time. Space and motion emerge from the actual complexity of sensation and action in time. The time between sensations is what separates objects and an object matter spectrum shows its relations with all other objects and so the matter spectrum is a complementary representation of an object in matter time.
It is obvious that most of the universe is made up of empty space and that most of an object is also made up of empty space since there is space between atoms of any solid object and there is even more space between electrons and nuclei and then even more space between quarks in the nucleon. But, once again, the Cartesian space within an object emerges from the changes in its matter spectrum over time. One might also say that all of objects and the universe are just different peaks in a gaekron matter spectrum, but that statement would not be very useful either.
The objects of matter exist as gaekron in various time and phase amplitudes according to quantum mechanics. More than one possible realization of an object in very different Cartesian locations may emerge from its matter spectrum. All of these possible futures for an object in time do exist with very different phases and while it seems to our Cartesian logic that action has only local causes, it is rather the case that quantum logic determines causality as the evolution of a matter spectrum.
We imagine ourselves in a frame of reference at rest and further imagine light from a source traveling away from us at the speed of light. If instead we imagine that light source creating stationary photons and moving away, it would rather be us and our comoving frame traveling away from the particular photons that we have emitted given the collapse of that world line.
Certainly it is much simpler to imagine with our Cartesian logic that incoherent photons emit and move in all directions away from a stationary source. But the universe collapses in all directions and from all points into itself and it is the rate of that collapse that determines all force.
Phase is a dimension of matter time that is very common for light but not otherwise explicitly incorporated into the everyday reality of other objects. We are made up of matter that has amplitude as well as phase, but sensation is the result of the norm of matter waves and does not include phase. Similar to polarized light, the polarization of matter can contribute to a confusion of causation, but only in very controlled experiments. Polarizing a single light photon along one axis at 0° means that that photon will not pass through an analyzer oriented at 90°, and these two devices together will not transmit the polarized photon. However, inserting a third polarizer at 45° in between the polarizer and analyzer allows that single photon to pass half the time at the 45° element, and therefore a quarter of the time overall, because the 45° polarizer creates two possible polarization states from that one polarized photon.
Thus even though we imagine a single polarized photon along one axis, a single photon always exists in a superposition of two polarization states. A linearly polarized photon is really a superposition of right and left circular polarizations, even while a right circularly polarized photon is a superposition of linear polarizations phase shifted by ¼ of a wavelength. In fact, a single photon in general has an elliptical polarization, because the two possible polarizations can be related to each other by an arbitrary phase angle.
The third polarizer inserted at 45° distributes that single photon polarization between the two orthogonal Cartesian directions, not just one. The phase coherence of a single photon between two Cartesian axes is straightforward to calculate, but difficult to imagine. We want a photon to be polarized in only one way, but then we find out that that one photon always exists as a superposition of two circularly polarized states at different phase angles, one of which we observe as a linear polarization.
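As a check on those numbers, here is a minimal worked calculation using the standard per-stage projection probabilities (Malus's law) of quantum optics; it is an editorial illustration, not taken from the original post:

    $$P(0^\circ \to 90^\circ) = \cos^2 90^\circ = 0$$
    $$P(0^\circ \to 45^\circ \to 90^\circ) = \cos^2 45^\circ \cdot \cos^2 45^\circ = \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{4}$$

Each polarizer projects the photon state onto its own axis, so the intermediate 45° element re-opens a path to the 90° analyzer that the direct 0°-to-90° projection forbids.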
Ancient people drew pictures of the realities they saw and those pictures seem to us rather flat images with odd perspectives. Classic Egyptian art, for example, shows people and animals without perspective and with profiles that are not what our cameras of today project. Ancient pictures showed a great variety of object projections onto flat images until the realism of painted images and camera photography in the renaissance. We take for granted the camera-like projections of objects onto flat surfaces, but those projections are actually not what we sense. The imagery of our art tracks the evolution of our civilization and of consciousness itself.
Surrealist and impressionist artists have shown over the last one hundred years or so how we can perceive objects in many ways that contrast with a camera image. Artists often produce images that are manifestations of a projective Cartesian reality. In fact, such art often shows a combination of the two different representations for reality, Cartesian and relational, and we use both of these representations to predict action. Whether we project an object as a Cartesian camera flat image or we project the relations between objects onto a flat image as a relational representation, both projections represent objects for us.
A relational camera would take a very different snapshot of reality. Instead of recording the light intensity projected as an image on a flat surface, a relational spectrum would record the interaction or relational intensities among the objects of a scene onto the same surface. A relational spectrum shows interactions and therefore also shows the many possible futures of objects in a scene, as opposed to their static Cartesian projections of that captured moment. That is, the strength of all of the charge and gravitational bonds would mean that matter objects would look like x-ray images, but with gravitational bonding at roughly 1e39 times less intensity than charge bonding.
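That 1e39 figure matches the textbook ratio of electric to gravitational attraction between an electron and a proton; a quick check with standard constants (editorial illustration):

    $$\frac{F_{\text{charge}}}{F_{\text{gravity}}} = \frac{e^2/4\pi\varepsilon_0}{G\,m_e\,m_p} \approx \frac{2.31\times 10^{-28}}{(6.67\times 10^{-11})(9.11\times 10^{-31})(1.67\times 10^{-27})} \approx 2.3\times 10^{39}$$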
Cartesian projections tend to be image frames that capture a moment of a time-like representation of a scene and so that is why our projection of space is time-like. Relational spectra, on the other hand, tend to be matter-like and action-like and capture the matter relations among objects. A relational spectrum shows the way an object interacts with other objects at a moment, but does not capture the Cartesian distances among objects very well.
If two people have a relationship, that relationship is a bond that represents a peak in each of their relational spectra just as the gravity that bonds each of them to earth as well as all of the charge bonds are also peaks. Just as charge bonds the charges of atoms, molecules, proteins, and lipids of their body’s cells together, the neural bonds of consciousness hold their realities together; their relationships with all the objects around them are also peaks in their relational spectra.
We tell word stories about the relationships that we have with each other and with other objects, and these word stories are more like a relational spectrum than just a Cartesian image. As opposed to a photograph of a moment, a word story describes the relational spectrum that complements that moment of a Cartesian representation of object time relationships.
Sunday, April 26, 2015
Deflection of Starlight by the Sun
The first verification of Einstein’s relativity came with the observation by Eddington during an eclipse in 1919 of starlight deflection passing close to the sun. Einstein had predicted in 1915 that the sun's gravity would deflect star light, but it actually took many more years to really put this issue to bed. This is because there are two separate but equal terms for that deflection and it even took Einstein time to realize that this was so.
The first term is due to the mass-energy equivalence (MEE) of photon energy and is really just the Newtonian deviation, under classical gravity, of an object trajectory passing close to a massive body, as shown in Fig. 1; it is based on just gravity and the mass-equivalent momentum of light. In other words, there is both a classical Newtonian deflection of star light and relativity's deflection of light by gravity. The real question is why relativity predicts twice the deflection predicted from Newton's gravity alone, even when including the mass-equivalent energy of light.
And of course, since the energy of a photon is equivalent to a rest mass, this is the Newton deviation for a photon particle as mass or momentum as well. The deflections in Eqs. 1 and 2 are all in radians, where 2π radians equals 360°.
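Eqs. 1 and 2 are not reproduced in this text; a hedged reconstruction of the standard Newtonian (MEE-only) result they presumably express, for a light ray with impact parameter b, is

    $$\delta_{\text{Newton}} = \frac{2GM_\odot}{c^2 b}, \qquad \delta_{\text{Newton}}(b = R_\odot) \approx 4.24\times 10^{-6}\ \text{rad} \approx 0.875''$$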
For relativity, though, there is an additional deflection due to the gravity time delay and spatial distortion and a progressive gravitational redshift of light. That is, the deflection of light due to an extra gravity time delay exactly doubles the deflection due to gravity as MEE, Eq. 3.
The fact that these two effects, gravity MEE and time delay, are equivalent but distinct was not immediately apparent to Einstein and others in 1915, but eventually Einstein recognized that his relativistic deflection was indeed twice the Newton gravity deflection for light in a vacuum. So the total deflection is the sum of both Newton and redshift contributions as Eq. 4:
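Again reconstructing rather than quoting, the time-delay term of Eq. 3 equals the Newtonian term, so the Eq. 4 total at the solar limb is the familiar Einstein value:

    $$\delta_{\text{total}} = \delta_{\text{Newton}} + \delta_{\text{delay}} = \frac{2GM_\odot}{c^2 b} + \frac{2GM_\odot}{c^2 b} = \frac{4GM_\odot}{c^2 b} \approx 1.75''\ \text{at}\ b = R_\odot$$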
The original Eddington results from 1919 showed a deflection of the starlight by the sun, but those results had a fairly large uncertainty as shown in Fig. 2 and so really did not validate Einstein's Eq. 4 over Newton's Eq. 2. Since then, many different kinds of measurements have indeed verified the extra gravity deflection of light predicted by Einstein. Figure 2 shows starlight deflection data from the 1976 eclipse along with the Einstein and Newton predictions along with the range of data from Eddington in 1919. Although there is substantial scatter in the measured deflections, this paper confirmed Einstein’s prediction over Newton's with a 95% CI.
The much more precise time delays of quasar sources across the sky, measured by VLBI radio telescopes using delays between stations separated by the width of the earth (~6,000 miles), derive the same light deflection for these radio-wave quasars. A more generic expression that is valid for objects across the entire sky is Eq. 5,
where the angle, theta, is the elongation angle between the sun and the source and g = 1 for GR and g = 0 for Newton.
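Eq. 5 itself is not shown; assuming the usual parametrized post-Newtonian convention (with the post's g playing the role of γ), the standard form is

    $$\delta(\theta) = \frac{1+\gamma}{2}\,\frac{2GM_\odot}{c^2 d}\,\cot\frac{\theta}{2}$$

where d is the Earth-Sun distance. At grazing incidence this reduces to the Eq. 4 value for γ = 1 and to half of it for γ = 0.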
Travel through a gravity gradient in effect delays both photons of light as well as bodies of matter and from the precise measurement of that time delay emerges the deflection of light in space. The measurement of starlight deflections during a 1976 solar eclipse shows a dataset that is consistent with Einstein gamma = 1 and not just Newton gamma = 0. However, the scatter in the starlight deflection data in Fig. 2 shows how difficult this measurement really is.
Figure 2 also shows three of the five much more precise VLBI results reported in a 2015 paper for a series of VLBI measurements of quasar time delays from 1991-2001, which also followed the expectations of Einstein's relativity and gamma = 1. Unlike the measurements that depend on an eclipse, measured VLBI time delays occur throughout the year and over ten years, and showed circular paths for each of five different quasar radio source deflections. One example is the blazar 1606+106 deflections in Figs. 3 and 4.
This data revealed very precise measurements of the deflection of quasar radio signals over the course of ten years for quasars that were located at the minimum angle 30.9° from the sun, a much greater elongation angle than any previous report. Once again, these datasets support the deflection predicted by Einstein’s relativity and gamma = 1 over that of the mass-energy equivalence of light and gamma = 0.
Instead of measuring starlight deflection only during an eclipse, the VLBI measures radio source deflection over an entire year for all of ten years. Each quasar radio source reveals a circular pattern that shows the same deflection observed with the eclipse datasets. Figures 3 and 4 show the deflections of blazar 1606+106, which is located 31° (about 124 solar radii) from the ecliptic; that closest elongation is where its deflection reaches its maximum.
The much more precise VLBI data is also consistent with the nature of relativity to an extent that seems quite convincing. However, there are still other explanations besides Einstein's relativity for the deflections of starlight and radio sources that are fully consistent with these measurements. These results all derive from approximations that use only the leading terms of various series expansions to simplify the complex tensor algebra of the relativistic equations. As a result, these same approximations are actually valid for any number of alternative scenarios, as long as they all incorporate the same basic principle of mass-energy equivalence (MEE), i.e., E = mc².
There are some big flaws in Einstein’s general relativity, but starlight deflection by gravity is not one of them. In fact, far from validating GR, starlight deflection is consistent with any number of other theories as long as those theories incorporate gravity MEE and therefore time delay. For example, MEE is a founding principle of discrete aether and so star light deflection by the sun is not so much of a verification of GR as it is of gravity MEE and time delay.
A spherical gradient index lens, for example, deflects starlight in the same way as a gravity body like the sun. For a gravity lens, the starlight first redshifts on its approach and then blueshifts as it leaves the gravity field, deflected as shown in Figs. 1 and 5. Similarly, for a gradient index lens, the starlight redshifts and delays as it travels into the index gradient and then blueshifts as it leaves the index gradient. Similarly, a body of mass accelerates and gains energy upon entering a gravity gradient, then decelerates and loses energy, and is also delayed upon traveling the same gravity gradient, but with only one-half as much delay as the starlight.
The dielectric effect delays light that travels through a dielectric medium, since light slows down in a medium with an index of refraction greater than vacuum. In a fully consistent manner, a gravity field slows light and therefore amounts to the same kind of index gradient, which is an alternative explanation to relativity. Whereas Einstein supposed a distorted 4D spacetime where light follows geodesic paths (shown in Fig. 5), light does not change velocity along that spacetime geodesic. Instead, a gravity field dilates time and space by the same Lorentz factor, which maintains a constant speed of light in the moving frame.
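The equivalent-medium picture can be made quantitative. To first order, a gravitational field acts on light like a radial index profile (a standard result of the optical analogy, added editorially):

    $$n(r) \approx 1 + \frac{2GM}{c^2 r}$$

Ray-tracing through this profile reproduces the full Einstein deflection of Eq. 4, because space curvature and time dilation each contribute a factor of GM/c²r to the effective index.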
In the moving frame, there is no change in the speed of light because both distance and time are dilated by the same MEE factor and so in GR, there is no way for the traveler to know about their motion without communication with the rest frame. However, in the rest frame, the deflection and delay of light in the moving frame is very apparent. The apparent speed of light for the photon does in fact slow down since there is a time delay just as there is a time delay for the matter body as well.
A positive gradient quadrupole gravity wave, shown in Fig. 5, is due to the exchange of image dipoles with the photon dipole, and this is a dielectric effect. In effect, the photon travels through the gravity quadrupole field, and that same quadrupole gravity field exchanges dipole pairs between the two matter objects. It is the exchange of quadrupole photons that results in an increase in that object's inertial mass and velocity.
A quadrupole gravity exchange with a photon of light results in an apparent red shift or mass loss, followed by blue shift and mass gain, and an overall photon delay, even while the same quadrupole gravity exchange with a matter object first increases and then decreases the object's mass and ends up with only one half of the time delay that a photon experiences.
A photon in a dielectric gradient generates an image dipole that results in an attractive force and a red shift of its frequency. In effect, the quadrupole gravity field derives from photon emissions of matter particles that end up folding back onto the particles with the folding time of the universe. The time quadrupole operator, Fig. 6, is the basic scaling for gravity force from dipole time operator of charge force.
Quadrupole photon gravity is a quantum gravity and is a part of matter time, where all force derives from the same fundamental decay of the universe. Photon delay in a gravity field is twice the delay of an MEE matter object due to the fact that a photon undergoes an additional dielectric delay that is equivalent to its MEE delay.
It is from these time delays that our notion of space emerges from the action of matter. Therefore, the fundamental flaw in Einstein's GR is that a deterministic geodesic path like Fig. 5 exists. Although this is an excellent approximation, that path in a quantum gravity actually emerges from the exchange of biphotons. Similar to the exchange of virtual photon dipoles that represents the basic nature of quantum charge, it is the exchange of virtual biphoton quadrupoles that represents the basic nature of quantum gravity.
The factor of two for relativity's delay of light is actually the same factor of two that shows up in the gyromagnetic precession frequency between relativistic and classical frequencies of rotating charge. This means that the g-factor that relates quantum to classical charge is the same g-factor that relates quantum and classical gravity, finally resolving the discrepancies between gravity and charge.
Eddington, Arthur Stanley (1919). “The Total Eclipse of 1919 May 29 and the Influence of Gravitation on Light.” The Observatory 42, 119-122.
Sunday, February 15, 2015
Aethertime Cosmology
Instead of a big bang, the discrete matter and action universe decoheres from its precursor antiverse expansion, and the decoherence rate is what drives both charge and gravity forces in the shrinking or collapsing epoch of decoherence. The current decoherence rate is 0.255 ppb/yr, which corresponds to about 9.6% per Byr of matter decay and force growth, and means that the current universe has only about 81% of the mass it had when decoherence began at creation, while the speed of light at creation was zero. The ratio of the time size of the universe to the time size of the hydrogen atom represents the ratio of charge to gravity forces, and force also evolves along with universe decoherence.
Instead of the Hubble constant deriving universe expansion from galaxy red shifts, the red shifts of the Hubble constant just define the size of the universe given the speed of light in this epoch. Equivalently, Hubble is just the product of the current rate of universe decoherence and the current speed of light, H = α̇c. The aethertime Hubble constant is then purely a classical constant and simply depends on constants that are the ratio of gravity and charge forces, H = m_H² G / (q² r_B × 1e-7). This means that the size of the universe scales from the size of the hydrogen atom and the ratio of gravity and charge forces.
And what do you know... the universe is shrinking... a slowly dying universe was reported at 50% over 2 Byr. The paper Galaxy and Mass Low z shows a decay over three epochs, {2.25, 1.50, 0.75} Byrs, as {2.5, 2.25, 1.5} e35 W/Mpc3 at h70, while the current universe is about 0.32e35 W/Mpc3, which is the Virgo cluster luminosity over its 0.11 Blyr time size.
So the very latest decoherence would show the accelerating collapse of 6.3e35 W/Mpc3/Byr, not just 0.63e35, which is 50% over 2 Byrs. The dephasing of discrete aether shows this decoherence is actually due to universe shrinkage and not expansion, but the time delays are not the same between expanding and shrinking universes. It is fun to suppose that this measure of universe decay is consistent with an aether decoherence that drives all force. The universe actually decoheres at -9.6%/Byr, but the universe decoherence presumes a constant c, which doubles the apparent matter decay to -19%/Byr. |
c124bd1fde618dcc | A computational approach for the analytical solving of partial differential equations. A strategy for the analytical solving of partial differential equations and a first implementation of it as the PDE tools software package of commands, using the Maple V R.3 symbolic computing system, are presented. This implementation includes a PDE-solver, a command for changing variables and some other related tool-commands.
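For flavor, here is a minimal sketch of the kind of session such a package supports, written against the PDEtools commands pdsolve, declare and dchange as they exist in later Maple releases; the particular equation and variable names are illustrative only, not taken from the paper:

    with(PDEtools):
    declare(u(x,t)):                              # compact display of u(x,t) and its derivatives
    pde := diff(u(x,t),t) = k*diff(u(x,t),x,x):   # heat equation as a test case
    sol := pdsolve(pde);                          # the PDE-solver returns a general solution
    # the change-of-variables tool: pass to travelling-wave coordinates xi = x - c*tau
    tr := {x = xi + c*tau, t = tau}:
    pde_tw := dchange(tr, pde, [xi, tau]);

Here pdsolve plays the role of the PDE-solver described above and dchange that of the command for changing variables.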
References in zbMATH (referenced in 30 articles)
Showing results 1 to 20 of 30.
Sorted by year (citations)
1. Cancès, Eric (ed.); Friesecke, Gero (ed.); Helgaker, Trygve Ulf (ed.); Lin, Lin (ed.): Mathematical methods in quantum chemistry. Abstracts from the workshop held March 18--24, 2018 (2018)
2. Lange-Hegermann, Markus: The differential counting polynomial (2018)
3. Yavuz, Mehmet; Yaşkıran, Burcu: Conformable derivative operator in modelling neuronal dynamics (2018)
4. Berlyand, Leonid (ed.); Fuhrmann, Jan (ed.); Marciniak-Czochra, Anna (ed.); Surulescu, Christina (ed.): Mini-workshop: PDE models of motility and invasion in active biosystems. Abstracts from the mini-workshop held October 22--28, 2017 (2017)
5. Lange-Hegermann, Markus: The differential dimension polynomial for characterizable differential ideals (2017)
6. Paliathanasis, Andronikos; Tsamparlis, Michael: The reduction of the Laplace equation in certain Riemannian spaces and the resulting type II hidden symmetries (2014)
7. Tsamparlis, Michael; Paliathanasis, Andronikos: Type II hidden symmetries for the homogeneous heat equation in some general classes of Riemannian spaces (2013)
8. Vu, K. T.; Jefferson, G. F.; Carminati, J.: Finding higher symmetries of differential equations using the MAPLE package DESOLVII (2012)
9. Poole, Douglas; Hereman, Willy: Symbolic computation of conservation laws for nonlinear partial differential equations in multiple space dimensions (2011)
11. Bîlă, Nicoleta; Niesen, Jitse: A new class of symmetry reductions for parameter identification problems (2009)
12. Gouveia, Paulo D. F.; Torres, Delfim F. M.: Computing ODE symmetries as abnormal variational symmetries (2009)
13. Calapso, Maria Teresa; Udrişte, Constantin: Isothermic surfaces as solutions of Calapso PDE (2008)
14. Liang, Songxin; Jeffrey, David J.: Automatic computation of the travelling wave solutions to nonlinear PDEs (2008)
15. Kaçar, Ahmet; Terzioğlu, Ömer: Symbolic computation of the potential in a nonlinear Schrödinger equation (2007)
16. Kadamani, S.; Snider, A. D.: USFKAD: an expert system for partial differential equations (2007)
17. Gouveia, Paulo D. F.; Torres, Delfim F. M.; Rocha, Eugénio A. M.: Symbolic computation of variational symmetries in optimal control (2006)
18. Rodionov, Alexei: Explicit solution for Lamé and other PDE systems (2006)
19. Zeng, Xin; Zeng, Jing: Symbolic computation and new families of exact solutions to the ((2 + 1))-dimensional dispersive long-wave equations (2006)
20. Baldwin, D.; Göktaş, Ü.; Hereman, W.; Hong, L.; Martino, R. S.; Miller, J. C.: Symbolic computation of exact solutions expressible in hyperbolic and elliptic functions for nonlinear PDEs (2004)
0779f6ba51fbbec6 |
General Epistemology Chapter IV-7
Causes of the laws of physics
Are the laws of physics entirely arbitrary, or to what extent can we deduce them from the theory of the logical self-generation process? Said otherwise, does this theory put constraints on the possible laws of physics? And if so, how far does this allow us to deduce the laws of physics?
Conservation laws
The main laws of physics are absolute conservation laws: of energy, mass, charge, motion, etc. But also, the constants of physics and the laws of physics never change. This would be a direct logical consequence of rule 5 seen in chapter III-3: in a system of logical implication, once the founding paradoxes are solved, each logical implication is strictly determined. Clearly, once created in a first implication, the laws of physics reproduce identically at each of the following implications, ad infinitum. And this obviously implies that the constants of physics do not change. This is obviously also true for the quantities they handle. For instance, the electric charge of a system remains constant, and the creation or destruction of a charge is automatically accompanied by the creation or destruction of an opposite charge. These constraints have an infinite force, as any logical constraint, to the extent that a charge and an anticharge can be created or annihilated only in the same quantum event.
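As a concrete illustration of that last point (a standard textbook example, added editorially, not specific to the self-generation theory), electron-positron pair creation and annihilation conserve charge exactly within a single quantum event:

    $$\gamma + \gamma \to e^- + e^+ \qquad \big(0 \to (-1) + (+1) = 0\big)$$
    $$e^- + e^+ \to \gamma + \gamma \qquad \big((-1) + (+1) = 0 \to 0\big)$$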
We could wonder, after chapter IV-5, why the nib has this «form»?
It is because, if it had no such relativistic properties, there would be no gravity. Gravitation is not a «field» like the electric field, but deformation of space. Deformation too small to be visible to our eyes, but enough to attract our bodies toward the ground. Relativity and Gravitation are linked. Without relativity, there would be no gravity.
We can wonder what would happen in a world without gravity. If our universe had no gravity, but all things otherwise equal, it would by now be filled uniformly with a gas of hydrogen and helium at very low pressure and cold temperature. No sun and no star to light up this dizzying vacuum, or to create the other elements required for forming planets and allowing for the emergence of life. Instead, an eternal billiards of neutral atoms, colliding indefinitely without ever organizing together. Such worlds are logically possible, so they probably exist, at least as logical objects. But it is clear that no body allows one to incarnate into them.
So we can say that the law of gravitation is anthropic (chapter IV-6). Gravity being a consequence of Relativity (especially of the Minkowski space), the latter is thus also essential for the emergence of life. Relativity is also anthropic. Non-relativistic universes, such as the series in sets of trinomials, really have three dimensions much like our own. However they are not Minkowski spaces, so they are not relativistic. And so, without gravitation, they do not allow for the emergence of life.
We can still assume that there are other solutions than gravitation to bring matter to clump together, interact and evolve enough to give life. But the relativistic universe, with its gravity, seems the easiest way to achieve this. The alternatives are probably not so easy, because even in our universe, only a tiny proportion of the total matter is effectively under the right conditions to give life. So, more complex solutions are even more unlikely.
The Heisenberg uncertainties and fuzzy space at small scale
We said repeatedly in previous chapters that the particles always remain in the three-dimensional space, with perfect precision. Actually no, because at small scale, space is rough, bumpy. The Heisenberg uncertainties also allow particles to exist for a short time, when they should not. They also allow fluctuations of space to exist. Thus, in this bumpy space, the particles are not ideally confined to the three dimensions. The difference is certainly small, but it is observable.
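Stated quantitatively (the standard energy-time uncertainty relation, an editorial addition): a fluctuation of energy ΔE can exist for a time Δt as long as

    $$\Delta E\,\Delta t \gtrsim \frac{\hbar}{2}$$

so a virtual particle of mass m may appear provided it disappears again within a time of order ħ/(2mc²).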
Why is it this way?
Let us compare with a gas: although this gas is formed of a large number of molecules moving in every directions, the gas has an uniform average temperature, uniform pressure, uniform density, etc. The comparison is of course with our vacuum formed of virtual particles which in average stay very close to our three-dimensional space. However, the gas behaves in this way because the particles exchange energy with each other, which leads to an averaging effect. But there is nothing similar with the nibs.
This suggests that gathering into a three-dimensional space is an intrinsic property of nibs... like for the trinomials, which cannot exist out of their three-dimensional set structure. Or it would be a consequence of their «form». However, we can also assume the existence of a «shepherd process» bringing the particles back into our three-dimensional space as soon as they stray a short distance. Such a mechanism could be a logical feedback, as discussed in chapter IV-6: from an immeasurable number of evolution opportunities of our universe, only one is consistent (with three dimensions), so that this one would be selected. Temporary differences (Heisenberg uncertainty, rough space) would not be an issue, but larger discrepancies would lead to inconsistencies, such as the disappearance of matter and charges into other dimensions. Thus, any departure from the norm would be quickly corrected. In this case, it is remarkable that something as abstract as a logical feedback could have such a force, and act tirelessly countless times, as to be one of the most powerful actors of physics. But it is no wonder, if our universe is formed of logical elements. In its field, logic has an infinite force, and it never wears out.
The properties of vacuum
Physicists like to say that vacuum has properties, such as the speed of light, the constants of physics, etc. This implies that vacuum would be «something», a «rubber membrane», or even an «aether», this time cautiously given relativistic properties, so as not to be caught again by Michelson and Morley. And regardless of how this aether would be squashed by the equations of Einstein.
In fact, we really saw in chapter IV-4 that the nibs generate space, and even the relativistic space-time. In doing so, they necessarily do in a specified way, always the same. We saw for instance that the limit of the speed of light, and this speed itself, is generated by the angle (in the Minkowski space) where each nib «sees» the previous. We can rely on the same principle to explain all the other constants, gravitation, electric field, etc.
Thus, not only is space the structure (in the meaning of the Sets Theory) of the whole set of nibs, but in addition, these nibs confer properties on it, such as being relativistic, or being traversed by electric fields, magnetic fields, weak or strong fields, etc., in proportions determined by physical constants such as the permittivity of vacuum.
So, all the complexity of our world goes to the nibs, and to how they connect, instead of to a hypothetical «aether». But after all, it is much simpler this way, to have only nibs, rather than assuming nibs plus a continuum.
And when we try to measure the properties of vacuum, our measuring instrument shows in fact the properties of the nibs of which this instrument is itself made. It is remarkable that we actually always find the same result, even if we build another device.
And space is really an «abstract» property, which exists only as a convenient way to describe the interactions between nibs.
Matter-antimatter asymmetry (speculation)
Nuclear reactions are in principle symmetrical, for instance always giving the same amount of matter and antimatter. There are however a few violations of symmetry, such as with the decay of kaons (unstable particles formed of a quark and an antiquark), which gives a particle a little more often than an anti-particle. Today, such phenomena occur only in the laboratory, but shortly after the Big Bang, at a time when all kinds of particles existed in equilibrium, this deviation could favour matter over antimatter, explaining why only matter exists today.
Such a violation of symmetry is still a mystery for physics. Even for the theory of logical self-generation, it requires that the founder nib was created with this arbitrary property, by the simple play of the creative absurdity, without external cause. This problem can also be discussed in the frame of the anthropic principle (chapter IV-6). There is however experimental evidence for the creative absurdity producing an arbitrary deviation: the observation in the laboratory of violations of varying amplitudes, in a quark-gluon plasma. Here «domains» appear, each with a different physics! Thus, the same causes producing the same effects, physicists managed to get close enough to the Big Bang to observe the creation of a law of physics! However this law has little influence in practice, and anyway these domains disappear when the quark-gluon plasma cools off to ordinary matter.
There is, however, a hypothesis in which this violation of symmetry would be explained in a simple and logical way, the one we saw in chapter IV-5 under the title «an elegant explanation of the gravitational field». In this hypothesis, the gravitational field, and the associated deformation of space, is not transmitted like the other fields, but as waves in the front of reification of the logical self-generation system. (If needed, this front of reification would be the work of virtual particles, such as the famous Higgs bosons.) Thus, mass would be explained by a deformation of this front, which is also the relativistic curvature of the space around this mass. Other types of charges could also be explained by other kinds of deformations of this front.
These waves of deformation of the front of reification have symmetrical properties, whether they are ahead of or behind the general average. Different types of waves would then correspond to different types of charges. For instance, an advance would correspond to a particle, and a delay to an antiparticle. However, it is well known that when a wave reaches a certain magnitude, its shape is no longer symmetrical between the top and the bottom (in physics, one says that nonlinear phenomena appear). A similar phenomenon on a front of reification may then favor the particle over the antiparticle, or more generally produce the violations of symmetries in the weak interaction.
One piece of evidence for interpreting the properties of particles as geometric positions of the nibs in the Minkowski space is that the kaons, while violating the matter-antimatter symmetry, also violate the left-right symmetry in the same proportions.
Can we demonstrate all of physics with the theory of logical self-generation?
The previous reasoning allowed us to recover some of the most bizarre properties of photons and vacuum.
My intuition commands me to look further in this direction. Perhaps only one item is missing to finally connect this part to known physics. We shall already see, in the next chapter IV-8, some encouraging results, such as the prediction of two types of particles which actually exist, bosons and fermions, and of some of their more bizarre properties, such as being unobservable along their path.
Could we predict other entities, such as the electric field, the weak interaction, the strong interaction?
Could we, from simple geometric considerations on the shape of the nibs, predict the exact values of the constants of physics, such as the coupling constants of the various fundamental forces? (Any prediction of this kind would earn its author a well-deserved Nobel prize, and validate the theory of the logical self-generation process in the eyes of official science.)
I am not sure of this. Indeed, we saw that the nibs can have «ad hoc» non-demonstrable properties, set randomly (an anthropic randomness, chapter IV-6) at the time of the Big Bang (or creative absurdity, see chapter III-3, rule 3). These nibs are then logically constrained to produce other identical nibs, transmitting their properties without changing them. It is this logical constraint which forbids ordinary physics to change its own laws.
We even have a recent experimental demonstration of this, with the RHIC experiment (chapter IV-9), where we precisely witnessed the arbitrary attribution, at random, of a value to a parameter of a law of physics. This validates the idea that the numerous parameters of the laws of physics cannot be predicted, but that they were determined in the Big Bang. And that another universe would have different values for these parameters.
However, Quantum Mechanics predicts the formation of «domains», areas of space having different laws of physics (older publications rather say «textures», and it is the word I used in version 1). This is what was actually observed in quark-gluon plasmas. What the theory of the logical self-generation process says is that a logical indeterminism, or a paradox, appeared, whose resolution resulted in the emergence of a new arbitrary law, as explained in chapter III-3, rule 6. This new law is then constrained to spread without being changed.
So, according to the theory of the Big Bang, the four fundamental forces would have appeared only a few tiny fractions of a second after the Big Bang, during a special event called «symmetry breaking»: a time when, as the logical self-generation theory puts it, the previous laws were caught in default, forcing the appearance of new, different laws. Since then, these new laws have been forced to propagate without being changed.
It is therefore not at all sure that we can demonstrate all of physics, if it includes such arbitrary, or even local and accidental, elements.
However we might try, for example by positing that a nib with an electric charge is the same as a nib with a neutral charge, but with a different orientation in the Minkowski space (for instance in another dimension). Thus, a neutral particle and a charged particle would have different local spaces and world lines, which then easily explains their very different behaviour, without invoking anything other than Special Relativity. It would be fascinating to derive the electric field, or even the weak and strong interactions, from such simple geometric considerations in the Minkowski space!
This is the current (2012) state of my thinking on this topic. Of course I continue to think, and if I find more, I shall add chapters to this part.
Set Theory, master of physics
(Added January 2017)
Those who know Set Theory only by its bad reputation as an abstruse and useless theory will wonder how it can be a major determinant of physics. It is, nevertheless, a perfect demonstration of one of the main theses of this book: that our physical world is logical by itself (a logical self-generation process), and not made of objects which would mysteriously behave according to logical laws.
To understand this, consider that some sets have structures (not all). Let us see how.
A simple set is that of the real numbers (floating point numbers), called R. Let us consider more precisely R3, which is the set of triplets of numbers x, y and z. These triplets can be combined according to certain laws of internal composition: addition and multiplication. What is interesting is that a group of triplets remains identical, merely displaced, when subjected to an addition operation. Rotations, scale changes, etc. are also possible, which do not destroy the original group. These apparently abstract notions exactly match what we observe in the physical world: moving or rotating objects does not modify them. Physicists say that these properties are invariant, or speak of invariances or symmetries, which occur in this case with R3, or more precisely the set structure R3, which corresponds to our three-dimensional space.
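As a minimal worked illustration (my notation, not the book's): translating every point of a figure by the same fixed triplet t leaves all mutual distances, and hence the figure itself, unchanged.

```latex
% Translation in R^3: every point p = (x, y, z) is sent to p + t.
% The mutual distances defining the shape of a group of points satisfy
d(p + t,\; q + t) \;=\; \|(p + t) - (q + t)\| \;=\; \|p - q\| \;=\; d(p,\; q),
% so the group is displaced but not deformed: translation is a symmetry of R^3.
```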
Hardly more complicated laws of internal composition (an addition of velocities admitting a maximum c, the speed of light) give Relativity, which makes possible energy, gravitation, black holes, etc., in the Minkowski space (similar to ours on a small scale, but which can be distorted by Relativity).
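For the record, the law of internal composition alluded to here is the standard relativistic addition of collinear velocities (a fact of Special Relativity, not something specific to this book):

```latex
% Composition of collinear velocities u and v:
u \oplus v \;=\; \frac{u + v}{1 + u v / c^{2}}.
% If |u| \le c and |v| \le c then |u \oplus v| \le c; in particular
c \oplus v \;=\; \frac{c + v}{1 + v/c} \;=\; c,
% so the speed of light is an absolute maximum built into the composition law.
```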
Thus, our space would not be a homologue of a mathematical space: it would directly be a mathematical space.
At this point, any student learning Set Theory has wondered whether there are other structures than the simple R3. Yes, there are. They are not taught at school because they are more complicated. But there are some, corresponding to other invariances or symmetries of the elements of the sets.
The thing is, there are not many.
And mathematicians have most probably discovered all of them, as with the Platonic solids.
Indeed, there are not many laws of internal composition that give coherent results with the elements of their sets, that is, which preserve some invariances.
The idea here is that each of these structures would generate a law of physics!
Thus their list matches that of all possible laws of physics, including those which occur at energy levels impossible to attain today.
We find this list on Wikipedia (except for Relativity, which I have added):
Class | Invariance | Conserved quantity
------|------------|-------------------
Proper orthochronous Lorentz symmetry (of space and time) | translation in time (homogeneity) (R3, or Minkowski space in Relativity) | energy
| translation in space (homogeneity) (R3, or Minkowski space in Relativity) | linear momentum (inertia)
| rotation in space (isotropy) (R3, or Minkowski space in Relativity) | angular momentum (rotation inertia)
Discrete symmetry | P, coordinate inversion | spatial parity (left-right symmetry)
| C, charge conjugation | charge parity (electric charge symmetry)
| T, time reversal | time parity (time symmetry)
| CPT (charge, left-right and time symmetry) | product of parities
Internal symmetry (independent of spacetime coordinates) | U(1) gauge transformation | electric charge
| U(1) gauge transformation | lepton generation number
| U(1) gauge transformation | hypercharge
| U(1)Y gauge transformation | weak hypercharge
| U(2) [ U(1) × SU(2) ] | electroweak force
| SU(2) gauge transformation | isospin (up and down quarks)
| SU(2)L gauge transformation | weak isospin
| P × SU(2) | G-parity
| SU(3) "winding number" | baryon number (number of protons and neutrons)
| SU(3) gauge transformation | quark color
| SU(3) (approximate) | quark flavor (type of quark)
| S(U(2) × U(3)) [ U(1) × SU(2) × SU(3) ] | Standard Model (the whole physics)
Quoted from Wikipedia, Creative Commons Share-Alike license.
The explanatory additions in parentheses are from me, for more accessible explanations, when possible.
We find several important mathematical symmetries precisely associated with laws of physics: U, P, S, SU. I could not find confirmation in this article that all possible group structures correspond to physical laws. But from other articles I have read, this would be the case. This assumes that there would also be no other mathematical set structures leading to coherent results.
This has various consequences:
-All physical universes would have roughly the same laws of physics
-However, the preceding mathematical considerations do not specify the values of the different parameters of these laws. Thus each physical universe would have different parameters, and thus a different physics (chapter IV-9), although based on Relativity and the same Standard Model as us (provided that both are complete).
-The universes of consciousness, which do not contain particles, but elements of consciousness experience (sensations, ideas, etc.), could also have the equivalent of «laws of physics». Some may even resemble our physical world, for the same reasons, for example space and space invariances. However, the non-Aristotelian nature of the elements of the consciousness experience certainly leads to other laws, different from our physical world, where space does not exist as such, but is an image in a consciousness. This space will therefore not be defined as physical space is, but rather as in a dream. At this point it is difficult to speculate, given the little experimental information available (NDE, RR4), but these worlds would function rather like dreams, about which we saw in chapter III-8 that they also have self-generation laws, and even more rigorous ones than one might think. See also chapter V-10, under «Dissolution of Consciousness».
-It is possible that consciousness and the universes of consciousness possess the equivalent of laws of physics, also resulting from mathematical structures. Thus many laws, obligations, or impossibilities in the domain of consciousness could result from analogues of the conservation laws, or analogues of thermodynamics. However, it is difficult to be more precise: consciousness does not obey classical mathematics, but non-Aristotelian logics, which have never been studied in detail. We can therefore expect enormous differences. We shall see some of these laws throughout the fifth part, on consciousness. One of the most precise analogues I found is with entropy, chapter V-7. However, in the realm of consciousness, entropy seems constructive instead of destructive. This difference would result from the non-Aristotelian nature of consciousness. But if we use Aristotelian logic to create, for example, a system of laws, then we fall back on a precise analogue of the destructive physical entropy... which is therefore not just a joke.
A simple experiment to touch the logical causes of physics
(Added in January 2017)
A bit of fun physics, yes!
Take for example a balloon: the useless and interesting object par excellence, which sometimes enters through our windows. When this happens, we can make something useful of it.
Beat it.
Well, it makes a kind of a «schtoung», and from that point on, people start screaming and losing their minds. So do the experiment alone.
If you are careful, you will notice another weaker sound, a kind of whistle, which immediately follows the main sound.
Any soccer player will explain to you that this sound is due to the Helmholtz resonance of the air inside the balloon, exactly as in a guitar string or an organ pipe. However, unlike the latter, the different frequencies are not distributed in integer ratios but more randomly. This explains why the sound is somewhat unpleasant (except for the balafons, where the art is to harmonize the resonances of the calabashes).
Now, listen to the sound of the sun (recorded by space probes): it sounds much like the balloon (if you take care to shift it to a similar frequency). The reason for this is that the underlying mathematics is the same.
More precisely, there are several modes of vibration for a sphere. A first series of modes, called s, contains only one. A second series, called p, contains three. A third series, d, contains five, and a fourth, f, contains seven, and so on. This is explained in the same way as with guitar strings, except that the division of the resonant space takes place both along the radius and along the circumference, whence two numbers instead of one for the string.
Now we find exactly the same organization in the solutions of the Schrödinger equation, which defines the orbitals of the electrons around the atom, with the same parameters (there are four in all)! An organization which is in turn at the origin of all chemistry and crystals!
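The counting 1, 3, 5, 7 is the standard degeneracy of spherical harmonics, the angular functions shared by the acoustic modes of a sphere and the atomic orbitals (standard physics, stated here for clarity):

```latex
% Angular part of the modes: spherical harmonics Y_l^m(\theta, \varphi),
% with l = 0, 1, 2, 3, \dots (the series s, p, d, f, \dots) and m = -l, \dots, +l.
N(l) \;=\; 2l + 1 \quad\Longrightarrow\quad N(0) = 1,\; N(1) = 3,\; N(2) = 5,\; N(3) = 7,
% exactly the counting observed for the balloon, the sun and the orbitals.
```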
Even stronger: do the same experiment with a rugby ball. You will not hear any difference with the ear, but a spectrometer will show you: there are more frequencies. Any rugby player will kindly explain to you why: in a soccer ball, the three dimensions x, y and z are the same, and therefore these three resonances are at the same frequency. While in a rugby ball, the length x is greater, and therefore its frequency is shifted with respect to y and z. We find exactly the same phenomenon with an atom subjected to a magnetic field: the spectral lines multiply, in pairs, sextuplets, etc. This is called the Zeeman effect.
So it is the Zeeman effect that harmonizes the sound of rugby balls.
And of balafons.
Bravo to the Mandingoes, who discovered the Zeeman effect in the 13th century.
String Theory and Supersymmetry (speculation)
These are theories proposed by scientists, in order to explain the apparent contradictions between Relativity and Quantum Mechanics. They are still speculative today (2012).
String Theory assumes that the particles are not points, but small vibrating strings, each resonance producing one of the known particles. This theory in its current state is not compatible with the theory of the nibs, as it remains within the concept of objects arranged in a pre-existing space, whose nature we must then explain, and which is necessarily sub-particulate and extra-particulate. Moreover, this theory predicts a space with 11 dimensions, some of which are «wrapped» in an ad hoc way so as to be unobservable. Too many adjustments, in my opinion.
Supersymmetry, for its part, invokes an additional parameter in the classification of the known particles, so that each one has a supersymmetric partner. The existence of such supersymmetric particles has been postulated mainly to explain dark matter in astronomy, because they do not interact with ordinary matter. This theory is fully compatible with everything written in this part; it just is not proven. The lightest supersymmetric particles are within reach of the CERN collider. So we should soon find them... or, if not, abandon the theory of supersymmetry.
General Epistemology Chapter IV-7
Ideas, texts, drawings and realization: Richard Trigaux.
|
df6ea51e4859f327 | Office of the Registrar
Campus Address
Hanover, NH
Phone: (603) 646-xxxx
Fax: (603) 646-xxxx
Email: reg@Dartmouth.EDU
Organization, Regulations, and Courses 2019-20
PHYS 19 Relativistic and Quantum Physics
The general theme of this course is the wave-particle duality of radiation and matter, with an introduction to special relativity. Classical wave phenomena in mechanical and electromagnetic systems including beats, interference, and diffraction. Quantum aspects of electromagnetic radiation include the photoelectric effect, Compton scattering, and pair production and annihilation. Quantum aspects of matter include de Broglie waves, electron diffraction, and the spectrum of the hydrogen atom. The Schrödinger equation is discussed in one and three spatial dimensions.
PHYS 14 and MATH 13, or permission of the instructor.
Distributive and/or World Culture
19F, 20S, 20F, 21S: 9; Laboratory: Arrange |
c3e4679289d2b306 | Main Handbook of Optoelectronics, Second Edition Volume 2: Enabling Technologies
Handbook of Optoelectronics, Second Edition Volume 2: Enabling Technologies
Handbook of Optoelectronics offers a self-contained reference from the basic science and light sources to devices and modern applications across the entire spectrum of disciplines utilizing optoelectronic technologies. This second edition gives a complete update of the original work with a focus on systems and applications.
Volume I covers the details of optoelectronic devices and techniques including semiconductor lasers, optical detectors and receivers, optical fiber devices, modulators, amplifiers, integrated optics, LEDs, and engineered optical materials with brand new chapters on silicon photonics, nanophotonics, and graphene optoelectronics. Volume II addresses the underlying system technologies enabling state-of-the-art communications, imaging, displays, sensing, data processing, energy conversion, and actuation. Volume III is brand new to this edition, focusing on applications in infrastructure, transport, security, surveillance, environmental monitoring, military, industrial, oil and gas, energy generation and distribution, medicine, and free space.
No other resource in the field comes close to its breadth and depth, with contributions from leading industrial and academic institutions around the world. Whether used as a reference, research tool, or broad-based introduction to the field, the Handbook offers everything you need to get started. (The previous edition of this title was published as Handbook of Optoelectronics, 9780750306461.)
John P. Dakin, PhD, is professor (emeritus) at the Optoelectronics Research Centre, University of Southampton, UK.
Robert G. W. Brown, PhD, is chief executive officer of the American Institute of Physics and an adjunct full professor in the Beckman Laser Institute and Medical Clinic at the University of California, Irvine.
Year: 2017
Edition: 2
Publisher: CRC Press
Language: english
Pages: 721 / 722
ISBN 10: 1482241803
ISBN 13: 978-1-4822-4180-8
Series: Series in Optics and Optoelectronics Volume 1
File: PDF, 36.55 MB
Handbook of Optoelectronics
Second Edition
Series in Optics and Optoelectronics
Series Editors:
E. Roy Pike, Kings College, London, UK
Robert G. W. Brown, University of California, Irvine, USA
Handbook of Optoelectronics, Second Edition: Concepts, Devices, and
Techniques – Volume One
John P. Dakin and Robert G. W. Brown (Eds.)
Handbook of Optoelectronics, Second Edition: Enabling Technologies – Volume Two
John P. Dakin and Robert G. W. Brown (Eds.)
Handbook of Optoelectronics, Second Edition: Applied Optical Electronics – Volume Three
Handbook of GaN Semiconductor Materials and Devices
Wengang (Wayne) Bi, Hao-chung (Henry) Kuo, Pei-Cheng Ku, and Bo Shen (Eds.)
Handbook of Optoelectronic Device Modeling and Simulation: Fundamentals, Materials,
Nanostructures, LEDs, and Amplifiers – Volume One
Joachim Piprek (Ed.)
Handbook of Optoelectronic Device Modeling and Simulation: Lasers, Modulators,
Photodetectors, Solar Cells, and Numerical Methods – Volume Two
Joachim Piprek (Ed.)
Nanophotonics and Plasmonics: An Integrated View
Dr. Ching Eng (Jason) Png and Dr. Yuriy Akimov
Handbook of Solid-State Lighting and LEDs
Zhe Chuan Feng (Ed.)
Optical Microring Resonators: Theory, Techniques, and Applications
V. Van
Optical Compressive Imaging
Adrian Stern
Singular Optics
Gregory J. Gbur
The Limits of Resolution
Geoffrey de Villiers and E. Roy Pike
Polarized Light and the Mueller Matrix Approach
José J Gil and Razvigor Ossikovski
Handbook of Optoelectronics
Second Edition
Enabling Technologies
Volume 2
Edited by
John P. Dakin
Robert G. W. Brown
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2018 by Taylor & Francis Group, LLC
No claim to original U.S. Government works
Printed on acid-free paper
International Standard Book Number-13: 978-1-4822-4180-8 (Hardback)
For permission to photocopy or use material electronically from this work, please access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923. Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Library of Congress Cataloging‑in‑Publication Data
Names: Dakin, John, 1947- editor. | Brown, Robert G. W., editor.
Title: Handbook of optoelectronics / edited by John P. Dakin, Robert G. W. Brown.
Description: Second edition. | Boca Raton : Taylor & Francis, CRC Press, 2017. | Series: Series in optics and optoelectronics ; volumes 30-32 | Includes bibliographical references and index. | Contents: volume 1. Concepts, devices, and techniques -- volume 2. Enabling technologies -- volume 3. Applied optical electronics.
Identifiers: LCCN 2017014570 | ISBN 9781482241808 (hardback : alk. paper)
Subjects: LCSH: Optoelectronic devices--Handbooks, manuals, etc.
Classification: LCC TK8320 .H36 2017 | DDC 621.381/045--dc23
LC record available at
Visit the Taylor & Francis Web site at
and the CRC Press Web site at
Series preface
Introduction to the Second Edition
Introduction to the First Edition
Part I
Optical transmission
Michel Joindot and Michel Digonnet
Optical network architectures
Ton Koonen
Optical switching and multiplexed architectures
Dominique Chiaroni
Part II
Camera technology
Kenkichi Tanioka, Takao Ando, and Masayuki Sugawara
Vacuum tube and plasma displays
Makoto Maeda, Tsutae Shinoda, and Heiju Uchiike
Liquid crystal displays
J. Cliff Jones
Technology and applications of spatial light modulators
Uzi Efron
Organic electroluminescent displays
Euan Smith
Three-dimensional display systems
Nick Holliman
Optical scanning and printing
Ron Gibbs
Optical fiber sensors
John P. Dakin, Kazuo Hotate, Robert A. Lieberman, and Michael A. Marcus
Remote optical sensing by laser
J. Michael Vaughan
Optical information storage and recovery
Susanna Orlic
Optical information processing
John N. Lee
Spectroscopic analysis
Günter Gauglitz and John P. Dakin
Optical to electrical energy conversion: Solar cells
Tom Markvart and Fernando Araujo de Castro
Optical nano- and microactuation
George K. Knopf
The art of practical optoelectronic systems
Anthony E. Smart
Series preface
This international series covers all aspects of
theoretical and applied optics and optoelectronics.
Active since 1986, eminent authors have long been
choosing to publish with this series, and it is now
established as a premier forum for high-impact
monographs and textbooks. The editors are proud
of the breadth and depth showcased by published
works, with levels ranging from advanced undergraduate and graduate student texts to professional
references. Topics addressed are both cutting edge
and fundamental, basic science and applications-oriented, on subject matter that includes lasers,
photonic devices, nonlinear optics, interferometry, waves, crystals, optical materials, biomedical
optics, optical tweezers, optical metrology, solid-state lighting, nanophotonics, and silicon photonics. Readers of the series are students, scientists,
and engineers working in optics, optoelectronics,
and related fields in the industry.
Proposals for new volumes in the series may be
directed to Lu Han, executive editor at CRC Press,
Taylor & Francis Group (lu.han@taylorandfrancis.
Introduction to the Second Edition
There have been many detailed technological changes since the first edition of the Handbook in 2006, but the most dramatic changes can be seen in the far more widespread applications of the technology. To
reflect this, our new revision has a completely new
Volume III focused on applications and covering
many case studies from an ever-increasing range of
possible topics. Even as recently as 2006, the high
cost or poorer performance of many optoelectronics
components was still holding back many developments, but now the cost of many high-spec components, particularly ones such as light-emitting
diodes (LEDs), lasers, solar cells, and other optical
detectors, optoelectronic displays, optical fibers,
and components, including optical amplifiers, has
reduced to such an extent that they are now finding
a place in all aspects of our lives. Solid-state optoelectronics now dominates lighting technology and
is starting to dominate many other key areas such as
power generation. It is revolutionizing our transport
by helping to guide fully autonomous vehicles, and
CCTV cameras and optoelectronic displays are seen
everywhere we go.
In addition to the widespread applications
now routinely using optoelectronic components,
since 2006 we have witnessed growth of various
fundamentally new directions of optoelectronics
research and likely new component technologies
for the near future. One of the most significant new
areas of activity has been in nano-optoelectronics;
the use of nanotechnology science, procedures, and
processes to create ultraminiature devices across
all of the optoelectronics domain: laser and LED
sources, optical modulators, photon detectors, and
solar cell technology. Two new chapters on silicon
photonics and nanophotonics, and on graphene optoelectronics, attempt to cover the wide range of nanotechnology developments in optoelectronics over this
past decade. It will, however, be a few years before
the scale-up to volume-manufacturing of nano-based devices becomes an economically feasible
reality, but there is much promise for new generations of optoelectronic technologies to come soon.
Original chapters of the first edition have been
revised and brought up-to-date for the second edition, mostly by the original authors, but in some cases
by new authors, to whom we are especially grateful.
Introduction to the First Edition
Optoelectronics is a remarkably broad scientific
and technological field that supports a multibillion
US dollar per annum global industry, employing
tens of thousands of scientists and engineers. The
optoelectronics industry is one of the great global
businesses of our time.
In this Handbook, we have aimed to produce
a book that is not just a text containing theoretically sound physics and electronics coverage,
nor just a practical engineering handbook, but
a text designed to be strong in both these areas.
We believe that, with the combined assistance of
many world experts, we have succeeded in achieving this very difficult aim. The structure and contents of this Handbook have proved fascinating to
assemble, using this input from so many leading
practitioners of the science, technology, and art of optoelectronics.
Today’s optical telecommunications, display,
and illumination technologies rely heavily on
optoelectronic components: laser diodes, LEDs,
liquid crystal, and plasma screen displays, etc. In
today’s world, it is virtually impossible to find a
piece of electrical equipment that does not employ
optoelectronic devices as a basic necessity—from
CD and DVD players to televisions, from automobiles and aircraft to medical diagnostic facilities
in hospitals and telephones, from satellites and
space-borne missions to underwater exploration
systems—the list is almost endless. Optoelectronics
is in virtually every home and business office in
the developed modern world, in telephones, fax
machines, photocopiers, computers, and lighting.
“Optoelectronics” is not precisely defined in
the literature. In this Handbook, we have covered
not only optoelectronics as a subject concerning
devices and systems that are essentially electronic
in nature, yet involve light (such as the laser diode),
but we have also covered closely related areas of
electro-optics, involving devices that are essentially optical in nature but involve electronics (such
as crystal light-modulators).
To provide firm foundations, this Handbook
opens with a section covering “Basic Concepts.”
The “Introduction” is followed immediately by a
chapter concerning “Materials,” for it is through
the development and application of new materials
and their special properties that the whole business of optoelectronic science and technology now
advances. Many optoelectronic systems still rely on
conventional light sources rather than semiconductor sources, so we cover these in the third chapter,
leaving semiconductor matters to a later section.
The detection of light is fundamental to many
optoelectronic systems, as are optical waveguides,
amplifiers, and lasers, so we cover these in the
remaining chapters of the Basic Concepts section.
The “Advanced Concepts” section focuses
on three areas that will be useful to some of our
intended audience, both now, in advanced optics
and photometry, and now and increasingly in
the future concerning nonlinear and short-pulse optics.
“Optoelectronics Devices and Techniques” is
a core foundation section for this Handbook, as
today’s optoelectronics business relies heavily on
such knowledge. We have attempted to cover all
the main areas of semiconductor optoelectronics
devices and materials in the eleven chapters in this
section, from LEDs and lasers of great variety to
fibers, modulators, and amplifiers. Ultrafast and
integrated devices are increasingly important,
as are organic electroluminescent devices and
photonic bandgap and crystal fibers. Artificially
engineered materials provide a rich source of possibility for next-generation optoelectronic devices.
xii Introduction to the First Edition
At this point, the Handbook “changes gear”—
and we move from the wealth of devices now
available to us—to how they are used in some
of the most important optoelectronic systems
available today. We start with a section covering
“Communication,” for this is how the developed
world talks and communicates by Internet and
email today—we are all now heavily dependent
on optoelectronics. Central to such optoelectronic systems are transmission, network architecture, switching, and multiplex architectures—the
focus of our chapters here. In communication, we
already have a multi-tens-of-billions-of-dollars-per-annum industry today.
“Imaging and displays” is the other industry measured in the tens of billions of dollars per
annum range at the present time. We deal here
with most if not all of the range of optoelectronic
techniques used today from cameras, vacuum and
plasma displays to liquid crystal displays and light
modulators, from electroluminescent displays
and exciting new three-dimensional display technologies just entering the market place in mobile
telephone and laptop computer displays to the
very different application areas of scanning and
“Sensing and Data Processing” is a growing
area of optoelectronics that is becoming increasingly important—from non-invasive patient
measurements in hospitals to remote sensing in
nuclear power stations and aircraft. At the heart of
many of today’s sensing capabilities is the business
of optical fiber sensing, so we begin this section of
the Handbook there, before delving into remote
optical sensing and military systems (at an unclassified level—for here-in lies a problem for this
Handbook—that much of the current development
and capability in military optoelectronics is classified and unpublishable because of its strategic and
operational importance). Optical information storage and recovery is already a huge global industry
supporting the computer and media industries in
particular; optical information processing shows
promise but has yet to break into major global utilization. We cover all of these aspects in our chapters here.
“Applications” of optoelectronics abound, and
we cannot possibly do justice to all the myriad
inventive schemes and capabilities that have been
developed to date. However, we have tried hard to
give a broad overview within major classification
areas, to give you a flavor of the sheer potential of
optoelectronics for application to almost everything that can be measured. We start with the
foundation areas of spectroscopy—and increasingly important surveillance, safety, and security
possibilities. Actuation and control—the link from
optoelectronics to mechanical systems is now pervading nearly all modern machines: cars, aircraft,
ships, industrial production, etc.—a very long list
is possible here. Solar power is and will continue
to be of increasing importance—with potential
for urgently needed breakthroughs in photon to
electron conversion efficiency and cost of panels. Medical applications of optoelectronics are
increasing all the time, with new learned journals
and magazines regularly being started in this field.
Finally, we come to the art of practical optoelectronic systems—how do you put optoelectronic
devices together into reliable and useful systems,
and what are the “black art” experiences learned
through painful experience and failure? This is
what other optoelectronic books never tell you, and
we are fortunate to have a chapter that addresses
many of the questions we should be thinking about
as we design and build systems—but often forget or
neglect at our peril.
In years to come, optoelectronics will develop
in many new directions. Some of the more likely
directions to emerge by 2010 will include optical
packet switching, quantum cryptographic communications, three-dimensional and large-area
thin-film displays, high-efficiency solar-power
generation, widespread biomedical and biophotonic disease analyses and treatments, and optoelectronic purification processes. Many new
devices will be based on quantum dots, photonic
crystals, and nano-optoelectronic components. A
future edition of this Handbook is likely to report
on these rapidly changing fields currently pursued
in basic research laboratories.
We are confident you will enjoy using this
Handbook of Optoelectronics, derive fascination
and pleasure in this richly rewarding scientific and
technological field, and apply your knowledge in
either your research or your business.
John P. Dakin, PhD, is professor (Emeritus) at the
Optoelectronics Research Centre, University of
Southampton, UK. He earned a BSc and a PhD at
the University of Southampton and remained there
as a Research Fellow until 1973, where he supervised
research and development of optical fiber sensors
and other optical measurement instruments. He
then spent 2 years in Germany at AEG Telefunken;
12 years at Plessey, research in Havant and then
Romsey, UK; and 2 years with York Limited/York
Biodynamics in Chandler’s Ford, UK before returning to the University of Southampton.
He has authored more than 150 technical and
scientific papers, and more than 120 patent applications. He was previously a visiting professor at
the University of Strathclyde, Glasgow.
Dr. Dakin has won a number of awards, including “Inventor of the Year” for Plessey Electronic
Systems Limited and the Electronics Divisional
Board Premium of the Institute of Electrical and
Electronics Engineers, UK. Earlier, he won open
scholarships to both Southampton and Manchester
He has also been responsible for a number of
key electro-optic developments. These include the
sphere lens optical fiber connector, the first wavelength division multiplexing optical shaft encoder,
the Raman optical fiber distributed temperature
sensor, the first realization of a fiber optic passive
hydrophone array sensor, and the Sagnac location
method described here, plus a number of novel
optical gas sensing methods. More recently, he
was responsible for developing a new distributed
acoustic and seismic optical fiber sensing system,
which is finding major applications in oil and gas
exploration, transport and security systems.
Robert G. W. Brown, PhD, is at the Beckman
Laser Institute and Medical Clinic at the University
of California, Irvine. He earned a PhD in engineering at the University of Surrey, Surrey, and a BS in
physics at Royal Holloway College at the University
of London, London. He was previously an applied
physicist at Rockwell Collins, Cedar Rapids, IA,
where he carried out research in photonic ultrafast computing, optical detectors, and optical
materials. Previously, he was an advisor to the
UK government, and international and editorial
director of the Institute of Physics. He is an elected
member of the European Academy of the Sciences
and Arts (Academia Europaea) and special professor at the University of Nottingham, Nottingham.
He also retains a position as adjunct full professor at the University of California, Irvine, in the
Beckman Laser Institute and Medical Clinic,
Irvine, California, and as visiting professor in the
department of computer science. He has authored
more than 120 articles in peer-reviewed journals
and holds 34 patents, several of which have been
successfully commercialized.
Dr. Brown has been recognized for his entrepreneurship with the UK Ministry of Defence Prize
for Outstanding Technology Transfer, a prize from
Sharp Corporation (Japan) for his novel laser-diode invention, and, together with his team at
the UK Institute of Physics, a Queen’s Award for
Enterprise, the highest honor bestowed on a UK
company. He has guest edited several special issues
of Applied Physics and was consultant to many
companies and government research centers in
the United States and the United Kingdom. He is a
series editor of the CRC Press “Series in Optics and Optoelectronics.”

Contributors
Takao Ando
Research Institute of Electronics
Shizuoka University
Hamamatsu, Japan
Michel Joindot
Laboratoire FOTON, UMR
Lannion, France
Dominique Chiaroni
NOKIA Bell Labs
Paris-Saclay, France
J. Cliff Jones
School of Physics and Astronomy
University of Leeds
Leeds, United Kingdom
John P. Dakin
Optoelectronics Research Centre
University of Southampton
Southampton, United Kingdom
Fernando Araujo de Castro
Materials Division
National Physical Laboratory
Middlesex, United Kingdom
Michel Digonnet
Stanford Photonics Research Center
Stanford University
Stanford, California
Uzi Efron
Holon Institute of Technology
Holon, Israel
Günter Gauglitz
Department of Analytical Chemistry
University of Tübingen
Tübingen, Germany
Ron Gibbs
Gibbs Associates
Dunstable, United Kingdom
Nicholas Holliman
University of Durham
Durham, United Kingdom
Kazuo Hotate
University of Tokyo
Tokyo, Japan
George K. Knopf
Department of Mechanical and Materials Engineering
University of Western Ontario
Ontario, Canada
Ton Koonen
Department of Electrical Engineering
Technische Universiteit Eindhoven
Eindhoven, the Netherlands
John N. Lee
Naval Research Laboratory
Washington, District of Columbia
Robert A. Lieberman
Lumoptix Inc
Redondo Beach, CA
Makoto Maeda
Home Network Company
Kanagawa, Japan
Michael A. Marcus
Lumetrics Inc.
Rochester, New York
Tom Markvart
University of Southampton
Southampton, United Kingdom
Susanna Orlic
Department of Optics
Technische Universität Berlin
Berlin, Germany
Tsutae Shinoda
Home Network Company
Kanagawa, Japan
Anthony E. Smart
Scattering Solutions, Inc.
Costa Mesa, California
Euan Smith
Light Blue Optics
Cambridge, United Kingdom
Masayuki Sugawara
Kochi University of Technology
Tokyo, Japan
Kenkichi Tanioka
Kochi University of Technology
Tokyo, Japan
Heiju Uchiike
Home Network Company
Kanagawa, Japan
J. Michael Vaughan
Worcestershire, United Kingdom
Part I: Enabling technologies
1 Optical transmission
Michel Joindot (Laboratoire FOTON, Lannion) and Michel Digonnet (Stanford University)
1.1 Introduction
1.2 History of the introduction of optics in backbone networks
1.3 General structure of optical transmission systems
    1.3.1 Modulation and detection: RZ and NRZ codes
    1.3.2 Basic architecture of amplified WDM communication links
    1.3.3 Basic architectures of repeaterless systems
    1.3.4 Optical reach and amplification
1.4 Limitations of optical transmission
    1.4.1 Noise sources and bit error rate
        Amplifier noise
        Photoreceiver thermal noise
        Relationship between bit error rate and noise
        Accumulation of noise
    1.4.2 Signal distortions induced by propagation
        Chromatic dispersion
        Nonlinear effects
        Self-phase modulation
        Cross-phase modulation
        Four-wave mixing
        Stimulated Brillouin scattering
        Stimulated Raman scattering
        Polarization mode dispersion
1.5 Design of an optical WDM system
    1.5.1 Global performance of a system: BER and OSNR
    1.5.2 Critical parameters and trade-offs for terrestrial, undersea, and repeaterless systems
1.6 State of the art and future of the WDM technology
    1.6.1 State-of-the-art WDM system capacity and distance
    1.6.2 Forward error-correcting codes
    1.6.3 Ultralong-haul technology: New problems arising
    1.6.4 Raman amplification
    1.6.5 Diversification of fibers and International Telecommunications Union fiber standards
    1.6.6 Toward the future: WDM 40G systems and beyond
        Increasing the number of channels by increasing the amplification bandwidth
        Increasing the number of channels with a closer channel spacing
        Modulation schemes
        Selected recent results
Optics has become the sole transmission technology in backbone networks, providing capacities previously unknown, very high transmission quality, and a reduction of operational costs per transmitted bit. This is due to the development of wavelength division multiplexing (WDM), introduced in 1995, which by 2005 allowed around 800 Gbit/s (80 channels at 10 Gbit/s) to be transmitted over one single-mode fiber. This chapter describes this history, highlights the basics of optical transmission and the development of WDM technology, and shows how the capacity of WDM systems can be increased by extending the usable bandwidth, increasing the channel count, or raising the bit rate per channel.
In 2005, coherent reception, which had been explored by many academic and industrial laboratories in the 1980s, came back to the foreground, but receivers were implemented quite differently from what had been proposed 30 years before. In fact, this very important technological step is closely related to the progress of electronics, which allows the implementation of complex and powerful digital signal processing (DSP) algorithms. The receiver now consists of a very “simple” optical front end (local oscillator and photodiodes), analogue-to-digital converters, and a DSP unit compensating for all the transmission impairments due to propagation over the fiber. Because the optical demodulator simply translates the channel transfer function into baseband, the baseband transfer function to be compensated for by the DSP unit is just the transfer function of the optical channel, which is not true with quadratic detection. Coherent reception opens the way to complex, more-than-binary modulation schemes, which increases the transmitted bit rate within a given bandwidth.
Chromatic dispersion is no longer compensated in-line, which eliminates the dispersion-compensation fiber at each amplification site, and polarization mode dispersion (PMD), which used to be a very serious problem, can now be compensated for very efficiently. Moreover, both polarizations can be used, which doubles the potential capacity of each channel by using polarization division multiplexing in conjunction with WDM. And DSP can cancel the interference between polarizations and then separate them without any problem.
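A hedged sketch of what “separating the polarizations” means in practice (standard coherent-DSP formalism, not a formula taken from this chapter): the fiber mixes the two transmitted polarizations through a 2 × 2 Jones matrix, which the receiver inverts adaptively.

```latex
% Received field = fiber Jones matrix J acting on the transmitted field:
\begin{pmatrix} E'_x \\ E'_y \end{pmatrix}
 \;=\; J \begin{pmatrix} E_x \\ E_y \end{pmatrix},
\qquad
\begin{pmatrix} \hat{E}_x \\ \hat{E}_y \end{pmatrix}
 \;=\; \hat{J}^{-1} \begin{pmatrix} E'_x \\ E'_y \end{pmatrix}.
% The DSP estimates \hat{J}^{-1} adaptively (a 2x2 MIMO equalizer), undoing
% the polarization mixing and PMD without any optical polarization control.
```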
Commercial coherent systems transmitting 80 × 100 Gbit/s channels in the C-band (8 Tbit/s over one single-mode fiber) have been available since 2011 and are installed in backbone networks to face the ever-increasing traffic demand, while 200 and even 400 Gbit/s per channel are actively investigated in industrial laboratories.
The enormous potential of optical waves for high-rate transmission of information was recognized
as early as the 1960s. Because of their very high
frequency, it was predicted that light waves could
be ultimately modulated at extremely large bit
rates, well in excess of 100 Gbit·s−1 and orders of
magnitude faster than is possible with standard
microwave-based communication systems. The
promise of optical waves for high-speed communication became a reality starting in the late 1980s
and culminated with the telecommunication
boom of the late 1990s, during which time a worldwide communication network involving many
tens of millions of miles of fiber was deployed in
many countries and across many oceans. In fact,
much of the material covered in this handbook
was generated to a large extent as a result of the
extensive optoelectronics research that was carried
out in support of this burgeoning industry. The
purpose of this chapter is to provide a brief overview of the basic architectures and properties of
the most widely used type of optical transmission
line, which exploit the enormous bandwidth of
optical fiber by a general technique called WDM.
After a brief history of optical network development, this chapter examines the various physical
mechanisms that limit the performance of WDM
systems, in particular, their output power [which
affects the output signal-to-noise ratio (SNR)],
capacity (bit rate times number of channels), optical reach (maximum distance between electronic
regeneration), and cost. The emphasis is placed on
the main performance-limiting effects, namely
fiber optical nonlinearities, fiber chromatic and
group velocity dispersions (GVDs), optical amplifier noise and noise accumulation, and receiver
noise. Means of reducing these effects, including
fiber design, dispersion management, modulation schemes, and error-correcting codes, are also
reviewed briefly. The text is abundantly illustrated
with examples of both laboratory and commercial
optical communication systems to give the reader
a flavor of the kinds of system performance that are
available. This chapter is not meant to be exhaustive, but to serve as a broad introduction and to
supply background material for the following two
chapters (optical network architecture and optical
switching and multiplexed architectures), which
dwell more deeply into details of system architectures. We also refer the reader to the abundant
literature for a more in-depth description of these
and many other aspects of optical communication
systems (see, for example, [1,17,32,34]).
Enabling the implementation of the optical communication concept required the development of a
large number of key technologies. From the 1960s
through the 1980s, many academic and industrial
laboratories around the world carried out extensive research towards this goal. The three most difficult R&D tasks were the development of reliable
laser sources and photodetectors to generate and
detect the optical signals, of suitable optical fibers
to carry the signals, and of the components needed
to perform such basic functions as splitting, filtering, combining, polarizing, and amplifying light
signals along the fiber network. Early silica-based
fibers had a large core and consequently carried a
large number of transverse modes, all of which
travel at different velocities, leading to unavoidable spreading of the short optical bits that carry
the information and thus to unacceptably low bit
rates over long distances. Perhaps, the most crucial
technological breakthrough was the development
of single-mode fibers, which first appeared in the
mid-1970s and completely eliminated this problem.
Over the following decade, progress in both material quality and manufacturing processes led to a
dramatic reduction in the propagation loss of these
fibers, from tens of decibels per kilometer in early
prototypes to the amazingly low typical current loss
of 0.18 dB·km−1 around 1.5 μm used in submarine
systems (or an attenuation of only 50% through a
slab of glass about 17 km thick!). The typical attenuation of fibers used in long-distance terrestrial networks today is around 0.22 dB·km−1 at 1550 nm.
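These attenuation figures are easy to verify numerically (a small sketch; the function name and example values are mine):

```python
# Fraction of optical power surviving a fiber of given length and attenuation.
def transmitted_fraction(alpha_db_per_km: float, length_km: float) -> float:
    return 10.0 ** (-alpha_db_per_km * length_km / 10.0)

print(transmitted_fraction(0.18, 17))   # ~0.49: half the power after ~17 km
print(transmitted_fraction(0.22, 100))  # ~0.006: a -22 dB terrestrial span
```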
Fiber components were developed in the 1980s,
including such fundamental devices as fiber couplers, fiber polarizers and polarization controllers, fiber wavelength division multiplexers [5,48],
and rare-earth-doped fiber sources and amplifiers [17,18,41]. The descendants of these and several
other components now form the building blocks of
modern optical networks. Interestingly, the original
basic research on almost all of these components was
actually done not with communication systems in
mind, but for fiber sensor applications, often under
military sponsorship, in particular for development
of the fiber optic gyroscope [5]. Parallel work on
optoelectronic devices produced other cornerstone
active devices, including high quantum efficiency,
low-noise photodetectors, efficient and low-noise
semiconductor laser diodes in the near infrared, in
particular distributed-feedback (DFB) lasers, as well
as semiconductor amplifiers, although these were
eclipsed in the late 1980s by rare-earth-doped fiber
amplifiers. The development of high-power laser
diodes, began in the 1980s to pump high-power solidstate lasers, in particular for military applications,
sped up substantially in the late 1980s in response
for the growing demand for compact pump sources
around 980 and 1480 nm for then-emerging erbiumdoped fiber amplifiers. Another key element in the
development of optical communication networks
was the advent of a new information management
concept called Synchronous Optical NETworks [31],
especially matched to optical signals but also usable
for other transmission technologies.
Up until the mid-1980s, long-distance communication network systems were based mostly
on coaxial cable and radio frequency technologies.
Although the maximum capacity of a single coaxial
cable could be as high as 560 Mbit·s−1, most installed
systems operated at a bit rate of 140 Mbit·s−1, while
radio links could support typically eight 140
Mbit·s−1 radio channels. Intercontinental traffic was
shared between satellite links and analogue coaxial
undersea systems; digital undersea coaxial systems
never existed. The switch to optical networks was
motivated in part by the need for a much greater
capacity, in part by the need for improved security
and reliability of radio-based and cable-based systems. These systems were commonly affected by
two different types of failures, namely signal fading and cable breaks due to civil engineering work,
respectively. The first optical transmission systems
were introduced in communication networks in the
mid-1980s. Early prototypes were classical digital
systems with a capacity that started at 34 Mbit·s−1
and rapidly grew to 140 Mbit·s−1, i.e., comparable to
established technologies. Optical communication
immediately outperformed the coaxial technology
in terms of regeneration span, which was tens of kilometers compared to less than 2 km for high-capacity
coaxial-cable systems. However, there was no significant advantage compared with radio links, in terms
of either capacity or regeneration span length, the
latter being typically around 50 km. One could thus
envision future long-distance networks based on a
combination of secure radio links and optical fibers.
Soon after optical devices became reliable enough
for operation in a submerged environment, optical
fiber links rapidly replaced coaxial-cable systems.
The very first optical systems used multimode fibers
and operated around 800 nm. This spectral window
was changed to 1300 nm for the second generation
of systems, when lasers around this wavelength first
became available. In Japan, where optical communication links were installed early on, prior to the
development of the 1550-nm systems, many systems
operate in this window. Most of the current systems
for backbone networks, especially in Europe and the
United States, operate in the spectral region known
as the C-band (1530–1565 nm). This has become the
preferred window of operation because the attenuation of silica-based single-mode fiber is minimum
around 1550 nm. The first transatlantic optical
cable, TAT-8, was deployed in 1988. Containing two
fiber pairs and a large number of repeaters, it spans a
distance of about 6600 km under the Atlantic Ocean
between Europe and the United States and carries 280 Mbits of information per second. In 1993,
optical transmission systems carrying 2.5 Gbit·s−1
(16 × 155 Mbit·s−1) over a single fiber with a typical
regeneration span of 100 km began to be added to
the growing worldwide fiber-optic network. In
terms of both capacity and transmission quality,
radio-based systems could no longer compete, and
optics became the unique and dominating technology in backbone networks.
The single most important component that
made high-speed communication possible over
great distances (≫100 km) without electronic
regeneration is the optical amplifier. Although
the loss of a communication fiber around 1.5 μm
is extremely small, after a few tens of kilometers,
typically 50–100 km, the signal power has been
so strongly attenuated that further propagation
would cause the SNR of the signal at the receiver
to degrade significantly, and thus the transmission
quality, represented by the bit error rate (BER),
to be seriously compromised. The SNR can be
improved by increasing the input signal power,
but the latter can only be increased so much before
the onset of devastating nonlinear effects in the
fiber, in particular stimulated Raman scattering
(SRS), stimulated Brillouin scattering (SBS), and
four-wave mixing (FWM). Moreover, the gain in
distance would be limited: a transmission over
200 km instead of 100 km of current fiber would
require the input power to be increased by roughly
20 dB!
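A quick sanity check of that figure (my arithmetic, using the terrestrial attenuation quoted earlier):

```latex
% Extra loss from 100 km of additional fiber at ~0.2 dB/km:
\Delta A \;=\; 100\ \text{km} \times 0.2\ \text{dB·km}^{-1} \;=\; 20\ \text{dB},
% which corresponds to a required launch-power increase of
10^{20/10} \;=\; 100 \quad \text{(a factor of one hundred).}
```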
This distance limitation was initially solved by
placing optoelectronic repeaters along the optical line. Each repeater detects the incoming data
stream, amplifies it electronically, and modulates
the current of a new laser diode with the detected
modulation. The modulated diode’s output signal
is then launched into the next segment of the fiber
link. This approach works well, but its cost is high
and its bit rate is limited, on both counts by the
repeaters’ high-speed electronics. A much cheaper
alternative, which requires high-speed electronics only at the two ends of the transmission line,
is optical amplification. Each electronic repeater is
now replaced by an in-line optical amplifier, which
amplifies the low-power signals that have traveled
through a long optic span before their SNR gets
too low and then reinjects them into the next segment in the optic link. The advantage of this alloptical solution is clearly that the optical signal is
never detected and turned into an electronic signal, until it reaches the end of the long-haul optical
line, which can be thousands of kilometers long.
Because the noise figure (NF) of optical amplifiers
is low, typically 3–5 dB, the SNR can still be quite
good even after the signals have traveled through
dozens of amplifiers.
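As a hedged sketch of why dozens of amplifiers are tolerable (a textbook ASE-accumulation estimate, not a formula from this chapter; the function name and example numbers are mine):

```python
import math

# Approximate OSNR (0.1 nm reference bandwidth) after N identical amplified
# spans. The 58 dB constant is -10*log10(h*nu*B_ref in mW) at ~1550 nm.
def osnr_db(p_ch_dbm: float, span_loss_db: float, nf_db: float, n_spans: int) -> float:
    """Received OSNR when each amplifier exactly compensates its span loss."""
    return 58.0 + p_ch_dbm - span_loss_db - nf_db - 10.0 * math.log10(n_spans)

# Example: 0 dBm/channel, 20 dB spans, NF = 5 dB, 20 amplifiers in cascade:
print(round(osnr_db(0.0, 20.0, 5.0, 20), 1))  # -> 20.0 dB, still workable
```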
Starting as early as the 1960s, much research was
devoted to several types of in-line optical amplifiers, first with semiconductor waveguide amplifiers
[51], then with rare-earth-doped fiber amplifiers [18],
and more recently Raman fiber amplifiers [28].
Semiconductor amplifiers turned out to have
the highest wall-plug efficiency. However, at bit
rates under about 1 Gbit·s−1, in WDM systems
they induce cross-talk between signal channels.
Although solutions have been recently proposed,
semiconductor amplifiers have not yet entered the
market in any significant way, partly because of
the resounding success of the erbium-doped fiber
amplifier (EDFA). First reported in 1987 [42], this
device provides a high small-signal gain around
1.5 μm (up to ∼50 dB) with a high saturation
power and with an extremely high efficiency—the
record is 11 dB of small-signal single-pass gain
per milliwatt of pump power [52]. EDFAs used
in telecommunication systems operate in saturation and have a lower gain, but it is still typically
as high as 20–30 dB. The EDFAs can be pumped
with a laser diode, at either 980 or 1480 nm, and
they are thus very compact. Another key property is their wide gain spectrum, which stretches
from ~1475 to ~1610 nm, or a total bandwidth
of 135 nm (~16.4 THz!). For technical reasons, a
single EDFA does not generally supply gain over
this entire range, but rather over one of three
smaller bands, called the S-band (for “short,”
~1480–1520 nm), the C-band (for “conventional,”
~1530–1565 nm), and the L-band (for “long,”
~1565–1610 nm). Amplification in the S-band can
also be accomplished with a thulium-doped fiber
amplifier (TDFA) [49]. Gain has been obtained
over the S- and C-band by combining an EDFA
and a TDFA [50].
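To relate the quoted wavelength window to a bandwidth in THz, a small sketch (our own illustration; the exact figure depends on the reference wavelength used in the usual Δν ≈ cΔλ/λ² approximation):

```python
# Frequency width of the ~1475-1610 nm EDFA gain window.
C = 299_792_458.0  # speed of light, m/s

def band_edges_thz(lam_min_nm: float, lam_max_nm: float) -> float:
    """Exact frequency difference between the two band edges, in THz."""
    return (C / (lam_min_nm * 1e-9) - C / (lam_max_nm * 1e-9)) / 1e12

def approx_thz(dlam_nm: float, lam_ref_nm: float) -> float:
    """Usual approximation dv = c*dlam/lam^2, in THz."""
    return C * (dlam_nm * 1e-9) / (lam_ref_nm * 1e-9) ** 2 / 1e12

print(band_edges_thz(1475, 1610))   # ~17.0 THz
print(approx_thz(135, 1575))        # ~16.3 THz, close to the quoted ~16.4
```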
Perhaps more importantly, EDFAs induce negligible channel cross-talk at modulation frequencies
above about 1 MHz. These unique features make the EDFA nearly ideally suited for optical communication
systems around 1.5 μm. Since the mid-1990s, it has
been the amplifier of choice in the overwhelming
majority of deployed systems, thus eliminating the
electrical regeneration bottleneck.
The very large gain bandwidth of EDFAs and
other optical amplifiers also provided the opportunity of amplifying a large number of modulated
optical carriers at different wavelengths distributed over the amplifier bandwidth. This concept of
WDM had of course already been applied in radio
links. One significant advantage of WDM optical
systems is that the same amplifier amplifies many
optical channels, in contrast with classical regenerated systems, which require one repeater per channel. Optical amplifiers thus reduce the installation
cost of networks in two major ways. First, the
WDM technique results in an increase in capacity without laying new fibers, which reduces fiber
cost. Second, the cost of amplification is shared
by a large number of channels, and because the
use of a single optical amplifier is cheaper than
implementing one regenerator per channel, the
transmission cost is reduced proportionally. This
critical economic advantage provided the final
impetus needed to displace regenerated systems
and launch the deployment of the worldwide optical WDM backbone networks that took place at the
end of the 1990s.
1.3.1 Modulation and detection:
RZ and NRZ codes
While radio systems use a wide variety of modulation formats in order to improve spectrum utilization, in optical systems data have been so far
transmitted using binary intensity modulation. A
logic 1 (resp. 0) is associated with the presence (resp.
absence) of an optical pulse. Two types of line codes
are mainly encountered: nonreturn to zero (NRZ),
where the impulse duration is equal to the symbol
duration (defined as the inverse of the data rate)
and return to zero (RZ), where the impulse duration is significantly smaller than the symbol duration. This property explains why the name “return
to zero” is used; if the impulse duration equals
roughly one half of the symbol time, the modulation format is designated as RZ 50%. So at a bit rate
of 10 Gbit·s−1, NRZ uses impulses with a width of
approximately 100 ps, while RZ 50% or RZ 25%
will use 50 and 25 ps wide pulses, respectively. RZ
has, for a given mean signal power, a higher signal
peak power. This property can be used to exploit
nonlinear propagation effects, which under certain conditions can improve system performance.
Details will be provided further on.
Research is actively being conducted to investigate new modulation schemes for future high-bit-rate systems. For instance, duobinary encoding, a
well-known modulation scheme in radio systems,
has been proposed because of its higher resistance to chromatic dispersion. Carrier-suppressed
RZ, an RZ modulation format with an additional
binary phase modulation, is also extensively studied, as well as recently differential phase shift keying (DPSK); both provide a higher resistance to
nonlinear effects. However, only NRZ and RZ are
used in installed systems today.
Detection of the optical signals at the end of a
transmission line is performed with a photodetector, which is typically a PIN or an avalanche
diode. Photons are converted in the semiconductor into electron–hole pairs and collected in an
electrical circuit. The generated current is then
amplified and sent to a decision circuit, where the
data stream is detected by comparing the signal
to a decision threshold, as in any digital system.
Detection errors can occur in particular because
of the presence of noise on the signal and in the
detector. The error probability is a measurement
of the transmission quality. In practice, the error
probability is estimated by the BER, defined as the
ratio of erroneous bits to the total number of transmitted bits.
Several sources of noise are typically present in
the detection process of an optical wave. Shot noise,
the most fundamental one, arises from the quantized nature of light. Thermal noise is generated
in the electrical amplifiers that follow the photodetector. In PIN receivers, thermal noise is typically
15–20 dB larger than the quantum limit, and if the
optical signal is low, thermal noise dominates shot
noise. In the case of amplified systems under normal operating conditions, the amplified spontaneous emission (ASE) noise of the in-line amplifiers
is largely dominant compared to the receiver noise,
which can thus be neglected.
1.3.2 Basic architecture of amplified
WDM communication links
A typical amplified WDM optical link is illustrated in Figure 1.1. The emitter consists of N lasers
of different wavelengths, each one representing a
communication channel. The lasers are typically
DFB semiconductor lasers with a frequency stabilized by a number of means, including temperature control and often Bragg gratings. Each laser is
amplitude modulated by the data to be transmitted. This modulation is performed with an external modulator, such as an amplitude modulator
based on lithium niobate waveguide technology.
Direct modulation of the laser current would be
simpler and less costly, but it introduces chirping
of the laser frequency, which is unacceptable at
high modulation frequencies over long distances
[1,29]. The fiber-pigtailed laser outputs are combined onto the optical fiber bus using a wavelength
division multiplexer, then generally amplified by a
booster fiber amplifier.
The multiplexer can be based on concatenated
WDM couplers (for low number of channels) or
arrayed waveguide grating multiplexers. This is the
technology of choice for high channel counts, in
particular in the so-called dense wavelength division multiplexed systems: although this term has
no precise definition, it applies usually to systems
with a channel spacing less than 200 GHz.
Figure 1.1 Structure of an amplified WDM optical system.
In-line
optical amplifiers are distributed along the fiber
bus to periodically amplify the power in the signals, depleted by lossy propagation along the fiber.
Ideally, each amplifier provides just enough gain
in each channel to compensate for the loss in that
channel, i.e., such that each channel experiences
a net gain of unity. Because the gain of an optical
amplifier and to a smaller extent the loss along the
fiber are wavelength dependent, the net gain is different for different channels. If the difference in net
gain between extreme channels is too large, after
a few amplifier/bus spans the power in the strongest channel will grow excessively, thus robbing
the gain for other channels and making their SNR
at the receiver input unacceptably low. This major
problem is typically resolved by flattening or otherwise shaping the amplifier spectral gain profile, or
equivalently equalizing the power in the channels,
using one of several possible techniques, either passive (for example, long-period fiber gratings) [58]
or dynamic (e.g., with variable optical attenuators).
While the first WDM systems that appeared in the
mid-1990s used basic amplifiers without a control system, the new very long reach systems include
complex gain flattening devices that compensate
for the accumulation of gain tilt. The gain of an
in-line amplifier ranges approximately from 15 to
30 dB per channel. The distance between amplifiers is typically 30–100 km, depending on fiber loss,
number of channels, and other system parameters.
In deployed undersea systems, the amplifiers are
equally spaced, whereas in terrestrial networks
the amplifier location depends on geographical
constraints, for instance, building locations, and amplifiers tend to be unevenly spaced. At the output of the transmission line, a wavelength division
demultiplexer separates the N optical channels,
which are then sent individually to a receiver, then
electronically processed.
1.3.3 Basic architectures of
repeaterless systems
A repeaterless communication system aims to
accomplish a very long optical reach without in-line amplifiers. A common application is connecting two terrestrial points on each side of a strait or narrow arm of sea, in which case it is
generally not worth incurring the cost of undersea
amplifiers. Some deployed repeaterless systems are
extremely long, as much as hundreds of kilometers,
and consequently they exhibit a high span attenuation, up to 50 or even 60 dB. The problem is then to
ensure that the output power at the receiver is high
enough, in spite of the high span loss, to achieve
the required SNR at the end of the line.
This goal has been achieved with a number of
architectures. A common solution involves using
a preamplifier, i.e., an amplifier placed before the
receiver to increase the detected power and reduce
the receiver NF, as is also often done in classical WDM systems. Another one is to use a high-power amplifier, i.e., an amplifier placed between
the emitter and the transmission fiber to boost the
signal power launched into the fiber. Although
such a booster amplifier is also present in some
of the amplified WDM systems described before,
the typical feature in repeaterless systems is the
high power level, which can reach up to 30 dBm.
In both cases, the amplifier can be either an EDFA
or a Raman amplifier, or a combination of both.
This solution, as we will see further on, is limited
by nonlinear effects in the fiber, although they can
be somewhat mitigated with proper dispersion management.
A third solution specific to repeaterless systems
is to place an amplifier fiber in the transmission
fiber itself and to pump it remotely with pump
power launched into the transmission fiber from
either end. The amplifier fiber can then be a length
of Er-doped fiber; the entire transmission fiber can
be lightly doped with erbium (the so-called distributed fiber amplifier); or the transmission fiber
can be used as a Raman amplifier. The drawback
of this general approach is that it requires a substantially higher pump power than a traditional
EDFA, and it is therefore more costly. The reason
is that the pump must propagate through a long
length of transmission fiber before reaching the
amplifier fiber, and because the transmission fiber
is much more lossy at typical pump wavelengths
than in the signal band, some of the pump power is
lost. A fourth general solution, which is not specific
to repeaterless systems, is to use powerful error-correcting schemes [12].
1.3.4 Optical reach and
amplification span
Two important features of a WDM communication system are its total capacity, usually expressed
as N × D, where N is the number of optical channels
and D the bit rate per channel, and the optical
reach, which is the maximum distance over which
the signal can be transmitted without regeneration.
Even in amplified systems with a nominally unity
net gain transmission, due to the accumulation of
noise from the optical amplifiers and signal distortions, after a long enough transmission distance
the bit error rate becomes unacceptably high, and the
optical signals need to be regenerated. In practical
deployed WDM systems in 2001, this limitation
typically occurs after about seven amplifier spans
with a loss of roughly 25 dB per span.
Another important parameter is the amplification span, i.e., the distance between adjacent amplifiers. The performance of an optical WDM system
cannot be expressed only in terms of optical reach;
the number of spans must also be introduced. As
an example, the optical reach of commercially
available terrestrial systems in 2002 was around
800 km, compared to 6500 km in transatlantic
systems. A key difference between them is the
amplification span, as will be explained in the next
section. In the following, WDM systems with a bit
rate per channel of 2.5, 10, and 40 Gbit·s−1 will be
designated as WDM 2.5G, WDM 10G, and WDM
40G, respectively.
1.4 Limitations of optical transmission systems
1.4.1 Noise sources and bit error rate
AMPLIFIER NOISE
Amplification cannot be performed without adding noise to the amplified signals. In optical amplifiers, this noise originates from ASE [17], which
is made of spontaneous emission photons emitted by the active ions (Er3+ in the case of EDFAs)
via radiative relaxation subsequently amplified as
they travel through the gain medium. The spectral
power density of the ASE signal per polarization
mode is given by
\[ \gamma_{\mathrm{ASE}} = n_{\mathrm{sp}}\, h\nu\, (G - 1) \tag{1.1} \]
where G is the amplifier gain, h Planck's constant, and ν the signal optical frequency. nsp is a dimensionless parameter greater than or equal to unity called
the spontaneous noise factor. It depends on the
amplifier’s degree of inversion, and it approaches
unity (lowest possible noise) for full inversion of
the active ion population. The ASE is a broadband
noise generated at all frequencies where the amplifier supplies gain, and its bandwidth is nominally
the same as that of the amplifier gain. The ASE
power coming out of the amplifier, concomitantly
with the amplified signals, is obtained by the integration of γASE over the frequency bandwidth of the
gain. As an example, in a particular C-band EDFA
amplifying ten signals equally spaced between
1531 and 1558 nm and with a power of 1 μW each,
and with a peak gain of 33 dB at 1531 nm, the total
power in the amplified signals is 5.5 mW, whereas
the total ASE output power is 0.75 mW, i.e., more
than 10% of the signals’ power.
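The ASE spectral density above can be turned into a total ASE power by integrating over the gain bandwidth. The sketch below assumes, purely for illustration, a flat gain over the band and a spontaneous noise factor nsp = 1.5; a real EDFA has a strongly wavelength-dependent gain, so this reproduces the order of magnitude of the example rather than its exact 0.75 mW:

```python
# Total ASE power from gamma_ASE = n_sp * h * v * (G - 1), integrated
# over an assumed flat gain bandwidth (both polarization modes).
H = 6.626e-34   # Planck constant, J*s
C = 3e8         # speed of light, m/s

def ase_power_w(gain_db: float, n_sp: float,
                lam_nm: float, bw_nm: float) -> float:
    g = 10 ** (gain_db / 10)
    v = C / (lam_nm * 1e-9)                          # optical frequency, Hz
    bw_hz = C * (bw_nm * 1e-9) / (lam_nm * 1e-9) ** 2
    gamma_ase = n_sp * H * v * (g - 1)               # W/Hz, one polarization
    return 2 * gamma_ase * bw_hz

# assumed: 30 dB flat gain, n_sp = 1.5, ~35 nm of C-band around 1545 nm
print(ase_power_w(30, 1.5, 1545, 35) * 1e3, "mW")    # ~1.7 mW
```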
The photodetectors used in receivers are so-called quadratic detectors, i.e., they respond to the
square of the optical field. Detection of an optical
signal S corrupted by additive noise N (ASE noise
in the case of amplified systems) in a photodetector thus gives a signal proportional to |S + N|2.
Expansion of this signal gives rise to the signal |S|2
(the useful signal) plus two noise terms. The first
term (2SN) is the beat noise between the signal and
the ASE frequency component at the signal frequency; it is called the signal–ASE beat noise. The
second term (N2) is the beat noise between each
frequency component of the ASE with itself (the
ASE–ASE beat noise). The signal–ASE beat noise
varies from channel to channel, but the ASE–ASE
beat noise is the same for all channels. The third and fourth noise terms are of course the shot noise of the amplified signal and the shot noise of the ASE, and to these must be added a fifth term, namely
the receiver noise discussed earlier. In high-gain
amplifiers with low input signals, which are applicable to most in-line amplifiers in communication
links, the dominant amplifier noise term is the
signal–ASE beat noise. In the amplifier example
given at the end of the previous paragraph, the
SNR degradation (also known as the NF) is ~3.4 dB
for all 10 signals, and it is due almost entirely to
signal–ASE beat noise. This noise term is typically
large compared to the receiver noise, which can
usually be neglected. Note that the NF is defined
as the SNR degradation of a shot-noise-limited
input signal. The SNR degradation at the output
of an amplifier is therefore equal to the NF only
when the input signal is shot-noise limited. In a
chain of amplifiers, this is true for the first amplifier
that the signal traverses. However, after traveling
through several amplifiers, the signal is no longer
shot-noise limited but dominated by signal–ASE
beat noise, and the SNR degradation is smaller
than the NF. Refer to the “Accumulation of Noise” section for further detail on noise accumulation in amplifier chains.
As mentioned earlier, the photoreceiver thermal
noise is generally fairly large compared to shot
noise. However, it can become negligible when the
signals are amplified with a preamplifier placed
before the detector. To justify this statement, consider a receiver consisting of an optical preamplifier
of gain G followed by a photodetector. The optical
signal and ASE noise powers at the receiver input
are proportional to G and G − 1, respectively (in
practice, G is very large and G − 1 ≈ G). Because the
thermal noise does not depend on G, and because
it is typically 15–20 dB worse than the quantum
limit, it is clear that if the preamplifier gain is large
enough, say 20 dB, the thermal noise is negligible
compared to the signal–ASE beat noise. This is
exactly the same phenomenon as in electronics,
where the high-gain first stage of a receiver masks
the noise of the following stages. This property
illustrates another advantage brought by optical
amplifiers: optical preamplifiers make it possible to get around the relatively poor NF of electronic circuits and thus to achieve much better performance.
RELATIONSHIP BETWEEN BIT ERROR RATE AND OSNR
How does the error rate at the receiver depend
on the noise level, or more exactly on the optical signal-to-noise ratio (OSNR), of the detected
signal? To answer this question, we must make
some assumptions regarding both the signal and
the noise. First, because the signal–ASE beat noise
depends on the signal power, it also depends on the
state of the signal, i.e., on the transmitted data. If
we assume an ideal on–off keying (OOK) modulation, signal–ASE beat noise is present only when
the signal is on, whereas the ASE–ASE beat noise
is present even in the absence of signal. Because
the data can be assumed to be equally often on
and off, the mean signal power is equal to half the
peak power.
Second, to obtain an analytical expression for
the bit error probability requires another assumption, common in communication theory, which is
that the noise has Gaussian statistics. This is true
for signal–ASE beat noise, as a result of the linear
processing of Gaussian processes, but it is not true
for ASE–ASE beat noise. However, under normal
operating conditions of amplified systems (i.e.,
with a sufficiently high OSNR), the influence of
ASE–ASE beat noise remains relatively small. After
a large number of optical amplifiers, however, the
ASE–ASE beat noise component, which depends
on the total ASE noise, can become significant.
An effective way to reduce this noise component
is then to place before the receiver an optical filter
that cuts down the ASE power between the optical signals. This can be accomplished with a comb
filter or with the demultiplexer that separates the
channels. Such a filter reduces the ASE–ASE beat
noise, but of course it does not attenuate either the
signals or the ASE at the signals’ frequencies, so it
does not affect the signal–ASE beat noise. In the
following, we assume that such a filter, with a rectangular transmission spectrum of optical bandwidth Ba, is placed before the receiver.
Third, because the noise variance is not the
same conditionally to the transmitted data, the
best decision threshold is not just at equal distance
between the two signal levels associated with the
two possible data values at the sampling time, but
rather some other optimum threshold value that
depends on signal power. Assuming that this optimum threshold value is used, the bit error probability (or BER) can be expressed as [30]:
\[ P_{\mathrm{exact}} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\sqrt{2}\,R}{\sqrt{m} + \sqrt{m + 4R}}\right) \tag{1.2} \]
where erfc is the complementary error function and the SNR R is the ratio of the mean signal power to the ASE power within the electrical
bandwidth B, i.e., γASEB. The electrical bandwidth
B is the bandwidth of the electronic post-detection
circuits. The parameter m is the normalized optical filter bandwidth, m = Ba/B. Equation 1.2 can be
easily derived by computing the variances of the
signal–ASE beat noise and ASE–ASE beat noise
contributions. For the computation of the first
term, the average power of the signal is used. An
ideal rectangular optical filter is assumed, as well
as a rectangular electrical filter.
The SNR is usually measured not within the signal bandwidth, but over a much larger bandwidth
B0, corresponding generally to 0.1 nm in wavelength (or 12.5 GHz near 1550 nm). Calling this
parameter the optical SNR R0, the bit error probability can be rewritten as
\[ P_{\mathrm{exact}} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{Q}{\sqrt{2}}\right) \tag{1.3} \]
where Q is the quality factor:
\[ Q = \frac{2R_0}{\sqrt{\mu} + \sqrt{\mu + 4\beta R_0}} \tag{1.4} \]
β is the ratio B/B0 of the electrical to measurement bandwidths, R0 = βR and μ = mβ². An error probability of 10−9 (resp. 10−15) requires, for example, Q = 6 (resp. 8). Neglecting the ASE–ASE beat noise contribution (μ → 0), the BER can be simply expressed as
\[ P_{\mathrm{exact}} = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{R_0}{2\beta}}\right) \tag{1.5} \]
From Equation 1.4, the value of R0 needed to achieve a given quality factor Q0 is given by
\[ R_0 = \beta Q_0^2\left(1 + \frac{\sqrt{\mu}}{\beta Q_0}\right) = \beta Q_0^2 + Q_0\sqrt{\mu} \tag{1.6} \]
As an example, consider a 10 Gbit·s−1 system with
an optical bandwidth Ba of 50 GHz, an electrical
bandwidth B of ~7 GHz (as a rule of thumb, the
electrical bandwidth is taken to be 70% of the bit
rate), and a filter bandwidth B0 of ~12.4 GHz (0.1 nm). Then β = 0.56 and m = 7, and a BER of 10−15 (Q0 = 8) requires an OSNR R0 of ~17 dB.
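The chain from OSNR to BER in Equations 1.3 through 1.6 (as reconstructed above) is easy to evaluate numerically; the sketch below reproduces the ~17 dB figure of the example (function names are ours):

```python
import math

# BER versus optical SNR, following Equations 1.3-1.6 as reconstructed
# above (beta = B/B0, mu = m*beta^2).
def q_factor(r0: float, beta: float, mu: float) -> float:
    return 2 * r0 / (math.sqrt(mu) + math.sqrt(mu + 4 * beta * r0))

def ber(r0: float, beta: float, mu: float) -> float:
    return 0.5 * math.erfc(q_factor(r0, beta, mu) / math.sqrt(2))

def required_r0(q0: float, beta: float, mu: float) -> float:
    """Invert Equation 1.4: R0 = beta*Q0^2 + Q0*sqrt(mu) (Equation 1.6)."""
    return beta * q0 ** 2 + q0 * math.sqrt(mu)

# The 10 Gbit/s example: B ~ 7 GHz, B0 ~ 12.4 GHz, Ba = 50 GHz
beta = 7 / 12.4
mu = (50 / 7) * beta ** 2
r0 = required_r0(8.0, beta, mu)          # Q0 = 8 <-> BER ~ 1e-15
print(10 * math.log10(r0))               # ~17 dB, as quoted
print(ber(r0, beta, mu))                 # ~1e-15, consistency check
```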
Based on the various degradation mechanisms
that induce power penalty along the transmission system, equipment vendors specify a minimum SNR required by a given system. Typically,
for WDM 10G systems, the minimum OSNR is in
the range of 21–22 dB. It must be noted that the
required OSNR increases with increasing electrical bandwidth, which is proportional to the bit
rate. For instance, by going from 2.5 to 10 Gbit·s−1
the OSNR needs to be increased by about 6 dB (see
Equation 1.6). This is a very important constraint,
for system designers as well as operators.
ACCUMULATION OF NOISE
The accumulation of noise generated by successive amplifiers along an optical line degrades the OSNR. This accumulated noise limits the number of successive amplifiers that can be used, and thus the optical reach. Assuming a link with N equally spaced amplifiers of mean output power per channel P0, an inversion parameter nsp, and a gain G compensating exactly for the attenuation of the fiber span between them, the total noise power Pn (including both polarization modes) within a bandwidth B0 at the last amplifier output is given by [29]:
\[ P_n = 2N n_{\mathrm{sp}}(G-1)\,h\nu B_0 = 2 n_{\mathrm{sp}}\,\frac{\alpha L}{\ln G}\,\bigl(\exp(\alpha Z_a) - 1\bigr)\,h\nu B_0 \tag{1.7} \]
where L is the total length of the link, Za the length of the amplification span, and α the attenuation factor of the fiber. The SNR R is
\[ R(N) = \frac{P_0}{P_n} = \frac{P_0}{2N n_{\mathrm{sp}}(G-1)\,h\nu B_0} \tag{1.8} \]
or in dB:
\[ \mathrm{OSNR\ (dB)} = P_0\ (\mathrm{dBm}) - P_n\ (\mathrm{dBm}) \tag{1.9} \]
These relationships show that for a given launched
power and a given amplification span, the maximum transmission distance, represented by the
maximum number of spans, is limited by the
minimum required OSNR at the receiver input.
For a given distance L, the OSNR increases when
G decreases, i.e., when the amplification span
becomes shorter. As an example, Equation 1.7
shows that for a link with a fixed length L and a fiber
attenuation of 0.2 dB·km−1, an OSNR improvement
of 7 dB is achieved when the span length is reduced
from Za = 100 km (a span loss of 20 dB) to Za = 50 km
(a span loss of 10 dB). The link with the 50-km
span length does require twice as many amplifiers,
but the gain they each need to supply is reduced
by 10 dB. So is their output noise power, and as a
result the OSNR is improved. This illustrates how
important a parameter the fiber attenuation is. For
a given amplification span and a given number of
spans, any reduction in this attenuation will result
in a better OSNR simply because the amplifier gain
G will be smaller. Equivalently, a lower fiber loss
allows increasing the optical reach. For example, a
fiber loss reduction as small as 0.02 dB·km−1 from
0.23 to 0.21 dB·km−1, in 100-km spans, will allow
to reduce the gain by 2 dB and thus to improve the
OSNR by as much as 2 dB. If with the 0.23 dB·km−1 fiber the required OSNR was reached after eight spans, with the 0.21 dB·km−1 fiber it will be possible to have 12 spans (10 log(12/8) = 1.7 dB), the same noise power being then produced by a larger number of less noisy amplifiers. This dependence of the OSNR on the span loss explains why the amplification span is significantly shorter in undersea lightwave systems than in terrestrial ones, because transmission distances are much longer.
[Figure: signal power per channel (dBm) versus number of spans, with curves for span losses of 28, 25, 20, and 12.5 dB.]
Figure 1.2 Output signal power per channel required to achieve an OSNR of 20 dB as a function of the number of spans for different span losses.
Figure 1.2 shows the output signal power per channel (calculated from Equation 1.8) needed to reach an
OSNR of 20 dB as a function of the number of spans
for different span losses. The inversion parameter of
the optical amplifiers is taken to be nsp = 1.6, and the
output ASE is assumed to be filtered with a 0.1-nm
narrowband filter (B0 = 12.5 GHz). These curves also
allow comparing the power required for achieving
a given optical reach. For instance, for a link length
L = 1000 km and a fiber attenuation α = 0.25 dB km−1,
if there are N = 20 spans, each one of them will have
a loss αL/N = 12.5 dB, and Figure 1.2 shows that the
required power per channel will be −7.5 dBm. When
the number of spans is divided by two (N = 10), the
span loss increases to 25 dB and the required signal
power jumps nearly 10-fold, to 2 dBm.
1.4.2 Signal distortions induced by propagation
In addition to SNR degradation due to optical
amplifiers, transmitted optical signals also suffer
distortions induced by propagation along the fiber.
These effects become more and more important as
the bit rate increases.
CHROMATIC DISPERSION
Within a narrow bandwidth around the carrier angular frequency ω0, a fiber of length L
can be viewed as an all-pass linear filter, with an
attenuation nearly independent of wavelength
over the small signal bandwidth prevailing at
the bit rates under consideration. Expanding the
phase up to the second order in frequency ω about ω0 allows one to write the transfer function H(ω) of this filter as
\[ H(\omega) = A \exp\!\left\{ i\left[\Phi(\omega_0) + \beta_1 L(\omega - \omega_0) + \tfrac{1}{2}\beta_2 L(\omega - \omega_0)^2\right]\right\} \tag{1.10} \]
Higher order terms in the expansion must be considered when, for instance, β2 equals zero. The first
and second terms of the exponent represent a constant phase shift and the delay of the impulse (also
called group delay), respectively. The third term,
proportional to the second derivative of the signal
mode index with respect to the wavelength, originates physically from the dependence of the mode
group velocity on wavelength. It is often referred
to as GVD or chromatic dispersion. In an optical
waveguide such as a fiber, GVD is approximately
the sum of the material dispersion and the waveguide dispersion. It is mathematically represented by β2
usually expressed in ps2·km−1 or by the so-called
chromatic dispersion D, which is a more familiar
parameter to system designers, expressed in ps
nm−1·km−1. It is the group delay variation over a
1-nm bandwidth after propagation along a 1-km
length of fiber. In a standard communication
fiber, D is around 15 ps·nm−1·km−1. D (expressed in
ps·nm−1·km−1) is related to β2 (in ps2·km−1) and λ (in
nm) by
\[ D(\lambda) = -\frac{6\pi \times 10^5}{\lambda^2}\,\beta_2(\lambda) \tag{1.11} \]
Chromatic dispersion results in broadening of the
signal as it propagates through the fiber. When the
signal pulse amplitude is Gaussian, the pulse width
evolution along the fiber can be computed analytically and pulse broadening can be expressed with
simple expressions [1]. If a Gaussian pulse with
a complex impulse envelope u(t, 0) of temporal
width θ0,
\[ u(t,0) = U_0 \exp\!\left(-\frac{t^2}{2\theta_0^2}\right) \tag{1.12} \]
is launched into the fiber at z = 0, the impulse at
distance L is given by [1]:
\[ u(t,L) = U \exp\!\left(-\frac{T^2}{2\theta(x)^2} + i\Psi(x,T)\right) \tag{1.13} \]
where U is the amplitude taking into account the fiber attenuation, T = t − β1L the time in the local coordinates associated with the signal, and Ψ(x, T) a phase term. The parameter θ(x) is the temporal pulse
width at distance L, given by
\[ \theta(x) = \theta_0\sqrt{1 + x^2} \tag{1.14} \]
where x = L/LD is the propagation distance normalized to the characteristic dispersion length
LD = θ0²/|β2|.
As expected from physical arguments, in the
presence of chromatic dispersion the pulse width
expands along the fiber in much the same way as
a spatial beam expands in space due to diffraction
(see Equation 1.14). Here the dispersion length
LD plays the same role as the Rayleigh range does
in the diffraction of Gaussian beams. For a fiber
of length L and dispersion coefficient β2, there is
an optimum value of the incident pulse width θ0
that minimizes the pulse width at the fiber output.
This optimum pulse width, obtained by setting the
derivative of Equation 1.14 with respect to θ0 equal
to zero, is given by
\[ \theta_{0,\mathrm{opt}} = \sqrt{|\beta_2|\,L} \tag{1.15} \]
and the output pulse width is
\[ \theta(L) = \sqrt{2}\,\theta_{0,\mathrm{opt}} = \sqrt{2\,|\beta_2|\,L} \tag{1.16} \]
Stated differently, Equation 1.15 shows that for
a given input pulse width θ0 the optimum fiber
length that minimizes the output temporal pulse
width is L = θ0²/|β2| = LD, i.e., one dispersion length.
This analysis shows that the larger the chromatic dispersion is, the narrower the initial pulse
needs to be, and the larger the pulse width will
be at the output of the fiber. It implies that if the
input pulse width is not properly selected, i.e., if it
is either too narrow or too wide, chromatic dispersion will cause successive pulses to overlap, which
creates what is known as intersymbol interference
(ISI). This deleterious effect alters the decision process and thus increases the BER. It must be noted
that a Gaussian pulse extends indefinitely in time
and there is theoretically always a finite amount of
ISI; the maximum distance is thus set by a “tolerable” level of ISI. This distance is exactly LD if we define the acceptable ISI as reached when the initial width of the pulse has been multiplied by √2 (which is somewhat arbitrary). As an example,
assuming Gaussian pulses with θ0 equal to half the symbol duration, for a standard communication fiber (D = 17 ps·nm−1·km−1, i.e., |β2| ≈ 20 ps²·km−1)
this distance is equal to 2000 km at 2.5 Gbit·s−1, but
only 125 km at 10 Gbit·s−1.
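Both distances follow directly from the dispersion length; a sketch (our own helper, with θ0 taken as half the symbol duration, as in the text):

```python
# Dispersion-limited distance L_D = theta0^2/|beta2| (Equation 1.14 ff.),
# with theta0 equal to half the symbol duration.
def dispersion_length_km(bit_rate_gbps: float, beta2_ps2_km: float) -> float:
    theta0_ps = 0.5 * 1000.0 / bit_rate_gbps   # half the symbol time, ps
    return theta0_ps ** 2 / abs(beta2_ps2_km)

print(dispersion_length_km(2.5, 20))   # 2000 km
print(dispersion_length_km(10, 20))    # 125 km
```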
This result demonstrates that propagation
at bit rates of 10 Gbit·s−1 or greater in a standard
fiber is not possible over distances longer than a
few tens of kilometers without significant ISI. This
problem is circumvented in practice by introducing along the transmission line components with
a negative dispersion coefficient to compensate for
chromatic dispersion, in much the same way as an
optical lens is used to refocus a free-space beam
after it has expanded as a result of diffraction. This
method has been demonstrated in the laboratory
with a number of optical filters, especially fiber
Bragg gratings for dynamic compensation [19],
or more simply and commonly with a length of
dispersion-compensating fiber (DCF) designed to
exhibit a strong negative dispersion coefficient D
[17], in the range of −90 ps·nm−1·km−1 for standard
fiber. This last solution is the only one used in commercial systems today, and fiber suppliers try to
develop the best compensation fiber matched to the
fiber they sell. DCFs are typically more lossy than
standard communication fibers, with attenuation
coefficients around 0.5 dB·km−1. To make up for
this additional loss, the DCF is typically inserted
near an amplifier. In order to reduce the impact of
the DCF loss on the amplifier NF, in WDM systems the DCF is usually placed in the middle of a
two-stage amplifier.
Chromatic dispersion depends on wavelength.
This dependence is characterized by the dispersion slope (expressed in ps·nm−2·km−1). In order to
completely compensate for the dispersion at any
wavelength, the fiber and the associated DCF must
exhibit the same D/S ratio, where S is the slope of
the dispersion D. The existence of a perfectly slope-compensating DCF depends strongly on the type
of fiber. For example, a DCF very well matched to
standard single-mode fiber (SSMF) is available,
but this is not true for all fibers. If the slope is not
matched, some channels will exhibit a finite residual chromatic dispersion outside the “acceptance
window,” i.e., the interval within which the dispersion must lie to ensure a correct transmission quality. This window is typically 1000 ps·nm−1 wide for
a WDM 10G system. In general, unless carefully
designed a dispersion-compensation filter does not
cancel dispersion perfectly for all channels. So even
after correcting dispersion to first order, in long-haul WDM systems the residual dispersion can
still limit the transmission length and/or the number of channels. To illustrate the magnitude of this
effect, consider a link of length L carrying N channels spaced by Δλ (i.e., NΔλ is the total multiplexed
width), with a dispersion slope after compensation
S. The cumulated dispersion at the output of the
link is then SLNΔλ. The receiver can be designed
to tolerate a certain amount of residual cumulated dispersion within some spectral window, for
instance, typically 1000 ps·nm−1 for a 10 Gbit·s−1
system, as stated earlier. For a typical dispersion
slope S = 0.08 ps·nm−2·km−1, N = 64, and Δλ = 0.8 nm
(or a multiplexed width of 51.2 nm), the maximum
possible fiber length for which the cumulated dispersion reaches 1000 ps·nm−1 is 244 km. This effect
can of course be avoided by reducing the length
or the number of channels, which impacts system
performance. Better solutions include designing
broadband dispersion-compensation filters with a
dispersion curve matched to that of the fiber link.
This is a key issue for WDM systems, which has received a lot of attention from system designers.
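The slope-limited reach just described reduces to a single division; a sketch using the text's numbers (helper name ours):

```python
# Maximum link length before the slope-induced cumulated dispersion
# S*L*N*dlam of the edge channel fills the receiver acceptance window.
def slope_limited_length_km(window_ps_nm: float, slope_ps_nm2_km: float,
                            n_channels: int, spacing_nm: float) -> float:
    return window_ps_nm / (slope_ps_nm2_km * n_channels * spacing_nm)

print(slope_limited_length_km(1000, 0.08, 64, 0.8))   # ~244 km, as above
```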
NONLINEAR EFFECTS
The maximum power that can be transmitted
through an optical fiber, and thus the SNR of the
signal at the fiber output, are ultimately limited by
a number of optical nonlinearities present in the
optical fiber. These nonlinear effects are the Kerr
effect (dependence of the fiber refractive index
on the signal intensity), SBS (conversion of signal
power into a frequency-shifted backward wave),
SRS (conversion of signal power into a forward
frequency-shifted wave), and FWM (optical mixing of a signal with itself or other signals and concomitant generation of spurious frequencies). The
magnitude of these effects generally increases with
increasing signal intensity, which can be relatively
high in a single-mode fiber, even at low power,
because of the fiber’s large optical confinement. For
example, in a typical single-mode fiber at 1.55 μm
with an effective mode area of 80 μm2, a 20-dBm
signal has an intensity of ∼1.2 kW·mm−2! Because
this intensity is sustained over very long lengths,
and because the conversion efficiency of nonlinear effects generally increases with length, even
the comparatively weak nonlinear effects present
in silica-based fibers can have a substantial impact
on system performance, even at low power. This
section provides background on the magnitude of
these nonlinear effects, describes their impact on
system performance, and mentions typical means
of reducing them.
SELF-PHASE MODULATION
When a signal propagates through an optical fiber,
through the Kerr effect it causes a change Δn in the
refractive index of the fiber material. In turn, this
modification of the medium property reacts on the
signal by changing its velocity and thus its phase.
This nonlinear effect is known as self-phase modulation (SPM). For a signal of power P, the index
perturbation Δn is expressed as
\[ \Delta n = n_2\,\frac{P}{A_{\mathrm{eff}}} \tag{1.17} \]
where n2 is the Kerr nonlinear constant of the fiber
(n2 ≈ 3.2 × 10−20 m2·W−1 for silica) [35] and Aeff the
signal mode effective area. The resulting change in
the mode propagation constant β is
\[ \Delta\beta = \frac{\omega\,\Delta n}{c} = \frac{2\pi n_2}{\lambda}\, I \tag{1.18} \]
where I = P/Aeff is the signal intensity. In the case
of a modulated signal, because the Kerr effect has
an extremely fast response time (≪1 ps), each portion of the signal pulse modulates its own phase
independently of other portions of the pulse. If I0(t)
is the instantaneous intensity, or equivalently the
intensity profile, of the signal launched into the
fiber, and if α is the fiber loss at the signal wavelength, then the signal intensity at a point z along
the fiber is I(t, z) = I0(t) exp(−αz). The amount of
SPM experienced by the signal pulse after a propagation length L is simply [1]:
\[ \Phi(t, L) = \int_0^L \Delta\beta\,\mathrm{d}z = \frac{2\pi n_2}{\lambda}\int_0^L I_0(t)\,\exp(-\alpha z)\,\mathrm{d}z = \frac{2\pi n_2}{\lambda}\, I_0(t)\, L_{\mathrm{eff}} \tag{1.19} \]
where Leff = (1 − e−αL)/α is the effective fiber length.
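A sketch of Equation 1.19 (as reconstructed above), evaluating the SPM phase over one span; the 10 mW launch power and 100 km span are assumed for illustration:

```python
import math

# SPM phase over one span: Phi = (2*pi*n2/lambda) * (P/Aeff) * L_eff.
N2 = 3.2e-20  # Kerr constant of silica, m^2/W (value given in the text)

def spm_phase_rad(p_mw: float, aeff_um2: float, lam_nm: float,
                  alpha_db_km: float, length_km: float) -> float:
    alpha_per_m = alpha_db_km / 4.343 / 1e3        # dB/km -> 1/m
    l_eff_m = (1 - math.exp(-alpha_per_m * length_km * 1e3)) / alpha_per_m
    intensity = (p_mw * 1e-3) / (aeff_um2 * 1e-12)  # W/m^2
    return 2 * math.pi * N2 / (lam_nm * 1e-9) * intensity * l_eff_m

# assumed example: 10 mW into a standard 80 um^2 fiber, 100 km span
print(spm_phase_rad(10, 80, 1550, 0.2, 100))        # ~0.35 rad
```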
In principle, because photodetectors are quadratic and thus phase insensitive, SPM should not
cause any detrimental effect. This is true in AM
systems provided the fiber is free of dispersion.
However, in practice, the presence of chromatic
dispersion converts SPM into amplitude fluctuations [1]. When β2 is positive, SPM combines with chromatic dispersion to produce pulse broadening, just like chromatic dispersion alone does. When β2 is negative, SPM combines with chromatic
dispersion to produce pulse narrowing, i.e., they
have opposite effects. In this case, SPM can be
used to compensate for chromatic dispersion and
thus improve the system performance. There is in
fact a particular regime in which linear and nonlinear effects compensate mutually exactly at any
moment in time. This particular solution of the
nonlinear Schrödinger equation, valid only in a
lossless fiber, is called an optical soliton. Provided
that it has the proper shape and intensity, a soliton
propagates without any temporal deformation.
This phenomenon was extensively studied in the
1990s, and it continues to be an active research
topic, because fiber-optic solitons are very promising for ultralong distance transmission at high
bit rates [26,43,44]. A soliton-based transmission encodes the information in extremely short
pulses that neither spread nor compress as they
propagate along the fiber because the soliton has
just the right peak power for the Kerr nonlinear
phase shift to exactly compensate for chromatic
dispersion. Soliton-based communication links
are, however, not compatible with WDM-based
links because in order to have a relatively low
peak power, a soliton needs a low-dispersion fiber,
which is not well suited for WDM (see “Four-Wave Mixing (FWM)” section). The WDM solution has obviously won so far, even for long-haul
transmission. But the concept of soliton-based
communication systems remains an interesting
and promising approach that continues to stimulate a lot of research and development.
In parallel to these various schemes used to
combat SPM, the most effective first-order solution
to reduce SPM is to use a transmission fiber with
a large mode effective area Aeff. This is of course
also applicable to other undesirable fiber nonlinear effects, in particular cross-phase modulation
(XPM), FWM, and stimulated scattering processes. The reason is that the magnitude of all of
these processes increases as the reciprocal of Aeff,
so a fiber with a higher Aeff can tolerate a higher
signal power. Large mode effective areas are typically accomplished by designing fibers with a
larger core and a concomitantly lower numerical
aperture to ensure that the fiber carries a single
mode. Communication-grade fibers have mode
effective areas in the range of 50–100 μm2, for
example, 80 μm2 for the so-called standard fiber.
Substantially higher values are typically precluded
for transmission fibers because they require such
low numerical apertures that the fiber becomes
overly susceptible to bending loss. CROSS-PHASE MODULATION
XPM has the same physical origin as SPM, namely
the Kerr effect, except that the phase modulation is
not induced by a signal on itself, but by one or more different signals propagating through the fiber. A
different signal means any signal with a different
wavelength, a different polarization, and/or a different propagation direction. In a WDM system,
the phase of a signal of wavelength λi is therefore
modulated by itself (wavelength λi, SPM) and by all
the other channels (wavelengths λj≠i, XPM).
The XPM affecting a particular channel i of a
WDM system depends on the power (and therefore
on the data) and wavelength of all other channels
j ≠ i. As in the case of SPM, XPM is converted into
amplitude fluctuations through chromatic dispersion. However, the main detrimental effect of
XPM is time jitter, due to the fact that the other
signals also change the group delay of channel
i. The position of the impulses is thus changed
randomly around an average position, and sampling before decision does not always occur at the same instant within the pulse, which causes a BER
penalty. If we consider the case of one interfering channel, interaction occurs when two pulses
overlap. Because they propagate at different speeds
(the group velocities at the wavelengths of the two
channels), the interaction begins when the fastest
impulse starts to overlap with the slowest pulse
and ends when it has completely passed it. This
phenomenon is called a collision. After one collision between symmetrical pulses, there is theoretically no memory on the perturbed pulse. The
problem occurs in the case of an incomplete collision, for instance, when it begins just before an
amplifier and then the powers change during the
collision. In this case, the affected pulse keeps the
memory through a shifted temporal position. A
key parameter to characterize this effect is the difference in group velocity between the two channels, which is equal to DΔλ, where D is the dispersion and Δλ the channel spacing. If this parameter
is high, the effect will be smaller, because collisions
will be very rapid. Increasing the channel spacing
will then reduce the interaction because the difference between group velocities is larger. The influence of chromatic dispersion is more complex. A
higher dispersion reduces channel interaction and
thus phase modulation, but as discussed earlier it
also increases conversion into amplitude fluctuations. Further details can be found in Section 1.5.
FOUR-WAVE MIXING
FWM is another nonlinear process that results
directly from the Kerr effect. Channels of a WDM
system beat together in the fiber, giving rise to
intermodulated sidebands at frequencies that are
sums and differences of the channels’ frequencies. Each of these sidebands is modulated with the information encoded on the channels that gave
rise to it. When a sideband frequency happens to
fall on or close to one of the channel frequencies,
this channel becomes modulated with unwanted
information from other channels. This intermodulation has the same undesirable side effects as similar effects well known in radio systems.
As an illustration, consider a communication
system utilizing channels that are equally spaced in
frequency, which is usual in deployed systems, i.e.,
the channel frequencies are f0 + mΔf, where m is an
integer. The third-order beating between channels
0, 1, and 2 at respective frequencies f0, f1 = f0 + Δf,
and f2 = f0 + 2Δf produces sideband signals at frequencies pf0 + qf1 + rf2, where |p| + |q| + |r| = 3. In
particular, an intermodulated sideband is generated at frequency f0 + Δf by the interaction of the three channels together (p = 1, q = −1, r = 1), while the interaction of channels 0 and 1 only (p = −1, q = 2) generates a sideband at f0 + 2Δf, on top of channel 2. The first sideband has the same frequency as channel 1 and thus adds to channel 1 data modulation from channels 0 and 2. The same argument applied to
other channels clearly shows that if the interaction
is strong enough, every channel becomes contaminated with information from all other channels.
The magnitude of FWM effects can be characterized by the power in the intermodulation
sideband Pintermod. This power can be calculated
analytically for pure unmodulated waves, in which
case it is given by [15,56]:
\[ P_{\mathrm{intermod}} = \eta_{\mathrm{FWM}}\, d^2 \gamma^2 P^3 L_{\mathrm{eff}}^2 \exp(-2\alpha L) \tag{1.20} \]
where γ = 2πn2/(λAeff) represents the strength of the Kerr nonlinearity in the fiber, P is the power per channel, assumed the same for all channels, and d is a constant equal to 6 if all channels are distinct and 3 if they are not. The factor ηFWM is the FWM efficiency, defined as
\[ \eta_{\mathrm{FWM}} = \frac{\alpha^2}{\alpha^2 + \Delta\beta_{\mathrm{FWM}}^2}\left[1 + \frac{4\exp(-\alpha L)\,\sin^2(\Delta\beta_{\mathrm{FWM}} L/2)}{\bigl(1 - \exp(-\alpha L)\bigr)^2}\right] \tag{1.21} \]
where ΔβFWM is the phase mismatch between interacting waves, which depends on chromatic dispersion coefficient D, on its slope, and on the channel
spacing Δf according to
\[ \Delta\beta_{\mathrm{FWM}} = \frac{2\pi\lambda^2}{c}\,\Delta f^2\left[D + \Delta f\,\frac{\lambda^2}{c}\,\frac{\partial D}{\partial \lambda}\right] \tag{1.22} \]
In the usual case where the total attenuation of the span is high enough (exp(−αL) ≪ 1), the efficiency (Equation 1.21) is well approximated by
\[ \eta_{\mathrm{FWM}} = \frac{\alpha^2}{\alpha^2 + \Delta\beta_{\mathrm{FWM}}^2} \tag{1.23} \]
FWM is a phase-matched process: for energy to
flow effectively from one channel to another, the
channels must remain in phase, i.e., the phase
mismatch ΔβFWM must be small. It means that the
closer the channel frequencies are (small Δf ), the
more efficient FWM is, as indicated mathematically by Equations 1.21 and 1.22. This explains why
the intermodulation power increases with decreasing channel spacing. Chromatic dispersion plays a
beneficial role by increasing the phase mismatch
between channels and thus reducing the FWM
efficiency, as shown by Equation 1.22. The intermodulation power also increases with increasing
channel power, and it does so rapidly (as the third
power in P) because FWM is a nonlinear process.
Figure 1.3 shows the effect of both dispersion
and channel spacing on the interference-to-carrier ratio, i.e., the difference between the channel
power and the intermodulation product power.
This quantity is plotted versus channel spacing
for four values of the dispersion typical for channels located near the zero-dispersion wavelength.
This figure simulates a fiber link with a length
L = 100 km, a fiber attenuation of 0.2 dB·km−1, a
dispersion slope of 0.08 ps·nm−2·km−1, a nonlinear
coefficient γ = 3 W−1·km−1, and a launched power per
channel of 4 dBm. It is clear that a higher dispersion reduces FWM and thus allows a better utilization of the available bandwidth. For example, if a
ratio of −60 dB is required, Figure 1.3 shows that
this can be accomplished with a 100-GHz channel
spacing in a standard fiber (D = 17 ps·nm−1·km−1),
but only with a spacing of 210 GHz or more in a typical nonzero-dispersion-shifted fiber (NZDSF, family G.655)
with a chromatic dispersion of 3 ps/(nm·km).
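Equations 1.21 through 1.23 (as reconstructed above) can be evaluated directly; the sketch below shows how the efficiency collapses as the dispersion grows, for a 100 GHz spacing and the fiber parameters used for Figure 1.3 (unit conversions are spelled out in comments):

```python
import math

# FWM phase mismatch (Equation 1.22) and efficiency (Equation 1.21);
# everything is converted to SI internally.
C = 3e8  # m/s

def fwm_efficiency(d_ps_nm_km: float, slope_ps_nm2_km: float, df_ghz: float,
                   alpha_db_km: float = 0.2, length_km: float = 100.0,
                   lam_nm: float = 1550.0) -> float:
    lam = lam_nm * 1e-9
    d = d_ps_nm_km * 1e-6          # ps/(nm km) -> s/m^2
    s = slope_ps_nm2_km * 1e3      # ps/(nm^2 km) -> s/m^3
    df = df_ghz * 1e9
    dbeta = (2 * math.pi * lam**2 / C) * df**2 * (d + df * (lam**2 / C) * s)
    alpha = alpha_db_km / 4.343 / 1e3      # dB/km -> 1/m
    length = length_km * 1e3
    osc = 4 * math.exp(-alpha * length) * math.sin(dbeta * length / 2) ** 2
    return alpha**2 / (alpha**2 + dbeta**2) * \
        (1 + osc / (1 - math.exp(-alpha * length)) ** 2)

for d in (0.0, 3.0, 8.0, 17.0):   # the dispersions of Figure 1.3, 100 GHz spacing
    print(d, fwm_efficiency(d, 0.08, 100.0))
```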
In single-channel transmission, a low dispersion is beneficial because it reduces the amount of
pulse spreading induced by (1) dispersion and (2)
SPM combined with dispersion, and thus it reduces
the amount of dispersion compensation needed to
correct for these effects. In multichannel transmission, the situation is not as simple because dispersion now brings protection against interchannel
effects, XPM and FWM. But the situation depends
strongly on the channel spacing: for WDM 10G systems with a typical channel spacing of 100 GHz or
less, interchannel effects are dominant compared
to intrachannel effects (SPM). This is the reason
why a dispersion-shifted fiber (DSF) with zero dispersion around 1550 nm is much worse for WDM
transmission than a standard G.652 fiber, and also
why the latter allows the smallest channel spacing at this bit rate (25 GHz). When higher bit rates
are considered, the channel spacing cannot be
reduced so much due to the spectral width of the
modulated signals, and then intrachannel effects cannot be neglected compared to interchannel effects.
[Figure: interference-to-carrier ratio (dB) versus channel spacing (GHz), with curves for D = 0, 3, 8, and 17 ps/(nm·km).]
Figure 1.3 Dependence of the interference-to-carrier ratio due to FWM on the channel spacing.
STIMULATED BRILLOUIN SCATTERING
SBS belongs to the family of parametric amplification processes. Through interaction between the
optical signal and acoustic phonons, it causes power
conversion from the signal into a counterpropagating signal shifted in frequency by the acoustic
phonon frequency [1]. The power in the SBS signal
grows as exp[(gB Pp − α)z], where gB is the SBS gain,
in W−1·km−1, which depends on the wavelength
separation between the two signals, and α is the
attenuation of the medium. SBS is a narrowband
process. In a silica fiber, the Brillouin frequency
shift at 1.55 μm is νB ≈ 11 GHz and the gain bandwidth is only ΔνB ≈ 100 MHz. It is customary to characterize SBS by its power threshold, i.e., the
power required to compensate for the medium
attenuation and thus just begin to provide a positive gain. For an unmodulated signal with a line
width smaller than or equal to the SBS gain bandwidth,
the SBS threshold in a typical 1.5-μm fiber (mode
effective area of 50 μm2) is around 21 mW·km, i.e.,
21 mW in a 1-km fiber and 2.1 mW in a 10-km fiber
[1]. For powers larger than the threshold, a fraction or all of the signal is converted into the backward SBS signal. It is therefore essential to keep the
power in each signal below the SBS threshold, and
because this threshold is fairly low, SBS limits the
power of a narrowband signal that can be transported over a given distance. This is particularly
critical in repeaterless systems [25].
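The quoted 21 mW·km scaling ignores attenuation; a sketch that refines it with the effective length Leff (our own refinement, an assumption, not from the text) shows the threshold saturating for long fibers:

```python
import math

# SBS threshold versus fiber length. The text's 21 mW*km rule assumes
# the threshold scales as 1/L; replacing L by the effective length
# L_eff = (1 - exp(-alpha*L))/alpha (assumed refinement) makes the
# threshold saturate for long fibers.
SBS_CONSTANT_MW_KM = 21.0   # from the text (50 um^2 fiber, narrowband signal)

def sbs_threshold_mw(length_km: float, alpha_db_km: float = 0.2) -> float:
    alpha = alpha_db_km / 4.343                  # dB/km -> 1/km
    l_eff = (1 - math.exp(-alpha * length_km)) / alpha
    return SBS_CONSTANT_MW_KM / l_eff

print(sbs_threshold_mw(1))     # ~21 mW, as quoted
print(sbs_threshold_mw(10))    # ~2.6 mW (the simple 1/L rule gives 2.1 mW)
print(sbs_threshold_mw(100))   # saturates near 1 mW
```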
Several solutions have been demonstrated and are
routinely applied to increase the Brillouin threshold
and thus increase the power and/or distance over which signals can be transported. When the signal amplitude is modulated at bit rates higher than
~100 MHz, as is the case in WDM systems, the signal
bandwidth exceeds the Brillouin line width
and SBS is reduced. For this reason, SBS is generally not a concern in WDM systems. One caveat is
that the carrier component of the modulated signal
retains the original line width of the unmodulated
signal, and it is still backscattered by SBS. Because
the carrier component carries only half the signal
average power, the SBS threshold is increased (by
3 dB) compared to an unmodulated narrowband
signal, but in OOK schemes using high powers SBS
acting on the carrier has been observed to induce
signal distortion [33].
Because the SBS gain decreases with increasing
carrier line width Δν as ΔνB/(ΔνB + Δν), another
solution to further increase the SBS threshold is to
use a larger carrier line width. This can be done
with a directly modulated laser (direct modulation
tends to chirp the laser frequency), by applying to
the laser either a phase modulation [25,36] or a
small amount of frequency modulation (at a frequency much lower than the bit rate) [45]. Other
techniques include using a duobinary modulation
scheme to suppress the carrier component [37],
concatenating fibers with different Brillouin shifts
to reduce the interaction length [40], and placing
isolators along the fiber to periodically suppress
the backward SBS signal [54].
STIMULATED RAMAN SCATTERING
Although caused by a different physical mechanism (interaction with vibrational modes of the
medium structure instead of acoustic phonons),
SRS can be modeled in a very similar manner, but
its characteristics are quite different and so are its
effects on transmission systems [1]. SRS is an optical process that causes power transfer between an
optical pump and a co- or counterpropagating
signal. Most solid media exhibit SRS, including
silica-based fibers. Spontaneous Raman scattering occurs when a pump photon of frequency ωp is
scattered by a host phonon of frequency Ω, which
results in the annihilation of the pump photon and
the spontaneous emission of a signal photon at a
frequency ωs = ωp − Ω. This scattering process can
also be stimulated when an incident signal photon
of frequency ωs interacts with a pump photon and
a phonon, thus yielding the emission of a stimulated photon at frequency ωs. This stimulated process thus provides what is known as Raman gain.
The SRS gain spectrum is centered around a frequency downshifted from the pump frequency
ωp by the mean phonon frequency Ω of the material. The Raman gain spectrum and bandwidth
are set by the finite-bandwidth phonon spectrum
of the material. The Raman shift of silica is typically 13 THz (or ~100 nm at 1550 nm), which is
much higher than for SBS. Similarly, the gain full
width at half maximum is larger, around 8 THz
(70 nm for a pump around 1.55 μm). However, the
Raman gain coefficient for a silica fiber is much
weaker than the SBS gain, by a factor of about 500,
so the Raman threshold is typically much higher,
for example, around 1.2 W in a 10-km length of
1.55-μm communication fiber with a 50-μm2 effective mode area [1]. Although much weaker, SRS
can still be deleterious in WDM systems because
optical channels located at the highest gain frequencies act as pumps and can be depleted, while
other channels can be amplified. In conventional
20 Optical transmission
systems using only the C-band (30 nm wide), SRS
does not occur because the maximum separation between channels is much smaller than the
Raman shift. But in systems using both the C- and
L-bands, power transfer between channels of the C- and L-bands can be induced by SRS and must be
taken into account in system design.
It must be noted that SRS is also a useful mechanism: a pump signal injected in the fiber can transfer its power via SRS to one or more signals and
thus provide amplification. This is the basic principle of fiber Raman amplifiers, which will be considered in Section 1.6.4.
POLARIZATION MODE DISPERSION
A standard single-mode optical fiber does not
actually carry a single mode but two modes with
orthogonal, nearly linear polarizations. Because
the index difference between the fiber core and the
cladding is small, these two polarization modes
have nearly degenerate propagation constants.
However, these propagation constants are not
exactly the same. In a communication link, the
signal launched into the fiber is typically linearly
polarized. The fiber exhibits random linear and
circular birefringence, and as the signal propagates through it the signal polarization evolves
through many states. Because the two orthogonal polarization modes travel at slightly different
velocities, one lags behind the other, and because the
signal is temporally modulated into short pulses,
after long-enough propagation each pulse is split
into two pulses. This produces two electrical pulses
with amplitudes that depend on the polarization
of the optical signal at the receiver, and separated
by a random delay called differential group delay
(DGD). For fibers with strong coupling between
polarization modes, DGD follows a Maxwell distribution. The mean value of DGD is called the
PMD [3,16,20,22,23].
This multipath effect causes ISI and thus
degrades the BER. Furthermore, random variations
in the birefringence of the long fiber cause the DGD
to be a random variable and thus the properties
of the transmitted signals to be time dependent.
Communication systems must then be characterized by their outage probability, or outage time, i.e.,
the probability that the BER exceeds the maximum
tolerable value, above which transmission is no
longer possible with the required quality. PMD is a
linear effect, which, just like chromatic dispersion,
acts on each channel individually but does not
cause coupling between them.
WDM systems are usually designed to tolerate a
PMD approximately equal to one-tenth of the symbol duration or around 10–12 ps for a 10 Gbit·s−1
bit rate. When this value is exceeded by a small
amount, transmission can still be sustained with
fewer channels, which allows increasing the SNR
of the remaining channels and provides a better
resistance against PMD. When PMD is too high
(for instance 20 ps or more for a WDM 10G system), distortion can cause closure of the eye diagram and increasing the power does not bring
any improvement. Currently manufactured fibers
allow 10 Gbit·s−1 transmission over several thousands of kilometers, and PMD is not a problem at
this bit rate. Recent advances in manufacturing
processes have led to fibers with low enough PMD
values for 40 Gbit·s−1 transmission over more than
2000 km.
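The 10% rule and the square-root growth of PMD with length give a quick reach estimate; a sketch (the 0.05 ps·km−1/2 coefficient is an assumed value for a modern fiber):

```python
# PMD budget (~10% of the symbol duration, as quoted in the text) and
# the reach it allows, given that PMD grows as sqrt(length).
def pmd_budget_ps(bit_rate_gbps: float) -> float:
    return 0.1 * 1e3 / bit_rate_gbps        # 10% of symbol time, in ps

def max_reach_km(bit_rate_gbps: float, pmd_coeff_ps_sqrt_km: float) -> float:
    return (pmd_budget_ps(bit_rate_gbps) / pmd_coeff_ps_sqrt_km) ** 2

print(pmd_budget_ps(10))         # 10 ps, as in the text
print(max_reach_km(40, 0.05))    # 2500 km with an assumed 0.05 ps/sqrt(km)
```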
PMD compensation has been investigated in
several laboratories, for example, using feedback
equalizers in either the optical or the electrical
domain [46,59]. The main application was the
implementation of WDM 10G systems in existing
fiber links that could originally not support this
high bit rate because the fiber exhibited a high
PMD. Although this method was successful, its
economic viability has been questioned because
it requires one equalizer per channel and its high
cost cannot be shared. PMD compensation will
certainly need to be implemented in the future
in communication systems with higher bit rates
over long distances, which have a reduced tolerance to PMD. For example, a WDM 40G system
typically requires no more than 2 or 2.5 ps of PMD.
1.5.1 Global performance of a system: BER and OSNR
As mentioned earlier, the performance of a WDM
system is expressed in terms of its BER, which is
obtained by measuring the number of error bits
occurring over a given time interval. A minimum
OSNR value is required in order to achieve the
required transmission quality, i.e., the BER needs
to be lower than a given threshold. Commercial
equipment is typically specified in terms of OSNR:
the maximum number of spans is specified for different losses per amplification span. For example,
an OSNR of 22 dB will be guaranteed for seven
spans of 25-dB loss (7 × 25 dB) or for ten spans of
23-dB loss (10 × 23 dB).
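The span-count arithmetic behind such guarantees can be sketched with the usual textbook approximation for a chain of N identical amplified spans; the 58 dBm constant corresponds to a 0.1 nm reference bandwidth at 1550 nm, while the channel power and noise figure below are assumptions for illustration.

```python
import math

def osnr_db(p_ch_dbm, span_loss_db, nf_db, n_spans):
    # OSNR ~ 58 + P_ch - L_span - NF - 10*log10(N_spans)  (dB, 0.1 nm ref.)
    return 58.0 + p_ch_dbm - span_loss_db - nf_db - 10.0 * math.log10(n_spans)

# The two guarantee points quoted above (assumed P_ch = 1 dBm, NF = 5 dB):
print(f"7 x 25 dB:  OSNR ~ {osnr_db(1.0, 25.0, 5.0, 7):.1f} dB")
print(f"10 x 23 dB: OSNR ~ {osnr_db(1.0, 23.0, 5.0, 10):.1f} dB")
```

With these assumed inputs the sketch lands within a couple of dB of the 22 dB figure quoted above, which is the level of agreement one can expect from so coarse a budget.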
Another important feature is the system sensitivity to chromatic dispersion and the dispersion-compensation strategy. The residual dispersion
at the receiver input must remain within some
interval. As an example, a 10 Gbit·s⁻¹ receiver will only accept a cumulated dispersion between −600 and +800 ps·nm⁻¹. For a particular channel, it is always possible to bring the cumulated
dispersion within this range with proper in-line
compensation. However, due to the finite dispersion slope, the other channels will experience a
different cumulated dispersion, and if the link is
too long and/or the dispersion slope is too high,
it will not be possible to meet this specification
for all channels. This limitation could be lifted
by adjusting the cumulative dispersion channel
by channel, but this is not practical for economic
reasons. As a result, chromatic dispersion generally imposes an upper limit on the bit rate and
the optical reach.
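A short calculation makes the slope limitation explicit; the fiber parameters and channel plan below are assumed values, with in-line compensation taken to exactly cancel the dispersion of the centre channel.

```python
# Minimal sketch: which channels stay inside the receiver's residual-
# dispersion window when compensation is tuned to the centre channel?
# All parameter values are illustrative assumptions.
import numpy as np

length_km  = 2000.0
d_fiber    = 17.0     # ps/(nm*km) at the centre channel (assumed)
slope      = 0.06     # ps/(nm^2*km) dispersion slope (assumed)
offsets_nm = np.linspace(-15, 15, 41)    # channel offsets from centre

cumulated    = (d_fiber + slope * offsets_nm) * length_km   # per channel
compensation = -d_fiber * length_km      # in-line DCF tuned to the centre
residual     = cumulated + compensation  # ps/nm left at the receiver

ok = (residual > -600) & (residual < 800)   # 10G window quoted in the text
print(f"{ok.sum()}/{len(ok)} channels inside the [-600, +800] ps/nm window")
```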
1.5.2 Critical parameters and tradeoffs for terrestrial, undersea, and repeaterless systems
As discussed earlier, the optical reach and amplification span are critical parameters in communication systems. For a given optical reach
and a fixed launched power, a shorter amplification span improves the OSNR. Conversely, for
a given required OSNR it increases the optical
reach. However, a shorter amplification span also
results in a more expensive system, more complex monitoring, and a higher operating cost.
Moreover, in a terrestrial network the location of
the amplification sites and the network topology
in general are parameters that the operator does
not want to change. The network infrastructure
and fibers are long-term investments, and they
are required to be compatible with several generations of systems. In particular, the attenuation
per span is a constrained parameter. Its value is
imposed by the characteristics of networks where
systems have to be installed, and it is typically
in the range of 20–25 dB. Technical improvement goals in terrestrial WDM systems therefore
consist in increasing the capacity and the optical
reach within the framework of this attenuation
per span.
Undersea systems benefit from an additional
degree of freedom. Unlike in terrestrial networks,
the fiber and the system are laid
(The following is from Skeptic Magazine Vol. 21 No. 2 2016)
Virtual Immortality
Why the Mind-Body Problem is still a Problem
Robert Lawrence Kuhn
Virtual immortality is the theory that the fullness of our mental selves can be uploaded with first-person perfection to non-biological media, so that when our mortal bodies die our mental selves will live on. I am all for virtual immortality and I hope it happens (rather soon, too). Alas, I don’t think it will (not soon, anyway). I’d deem it virtually impossible for centuries, if not millennia. Worse, virtual immortality could wind up being absolutely impossible, forbidden even in principle.
This is not the received wisdom of optimo-techno-futurists, who believe that the exponential development of technology in general, and of artificial intelligence (AI) in particular (including the complete digital duplication of human brains in the near or mid term), will radically transform humanity through two revolutions. The first is the “singularity,” when AI will redesign itself recursively and progressively, such that it will become vastly more powerful than human intelligence (superstrong AI). The second, they claim, will be virtual immortality.
AI singularity and virtual immortality would mark a startling, transhuman world that optimo-techno-futurists envision as inevitable in the long run and perhaps just over the horizon in the short run. They do not question whether their vision can be actualized; they only debate when it will occur, with estimates ranging from 10 to 100 years.
I’m skeptical. I think the complexity of the science is vastly underrated, and I challenge the philosophical foundation of the claim. Consciousness is the elephant in the room, though many refuse to see it. They assume, almost as an article of faith, that superstrong AI (post-singularity) will inevitably be conscious (almost ipso facto). They may be correct, but to make that judgment requires an analysis that is surely multifaceted and, I suspect, likely inconclusive.
Whatever consciousness may be, it determines whether virtual immortality is even possible. So I focus here on consciousness. First, however, there are two other potential obstacles to virtual immortality. I consider them briefly.
One is sheer complexity. What would it take to duplicate the human brain such that our first-person inner awareness, and all that it entails, could not be distinguished from the original?
Consider some (very) rough data for the human brain: it contains about 85 billion neurons (specialized nerve cells that convey electrical information); 100 to 1000 trillion synapses (small space between neurons, the junction across which information is transmitted by neurochemicals); one to five trillion glial cells (traditionally assumed limited to metabolic support for neurons, now suspected as also participating in brain functions); up to 1000 moments per second for positioning action potentials (the electrical spark of information in neurons); ten billion proteins per neuron (some of which form memories); innumerable 3-dimensional structural forms for proteins and their geometric interactions; various extracellular molecules (some of which may be involved in brain functions). The list goes on.
How much of all of this complexity is required for total virtual duplication such that the mental fullness of the original person can be said to exist? Who knows?
Granted, not all of the brain is needed for consciousness and its contents, and much of the machinery is metabolic. The bodily control mechanisms, such as regulating breathing and heart rate and digestion, will not be needed in non-biological substrates. On the other hand, contemporary philosophy of mind suggests that bodily sense is needed for normal cognition (i.e., “embodied brain” and “extended mind”).
Take all the brain data together and consider all possible combinations and permutations that work to generate the more than 100 billion distinct human personalities who have ever lived (each of whom differs from moment to moment). I hesitate even to estimate the number of specifications that would be required. How could all these be accessed non-invasively, in sufficient detail, in real time, and simultaneously? The technologies exceed my imagination. But in principle, they are possible.
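To give a feel for the scale, here is a deliberately crude back-of-envelope estimate, not a serious calculation, of the raw storage for a single static, synapse-level snapshot; every number is an assumption chosen only to match the rough data above.

```python
# Crude lower bound only; all numbers are illustrative assumptions.
neurons  = 85e9           # rough figure quoted above
synapses = 500e12         # mid-range of the 100-1000 trillion quoted above
bytes_per_synapse = 10    # assumed: connectivity plus a few state variables

total_bytes = synapses * bytes_per_synapse
print(f"{neurons:.0e} neurons, {synapses:.0e} synapses (assumed)")
print(f"~{total_bytes / 1e15:.0f} petabytes for one static snapshot")
# -> ~5 PB before proteins, glia, 3-D geometry, or any dynamics at all,
#    and a snapshot is still not a running mind.
```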
A second potential deterrent to virtual immortality is quantum mechanics, the inherent indeterminacies of which could make creating a perfect mental duplicate problematic or even impossible. After all, if quantum events (like radioactive decays) are in principle non-predictable, how then would it be possible to duplicate a brain perfectly?
But quantum indeterminacies exist everywhere, in bricks just as well as in brains, so its special applicability to brain function and hence to virtual immortality is questionable. The crux of the issue is at which level in the hierarchy of causation, if any, does quantum mechanics make meaningful contributions to brain function and to consciousness? Certainly the vast majority of neuroscientists think quantum mechanics works only at bedrock levels of fundamental physics, way too low to play any special role at the higher levels where brains work and minds happen.
So while the sheer complexity of the brain would deter virtual immortality, and the indeterminacy of quantum mechanics might be an insurmountable obstacle to perfect duplication, the former would only delay its advent while the latter is deemed not relevant.
That leaves consciousness—that elephant in the room—around which optimo-techno-futurists have gathered to plan their virtual afterlife.
What is Consciousness?
Consciousness is a main theme of Closer To Truth, my public television series on science and philosophy, and among the subtopics I discuss with scientists and philosophers on the program is the classic “mind-body problem”—what is the relationship between the mental thoughts in our minds and the physical activities in our bodies/brains? What is the deep cause of consciousness? (All quotes that follow are from Closer To Truth: www.closertotruth.com.)
NYU philosopher David Chalmers famously described the “hard problem” of consciousness: “Why does it feel like something inside? Why is all our brain processing—vast neural circuits and computational mechanisms—accompanied by conscious experience? Why do we have this amazing inner movie going on in our minds? I don’t think the hard problem of consciousness can be solved purely in terms of neuroscience.”
“Qualia” are the core of the mind-body problem. “Qualia are the raw sensations of experience,” Chalmers continued. “I see colors—reds, greens, blues—and they feel a certain way to me. I see a red rose; I hear a clarinet; I smell mothballs. All of these feel a certain way to me. You must experience them to know what they’re like. You could provide a perfect, complete map of my brain [down to elementary particles]—what’s going on when I see, hear, smell—but if I haven’t seen, heard, smelled for myself, that brain map is not going to tell me about the quality of seeing red, hearing a clarinet, smelling mothballs. You must experience it.”
Can a Computer be Conscious?
To Berkeley philosopher John Searle, computer programs can never have a mind or be conscious in the human sense, even if they give rise to equivalent behaviors and interactions with the external world. (In Searle’s “Chinese Room” argument, a person inside a closed space can use a rule book to match Chinese characters with English words and thus appear to understand Chinese, when, in fact, she does not.)
“But,” I asked Searle, “Will it ever be possible, with hyperadvanced technology, for non-biological intelligences to be conscious in the same sense that we are conscious? Can computers have ‘inner experience’?”
“It’s like the question, ‘Can a machine artificially pump blood as the heart does?’” Searle responded. “Sure it can—we have artificial hearts. So if we can know exactly how the brain causes consciousness, down to its finest details, I don’t see any obstacle, in principle, to building a conscious machine. That is, if you knew what was causally sufficient to produce consciousness in human beings and if you could have that [mechanism] in another system, then you would produce consciousness in that other system. Note that you don’t need neurons to have consciousness. It’s like saying you don’t need feathers in order to fly. But to build a flying machine, you do need sufficient causal power to overcome the force of gravity.”
Searle then cautioned: “The one mistake we must avoid is supposing that if you simulate it, you duplicate it. A deep mistake embedded in our popular culture is that simulation is equivalent to duplication. But of course it isn’t. A perfect simulation of the brain—say, on a computer—would be no more conscious than a perfect simulation of a rainstorm would make us all wet.”
Robotics entrepreneur (and MIT professor emeritus) Rodney Brooks agrees that consciousness can be created in non-biological media, but disagrees on the nature of consciousness itself. “There’s no reason we couldn’t have a conscious machine made from silicon,” he said. Brooks’ view is a natural consequence of his beliefs that the universe is mechanistic and that consciousness, which seems special, is an illusion. He claims that, because the external behaviors of a human, animal or even a robot can be similar, we “fool ourselves” into thinking “our internal feelings are so unique.”
Can We Ever Really Assess Consciousness?
“I don’t know if you’re conscious. You don’t know if I’m conscious,” said Princeton neuroscientist Michael Graziano. “But we have a kind of gut certainty about it. This is because an assumption of consciousness is an attribution, a social attribution. And when a robot acts like it’s conscious and can talk about its own awareness, and when we interact with it, we will inevitably have that social perception, that gut feeling, that the robot is conscious.”
“But can you really ever know if there’s ‘anybody home’ internally, if there is any inner experience?” he continued. “All we do is compute a construct of awareness.”
Warren Brown, a psychologist at Fuller Theological Seminary and a member of UCLA’s Brain Research Institute, stressed “embodied cognition, embodied consciousness,” in that “biology is the richest substrate for embodying consciousness.” But he didn’t rule out that consciousness “might be embodied in something non-biological.” On the other hand, Brown speculated, “consciousness may be a particular kind of organization of the world that just cannot be replicated in a non-biological system.”
Neuroscientist Christof Koch, president and chief scientific officer of the Allen Institute for Brain Science, takes a strong philosophical stance based on his work as a neuroscientist. “I am a functionalist when it comes to consciousness,” he said. “As long as we can reproduce the same kind of relevant relationships among all the relevant neurons in the brain, I think we will have recreated consciousness. The difficult part is, what do we mean by ‘relevant relationships’? Does it mean we have to reproduce the individual motions of all the molecules? Unlikely. It’s more likely that we have to recreate all the relevant relationships of the brain’s synapses and the brain’s wiring (today known as the ‘connectome’) in a different medium, like a computer. If we can do all of this reconstruction at the right level, this entity, this software construct, would be conscious.”
Koch stresses that “experience” requires new, perhaps radical, scientific thinking. “You need to expand the traditional laws of physics,” he told me. “In physics there is space, time, energy, mass. Those by themselves are sufficient to explain the physics of the brain. The brain is subject to the same laws of physics as any other object in the universe. But in addition there is something else. There is experience. The experience of pain. The experience of falling in love. And to account for experience, you need to enhance the laws of physics.”
Radical Visions of Consciousness
A new theory of consciousness—developed by Giulio Tononi, a neuroscientist at the University of Wisconsin (and supported by Koch)—is based on “integrated information” such that distinct conscious experiences are represented by distinct structures in a specialized and heretofore unknown kind of space.
“Integrated information theory means that you need a very special kind of mechanism organized in a special kind of way to experience consciousness,” Tononi said. “A conscious experience is a maximally reduced conceptual structure in a space called ‘qualia space.’ Think of it as a shape. But not an ordinary shape—a shape seen from the inside.”
Tononi stressed that simulation is “not the real thing.” To be truly conscious, he said, an entity must be “of a certain kind that can constrain its past and future—and certainly a simulation is not of that kind.”
Koch envisions how Tononi’s theory of integrated information could explain how experience—how consciousness—arises out of matter. “The theory makes two fundamental axiomatic assumptions,” Koch explained. “First, conscious experiences are unique and there are a vast number of different conscious experiences. Just think of all the frames of all the movies you’ve ever seen or movies that will ever be made until the end of time. Each one is a unique visual experience and you can couple that with all the unique auditory experiences, pain experiences, etc. All possible conscious experiences are a gigantic number. Second, at the same time, each experience is integrated—what philosophers refer to as unitary. Whatever I am conscious of, I am conscious of as a whole. I apprehend as a whole. So the idea is to take these two axioms seriously and to cast them into an information theory framework. Why information theory? Because information theory deals with different states and their interrelationships. We don’t think the stuff the brain is made out of is really what’s critical about consciousness. It’s the interrelationship that’s critical.”
I asked Koch if he’d be “comfortable” with nonbiological consciousness.
“Why should I not be?” he responded. “Consciousness doesn’t require any magical ingredient.”
Mathematician Roger Penrose claims that consciousness is non-computable and that only a noncomputational physical process could explain consciousness. He is not saying that consciousness is beyond physics, rather that it is beyond today’s physics. “Conscious thinking can’t be described entirely by the physics that we know,” Penrose said, explaining that he “needed something that had a hope of being non-computational.” He focuses on “the main gap in physics”: the contradiction between the continuous, deterministic evolution given by the Schrödinger equation in quantum mechanics and the discrete, probabilistic outcomes observed when you make a measurement—“how rules like Schrödinger’s cat being dead and alive at the same time in quantum mechanics do not apply at the classical level.”
Penrose argues that the missing physics that describes how the quantum world becomes the classical world “is the only place where you could have non-computational activity.” But he admits that it’s “a tall order” to sustain quantum information in the hot, wet brain, because “whenever quantum systems become entangled with the environment, ‘environmental decoherence’ occurs and information is lost.”
“Quantum mechanics acting incoherently is not useful [to account for consciousness],” Penrose explains; “it has to act coherently. That’s why we call [our mechanism] ‘Orch OR’, or ‘orchestrated objective reduction’—the ‘OR’ stands for objective reduction, which is where the quantum state collapses to one alternative or another, and ‘Orch’ stands for orchestrated. The whole system must be orchestrated, or organized, in some global way, so that the different reductions of the states actually do make a big difference to what happens to the network of neurons.”
So how can the hot, wet brain operate a quantum information system? A biological mechanism utilizing microtubules in neurons was proposed by Dr. Stuart Hameroff, an anesthesiologist, who, together with Penrose, then developed their quantum theory of consciousness.
“Objective reduction in the quantum world is occurring everywhere,” Hameroff recognizes, “so protoconscious, undifferentiated moments are ubiquitous in the universe. Now in our view when orchestrated objective reduction occurs in neuronal microtubules, the process gives rise to rich conscious experience.” But, he asked rhetorically, “could your consciousness be downloaded into some artificial medium as the singularity folks have been saying for years, but without any progress whatsoever?” Hameroff thinks it is possible. “It could happen in an alternative medium that has the proper properties,” he said, “perhaps artificial nanotubes made of carbon fullerenes. [Creating consciousness in non-biological media] can be done as long as you have enough mass superposition to reach threshold in a reasonable time.”
Inventor and futurist extraordinaire Ray Kurzweil believes that “we will get to a point where computers will evidence the rich array of emotionally subtle behaviors that we see in human beings; they will be very intelligent, and they will claim to be conscious. They will act in ways that are conscious; they will talk about their own consciousness and argue about it just the way you and I do. And so the philosophical debate will be whether or not they really are conscious—and they will be participating in the debate.”
Kurzweil argues that assessing the consciousness of other [possible] minds is not a scientific question. “We can talk scientifically about the neurological correlates of consciousness, but fundamentally, consciousness is this subjective experience that only I can experience. I should only talk about it in first-person terms—although I’ve been sufficiently socialized to accept other people’s consciousness. There’s really no way to measure the conscious experiences of another entity.”
“But I would accept that these non-biological intelligences are conscious,” Kurzweil concluded. “And that’ll be convenient, because if I don’t, they’ll get mad at me.”
AI Consciousness: Precursor of Virtual Immortality
It is my conjecture that unless humanlike inner awareness can be created in non-biological intelligences, uploading one’s neural patterns and pathways, however complete, could never preserve the original, first-person mental self (the private “I”), and virtual immortality would be impossible. That’s why a precursor to the question of virtual immortality is the question of AI consciousness. Can robots, however advanced their technology, ever have inner awareness and first-person experience?
I submit that the nature of the AI singularity would differ profoundly in the case where it is literally conscious, with humanlike inner awareness, from the case where it is not literally conscious— even though in both cases superstrong AI would be vastly more powerful than human intelligence and by all accounts they would appear to be equally conscious. This difference between being conscious and appearing conscious would become even more fundamental if, by some objective, absolute standard, humanlike inner awareness conveys some kind of “intrinsic worthiness” to entities possessing it.
For example, the first colonizers of the cosmos will likely be robots, eventually self-replicating robots, and whether such non-biological probes are conscious or not-conscious could radically affect the intrinsic nature of such colonization. It would differ profoundly, I suggest, in the case where such robots were literally conscious, with humanlike inner awareness and thus experiencing the cosmos, from the case where they were not literally conscious with no inner awareness and not experiencing anything.
I agree that after superstrong AI exceeds some threshold, science could never, even in principle, distinguish actual inner awareness from apparent inner awareness, say in our cosmos-colonizing robots. But I do not agree with what usually follows: that this everlasting uncertainty about inner awareness and conscious experience in other entities (non-biological or biological) makes the question irrelevant. I think the question maximally relevant. Unless our robotic probes were literally conscious, even if they were to colonize every object in the galaxy, the absence of inner experience would mean a diminished intrinsic worth.
That’s why, to explore the possible meaning of AI consciousness as well as to assess the real-world viability of virtual immortality, the deep cause of consciousness is critical.
Alternative Causes of Consciousness
Through my conversations (and decades of night-musings) on the philosophy of mind, I can array nine alternative theories or causes of consciousness. (There are others, and different categorizations.) Traditionally, the clash is between physicalism/materialism (No. 1 below) and dualism (No. 8), but such oversimplification may be part of the problem—the other seven alternatives have standing.
1. Physicalism or Materialism. Consciousness is entirely physical, solely the product of biological brains, and all mental states can be fully “reduced” to (wholly explained by) physical states—which, at their deepest levels, are the fields and particles of fundamental physics. Overwhelmingly for scientists, physicalism/materialism is the prevailing theory of consciousness. To them, the utter physicality of consciousness is an assumed premise, supported strongly by incontrovertible evidence. “Eliminative materialism” is the boundary position that our common-sense view of the mind is misleading and that consciousness is in a sense an illusion. A preferred mechanism of physicalism/materialism is identity theory, where mental states literally are physical states. (Though the terms “materialism” and “physicalism” are generally interchangeable, materialism is older and connotes a more metaphysical or ontological meaning, whereas physicalism emerged in the early 20th century and conveys a more methodological or linguistic usage.)
2. Epiphenomenalism. Consciousness is entirely physical, solely the product of biological brains, but mental states cannot be entirely reduced to physical states (brains or otherwise), though mental states have no powers. The mind is entirely inert; our awareness of consciousness is real but our sense of mental causation is not. There is no “top-down causation”; our feelings that our thoughts can cause things are an illusion. In this manner, epiphenomenalism is a weaker form of non-reductive physicalism (see next). The classic analogy for consciousness as an epiphenomenon is “foam on a wave,” always there but never doing anything.
3. Non-reductive Physicalism. Consciousness is entirely physical, solely the product of biological brains, but mental states are real and cannot be reduced to physical states (brains or otherwise). While mental states are generated entirely by physical states (of the brain), they are truly other than physical (i.e., mental states are ontologically distinct). A prime feature of non-reductive physicalism is “top-down causation,” where the content of consciousness is causally efficacious—qualia can do real work. The mechanism of non-reductive physicalism is emergence, where novel properties at higher levels of integration are not discernible (and perhaps not even predictable) from all-you-can-know at lower or more fundamental levels. (There is a close relationship between non-reductive physicalism and property dualism—both recognize real mental states and yet only one kind of substance—but, as expected, some adherents of each reject the claims of the other.)
4. Quantum Consciousness. Consciousness is non-computational and relates to (or resides in) the fundamental gap between the quantum and the classical worlds. Consciousness is still explained by the physics of neurons, but a physics enlarged from that which we know currently. Though dismissed by most scientists, the claim is that these two great mysteries, consciousness and quantum theory, can be solved simultaneously.
5. Qualia Force. Consciousness is an independent, nonreducible feature of physical reality that exists in addition to (and probably not derived from) the fields and particles of fundamental physics. This heretofore unknown aspect of the world may take the form of a new, independent, fundamental physical law or force (fifth force?).
6. Qualia Space. Consciousness is an independent, non-reducible feature of physical reality that exists in addition to the mass-energy and space-time of fundamental physics. This heretofore unknown aspect of the world may take the form of a radically new structure or organization of reality, perhaps a different dimension of reality (e.g., “qualia space” as postulated by “integrated information theory”).
7. Panpsychism. Consciousness is a non-reducible feature of each and every physical field and particle of fundamental physics. Everything that exists has a kind of inherent “proto-consciousness” which, in certain aggregates and under certain conditions, can generate real inner awareness. Panpsychism is one of the oldest theories in philosophy of mind (going back to pre-modern animistic religions and the ancient Greeks). It is being revived, in various forms, by some contemporary philosophers in response to the seemingly intractable “hard problem” of consciousness.
8. Dualism. Consciousness requires a radically separate, nonphysical substance that is not only independent of the physical brain but also apart from the physical world. This would mean that reality consists of two, ontologically distinct parts—physical and nonphysical substances, divisions, dimensions or planes of existence. (The two distinct parts account for the origin of the term “dualism”). While human consciousness would require, under dualism, both a physical brain and a non-physical substance (somehow working together), following the death of the body and the dissolution of the brain, this nonphysical substance by itself could maintain some kind of conscious existence. (Though this nonphysical substance is traditionally called a “soul”—a term laden with theological burdens—a soul is not the only kind of thing, or form, that such a nonphysical substance could be.)
9. Consciousness as Ultimate Reality. The age-old claim, rooted in some wisdom traditions, is that the only thing that’s really real is consciousness—everything else, especially the entire physical world and all it contains (including physical brains), is derived from an all-encompassing “cosmic consciousness.” Each individual instance of consciousness—human, animal, robotic or otherwise—is a part of this cosmic consciousness. Eastern religions, in general, espouse this kind of view. (See Deepak Chopra for contemporary arguments that ultimate reality is consciousness.)
“Functionalism” is the theory in philosophy of mind that mechanisms are more important than mediums, that what’s critical is how mental states work, not in what substrates mental states are found. As long as the activities (functions) are conducive to creating consciousness, it does not matter whether the substrates are neural tissue or computer chips or anything else that can support or enable the same activities (functions). As such, functionalism would apply to the categories 1, 2, 3 and 4 above, but not to categories 7, 8 and 9. (I’m not sure about 5 and 6, which, pending details, can be argued either way.)
Will Superstrong AI be Conscious?
I’m not going to evaluate each competing cause of consciousness. (That would require a course, not an essay.) Rather, for each potential cause, I assess the implications for virtual immortality, asking whether true first-person survival is in principle possible.
But first, to prepare a systematic analysis, I address the related but less complex issue of whether non-biological intelligences with superstrong AI (post-singularity) could be conscious and possess inner awareness. To the extent that the case for non-biological intelligences to be conscious can be made, the case for virtual immortality improves. To the extent that the case for non-biological intelligences to be conscious is weak, the case for virtual immortality is weaker. So for each cause of consciousness, could non-biological intelligences become conscious? The list follows.
If physicalism/materialism explains consciousness entirely (without remainder), then it would be almost certainly true that non-biological intelligences with superstrong AI would eventually have the same kind of inner awareness that humans do. Moreover, as AI would rush past the singularity and become ineffably more sophisticated than the human brain, it would likely express forms of consciousness higher than we today could even imagine.
If epiphenomenalism or non-reductive physicalism is true, then it would be highly likely that non-biological intelligences could eventually be conscious—though the increasing reality of mental states attenuates (slightly, unpredictably) the likelihood of inner awareness—an argument that is itself countered by functionalism (if functionalism is true).
A similar line of reasoning holds if quantum physics is the key to consciousness—with one difference being that the physical constraints of manipulating myriad quantum events, with their inherent indeterminacies, would seem even more daunting.
If consciousness requires an independent, nonreducible feature of physical reality—qualia force or qualia space—then it would remain an open question whether non-biological intelligences could ever experience true inner awareness. (It would depend on the deep nature of the consciousness-causing feature, the qualia force or qualia space, and whether this feature could be controlled by technology.)
If panpsychism is true and consciousness is a non-reducible property of each and every elementary physical field and particle, then it would seem likely that non-biological intelligences with superstrong AI could experience true inner awareness (because consciousness would be an intrinsic part of the fabric of physical reality).
If dualism is true and consciousness requires a radically separate, nonphysical substance not causally determined by the physical world, then it would seem impossible that non-biological intelligences, no matter how superstrong their AI, could ever experience true inner awareness. (An exception might be in the extremely remote condition that somehow the physical actions of the brain could exert causal force on the supposed nonphysical substance.)
If consciousness is ultimate reality (cosmic consciousness), then anything could be (or is) conscious (whatever that may mean), including nonbiological intelligences.
Remember, in each of these cases, no one could detect, using any conceivable scientific test, whether the non-biological intelligences with superstrong AI had the inner awareness of true consciousness. (They would claim to, of course, and do so convincingly.)
In all aspects of behavior and communications, these non-biological intelligences, such as cosmos-colonizing robots, would seem to be equal to (or superior to) humans. But if they did not, in fact, have the felt sense of inner experience, they would be “zombies” (“philosophical zombies” to be precise), externally identical to conscious beings but with no mental content, nothing inside.
This stark dichotomy between conscious and non-conscious entities spotlights (a bit circularly) our probative questions about self-replicating robots that, unless we destroy ourselves or our planet, will eventually colonize the cosmos. Post-singularity, will superstrong AI without inner awareness be in all respects as powerful as superstrong AI with inner awareness, and in no respects deficient? That is, are there kinds of cognition that, in principle or of necessity, require true consciousness? The answer could affect what it means to colonize the cosmos.
Moreover, would true conscious experience and inner awareness in these galaxy-traversing robots represent a higher form of intrinsic worthiness, some kind of absolute, universal value (however anthropomorphic this may seem)? For assessing the fundamental nature of robotic probes colonizing the cosmos, the question of consciousness is profound.
Is Virtual Immortality Possible?
Can the fullness of our first-person mental selves (our “I”) be digitized and uploaded perfectly to nonbiological media so that our mental selves can live on beyond the death of our bodies and the destruction of our brains? Whether virtual immortality is even possible has never changed, of course; always it has been determined by the unchanging cause of consciousness. It’s just that there is no consensus on what that cause actually is. Let’s assess each of the nine alternatives with this question in mind.
1. Physicalism/Materialism. If physicalism/materialism explains consciousness entirely (without remainder), then our first-person mental self would be (almost certainly) uploadable and virtual immortality would be attainable. The technology might take hundreds or thousands of years—not decades, as optimo-techno-futurists predict—but, barring human-wide catastrophe, it would happen.
2. Epiphenomenalism. If epiphenomenalism is true, then it is highly likely that some kind of virtual immortality would be attainable. The inert “foam” of consciousness should have little impact.
3. Non-reductive Physicalism. If non-reductive physicalism is true, then it is also highly likely that some kind of virtual immortality would be attainable. The causative power of mental states should not affect virtual immortality because a perfect duplication of the physical states would ipso facto produce a perfect duplication of the mental states.
4. Quantum Consciousness. If quantum consciousness is true, then it is likely that some kind of virtual immortality would be attainable. However, the indeterminacies, probabilistic behavior and strangeness of quantum physics add a degree of uncertainty that cannot as yet be evaluated. The test, as with all potential causes of consciousness, is whether advanced technology can manipulate and control the cause of consciousness, and do so comprehensively and precisely. The quantum nature of consciousness, if true, would introduce unpredictability and perhaps undermine perfect duplicability. Note: The theory of functionalism would support virtual immortality for categories 1, 2, 3 and 4 above.
5. Qualia Force. If consciousness requires an independent, non-reducible feature of physical reality that may take the form of a new, independent, fundamental physical force, then it would be possible but remain an open question whether our first-person mental self could be uploadable. Virtual immortality would be less likely with Qualia Force than it would be in 1, 2, 3 and (probably) 4 above, because not knowing much about this consciousness-causing new force, we would not know whether it could be manipulated by technology, no matter how advanced. But because consciousness would still be physical, efficacious manipulation and successful uploading would seem possible.
6. Qualia Space. If consciousness requires an independent, non-reducible feature of physical reality that may take the form of a radically new structure or organization of reality, perhaps a different dimension of reality (e.g., as postulated by “integrated information theory”), then virtual immortality would be possible, but it would remain an open question whether our first-person mental self could be uploadable. Not understanding this consciousness-causing feature, we could not now know whether it could be manipulated by technology, no matter how advanced. If this qualia space could be in some sense directed by activities in the brain, with predictable regularities, then virtual immortality would be more likely.
7. Panpsychism. If panpsychism is true and consciousness is a non-reducible property of each and every elementary physical field and particle, then it would seem probable that our first-person mental self could be uploadable. There would be two reasons: (i) consciousness would be an intrinsic part of the fabric of physical reality, and (ii) there would probably be regularities in the way particles would need to be aggregated to produce consciousness, and if there are regularities, then advanced technologies could learn to control them.
8. Dualism. If dualism is true and consciousness requires a radically separate, nonphysical substance not causally determined by the physical world, then it would seem impossible to upload our first-person mental self by digitally duplicating the brain, because a necessary cause of our consciousness, this nonphysical component, would be absent. (An exception might be in the extremely remote case that somehow the physical actions of the brain could exert causal force on the supposed nonphysical substance.)
9. Consciousness as Ultimate Reality. If consciousness is ultimate reality, then consciousness would exist of itself, without any physical prerequisites. But would the unique digital pattern of a complete physical brain (derived, in this case, from consciousness) favor a specific segment of the cosmic consciousness (i.e., our unique first-person mental self)? It’s not clear, in this extreme case, whether uploading would make much difference (or much sense).
Whereas most neuroscientists assume that whole brain duplication can achieve virtual immortality, Giulio Tononi is not convinced. According to his theory of integrated information, “what would most likely happen is, you would create a perfect ‘zombie’—somebody who acts exactly like you, somebody whom other people would mistake for you, but you wouldn’t be there.”
So, in pursuit of virtual immortality, would a perfect digital duplication of a human brain generate first-person consciousness? Here are my (tentative) conclusions for each alternative: 1, surely; 2 and 3, highly likely; 4, somewhat likely; 5 and 6, possibly but uncertain; 7, probably; 8, no; 9, doesn’t matter.
The Trouble with Duplicates
In trying to distinguish among these alternative causes of consciousness, and thus assess the viability of virtual immortality, I am troubled by a simple observation. Assume that a perfect digital duplication of my brain does, in fact, generate my first-person consciousness—which is the minimum requirement for virtual immortality. This would mean that my first-person self and personal awareness could be uploaded to a new medium (non-biological or even, for that matter, a new biological body). But here’s the problem: If “I” can be duplicated once, then I can be duplicated twice; and if twice, then an unlimited number of times.
What happens to my first-person inner awareness? What happens to my “I”? Assume I do the digital duplication procedure and it works perfectly—say, five times. Where is my first-person inner awareness located? Where am I? Each of the five duplicates would state with unabashed certainty that he is “Robert Kuhn,” and no one could dispute any of them. (For simplicity of the argument, physical appearances of the clones are neutralized.) Inhabiting my original body, I would also claim to be the real “me,” but I could not prove my priority. (See David Brin’s novel Kiln People, a thought experiment about “duplicates,” and his comments on personal identity.)
I’ll frame the question more precisely. Comparing my inner awareness from right before to right after the duplication process, will I feel or sense differently? Here are four duplication scenarios, with their implications:
1. I do not sense any difference in my first-person awareness. This would mean that the five duplicates are like super-identical twins—they are independent conscious entities, such that each, after his creation, begins instantly to diverge from the others. This would imply that consciousness is the local expression or manifestation of a set of physical factors or patterns. (An alternative explanation would be that the duplicates are zombies, with no inner awareness—a charge, of course, they would deny.)
2. My first-person awareness suddenly has six parts—my original and the five duplicates in different locations—and they all somehow merge or blur together into a single conscious frame, the six conscious entities fusing into a single composite (if not coherent) “picture.” In this way, the unified effect of my six conscious centers would be like the “binding problem” on steroids. (The binding problem in psychology asks how our separate sense modalities like sight and sound come together such that our normal conscious experience feels singular and smooth, not built up from discrete, disparate elements.) This would mean that consciousness has some kind of overarching presence or a kind of supra-physical structure.
3. My personal first-person awareness shifts from one conscious entity to another, or fragments, or fractionates. These states are logically (if remotely) possible, but only, I think, if consciousness would be an imperfect, incomplete expression of evolution, devoid of fundamental grounding.
4. My personal first-person awareness disappears upon duplication, although each of the six (original plus five) claims to be the original and really believes it. (This, too, would make consciousness even more mysterious.)
Suppose, after the duplicates are made, the original (me) is destroyed. What then? Almost certainly my first-person awareness would vanish, although each of the five duplicates would assert indignantly that he is the real “Robert Kuhn” and would advise, perhaps smugly, not to fret over the deceased and discarded original.
If Virtual Immortality, Then Colonize the Cosmos?
There’s a further implication, and an odd one at that. Assuming that our superstrong AI, cosmos-colonizing robots could become conscious, I can make the case that such galaxy-traveling, consciousness-bearing entities could include you—yes you, your first-person inner awareness, exploring the cosmos virtually forever.
Here’s the argument. If virtual immortality and superstrong AI consciousness are possible—and there is high correlation between the two—then human personality can be uploaded (ultimately) into space probes and we ourselves can colonize the cosmos!
I’d see no reason why we couldn’t choose where we would like our virtual immortality to be housed, and if we choose a cosmos-colonizing robot, we could experience the galactic journeys through robotic senses (while at the same time enjoying our virtual world, especially during those eons of dead time traveling between star systems).
Would I Take the Plunge?
At some time in the (far) future, scientists will likely assure us that the technology is up and working. If I were around, would I believe the scientists and upload my consciousness? Moreover, entranced by what I assume will be commercial advertisements, would I select a cosmos-colonizing robot as my medium of storage so that I could spend my virtual immortality touring the galaxy? I might, if only because I’d be confident that duplication possibility 1 (above) is true and 2, 3 and 4 are false, and that the duplication procedure would not affect my first-person mental self one whit. (I sure wouldn’t let them destroy the original, though the duplicates may call for it.)
So while all the duplicates wouldn’t feel like me (as I know me), I’d kind of enjoy sending “Robert Kuhn” out there exploring star systems galore. (There’s more. If my consciousness is entirely physical and can be uploaded without degradation, then it can be uploaded without degradation to as many cosmos-colonizing robots as I’d like—or can afford. It gets crazy.)
Whether non-biological entities such as robots can be conscious, or not, presents us with two disjunctive possibilities, each with profound consequences. If robots can never be conscious, then there may be a greater moral imperative for human beings to colonize the cosmos. If robots can be conscious, then there may be less reason for humans, with our fragile bodies, to explore space—but your personal consciousness could be uploaded into cosmos-colonizing robots, probably into innumerable such galactic probes, and you yourself (or your mental clones) could colonize the cosmos.
My intuition, for what it’s worth, is that it’s a pipedream. I deem virtual immortality for my first-person inner awareness to be not possible, and to be never possible, though in the (far) future duplicates may convince us otherwise. But confident in my conclusion, I am not.
For me for now, I’m convinced of only this: Virtual immortality, like the AI singularity, must confront the deep cause of consciousness.
"Virtual Immortality" by Robert Lawrence Kuhn |
Wednesday, January 06, 2010
Is Physics Cognitively Biased?
Recently we discussed the question “What is natural?” Today, I want to expand on the key point I was making. What humans find interesting, natural, elegant, or beautiful originates in brains that developed through evolution and were shaped by the sensory input they received and processed. This genetic history also affects the sort of questions we are likely to ask, the kind of theories we search for, and how we search. I am wondering, then: might it be that we are biased to miss clues necessary for progress in physics?
It would be surprising if we were scientifically entirely unbiased. Cognitive biases caused by evolutionary traits inappropriate for the modern world have recently received a lot of attention. Many psychological effects in consumer behavior, opinion and decision making are well known by now (and frequently used and abused). Also the neurological origins of religious thought and superstition have been examined. One study particularly interesting in this context is Peter Brugger et al.’s on the role of dopamine in identifying signals over noise.
If you bear with me for a paragraph, there’s something else interesting about Brugger’s study. I came across this study mentioned in Bild der Wissenschaft (a German popular science magazine, high quality, very recommendable), but with no reference. So I checked Google Scholar but didn’t find the paper. I checked the author’s website, but nothing there either. Several Google web searches on related keywords however brought up, first of all, a note in NewScientist from July 2002. No journal reference. Then there are literally dozens of articles mentioning the study after this. Some do, some don’t refer to the NewScientist article, but they all sound like they copied from each other. The article was mentioned in Psychology Today, was quoted in newspapers, etc. But no journal reference anywhere. Frustrated, I finally wrote to Peter Brugger asking for a reference. He replied almost immediately. Turns out the study was not published at all! Though it has meanwhile, after more than 7 years, been written up and is apparently in the publication process, I find it astonishing how much attention a study could get without having been peer reviewed.
Anyway, Brugger was kind enough to send me a copy of the paper in print, so I now know what they actually did. To briefly summarize it: they recruited two groups of people, 20 each. One group were self-declared believers in the paranormal, the other self-declared skeptics. This self-description was later quantified with commonly used questionnaires like the Australian Sheep-Goat Scale (with a point scale rather than binary though). These people performed two tasks. In one task they were briefly shown (short) words that sometimes were sensible words, sometimes just random letters. In the other task they were briefly shown faces or just random combinations of facial features. (These two tasks apparently use different parts of the brain, but that’s not so relevant for our purposes. Also, the stimuli were shown to the right and left visual fields separately for the same reason, but that’s not so important for us either.)
The participants had to identify a “signal” (word/face) from the “noise” (random combination) in a short amount of time, too short to use the part of the brain necessary for rational thought. The researchers counted the hits and misses. They focused on two parameters from this measurement series. The first is the direction of the bias: whether the errors are random, biased towards false positives, or biased towards false negatives (Type I or Type II errors). The second parameter is how well the signal was identified overall. The experiment was repeated after a randomly selected half of the participants received a high dose of levodopa (a Parkinson medication that increases the dopamine level in the brain), the other half a placebo.
The result was the following. First, without the medication the skeptics had a bias for Type II errors (they more often discarded as noise what really was a signal), whereas the believers had a bias for Type I errors (they more often saw a signal where there was really just noise). The bias was equally strong for both, but in opposite directions. It is interesting though not too surprising that the expressed worldview correlates with unconscious cognitive characteristics. Overall, the skeptics were better at identifying the signal. Then, with the medication, the bias of both skeptics and believers tended towards the mean (random yes/no misses), but the skeptics overall became as bad at identifying signals as the believers, who stayed as bad as they had been without extra dopamine.
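For readers who want to play with the idea: below is a hedged sketch, not Brugger et al.'s actual analysis, of how a textbook signal-detection model reproduces the two bias directions. A lax response criterion produces the believers' excess of Type I errors, a strict criterion the skeptics' excess of Type II errors; all parameter values are illustrative assumptions.

```python
# Standard signal-detection-theory toy model; all values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
d_prime  = 1.0                  # assumed detection sensitivity

def run(criterion):
    signal = rng.random(n_trials) < 0.5        # half the trials carry a signal
    # Internal evidence: noise ~ N(0,1), signal ~ N(d',1)
    evidence = rng.normal(0.0, 1.0, n_trials) + d_prime * signal
    says_signal = evidence > criterion
    type1 = np.mean(says_signal & ~signal)     # false positives ("seeing" noise)
    type2 = np.mean(~says_signal & signal)     # misses (discarding real signals)
    return type1, type2

for label, c in [("believer (lax criterion)", 0.2),
                 ("skeptic (strict criterion)", 0.8)]:
    t1, t2 = run(c)
    print(f"{label}: Type I {t1:.3f}, Type II {t2:.3f}")
```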
The researchers’ conclusion is that the (previously made) claim that dopamine generally increases the signal-to-noise ratio is wrong, and that certain psychological traits (roughly, the willingness to believe in the paranormal) correlate with a tendency to false positives. Moreover, other research results seem to have shown a correlation between high dopamine levels and various psychological disorders. One can roughly say that if you fiddle with the dose you’ll start seeing “signals” everywhere and eventually go bonkers (psychotic, paranoid, schizoid, you name it). Not my field, so I can’t really comment on the status of this research. Sounds plausible enough (I’m seeing a signal here).
In any case, these research studies show that our brain chemistry contributes to us finding patterns and signals, and, in the extreme, also to assigning meaning to the meaningless (there really is no hidden message in the word-verification). Evolutionarily, Type I errors in signal detection are vastly preferable: It’s fine if a breeze moving leaves gives you an adrenaline rush, but you only mistake a tiger for a breeze once. Thus, today the world is full of believers (Al Gore is the antichrist) and paranoids who see a tiger in every bush/a feminist in every woman. Such overactive signal identification has also been argued to contribute to the wide spread of religions (a topic that currently seems to be fashionable). Seeing signals in noise is however also a source of creativity and inspiration. Genius and insanity, as they say, go hand in hand.
It seems however odd to me to blame religion on a cognitive bias for Type I errors. Searching for hidden relations at the risk that there are none doesn’t per se characterize only believers in The Almighty Something, but also scientists. The difference is in the procedure thereafter. The religious will see patterns and interpret them as signs of God. The scientist will see patterns and look for an explanation. (God can be aptly characterized as the ultimate non-explanation.) This means that Brugger’s (self-)classification of people by paranormal beliefs is somewhat beside the point (it likely depends on education). You don’t have to believe in ESP to see patterns where there are none. If you read physics blogs you know there’s an abundance of people who have “theories” for everything from the planetary orbits to the mass of the neutron to the value of the gravitational constant. One of my favorites is the guy who noticed that in SI units G times c is to good precision 2/100. (Before you build a theory on that noise, recall that I told you last time the values of dimensionful parameters are meaningless.)
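That particular coincidence is easy to check, and checking it also shows why it is noise: the product has units of m⁴ kg⁻¹ s⁻³, so the “2/100” evaporates the moment you leave SI units. A two-line verification:

```python
G = 6.674e-11   # gravitational constant in m^3 kg^-1 s^-2
c = 2.998e8     # speed of light in m/s
print(G * c)    # 0.02001..., close to 2/100, but only in man-made SI units
```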
The question then arises, how frequently do scientists see patterns where there are none? And what impact does this cognitive bias have on the research projects we pursue? Did you know that the Higgs VEV is the geometric mean of the Planck mass and the 4th root of the Cosmological Constant? Ever heard of Koide’s formula? Anomalous alignments in the CMB? The 1.5 sigma “detection?” It can’t be coincidence our universe is “just right” for life. Or can it?
This then brings us back to my earlier post. (I warned you I would “expand” on the topic!) The question “What is natural?” is a particularly simple and timely example where physicists search for an explanation. It seems, though, that I left confused those readers who didn’t follow my advice: If you didn’t get what I said, just keep asking why. In the end the explanation is one of intuition, not of scientific derivation. It is possible that the Standard Model is fine-tuned. It’s just not satisfactory.
For example Lubos Motl, a blogger in Pilsen, Czech Republic, believes that naturalness is not an assumption but “tautologically true.” As “proof” he offers us that a number is natural when it is likely. What is likely however depends on the probability distribution used. This argument is thus tautological indeed: it merely shifts the question of what is natural from the numbers to what is a natural probability distribution. Unsurprisingly then, Motl has to assume the probability distribution is not based on an equation with “very awkward patterns,” and the argument collapses to “you won't get too far from 1 unless special, awkward, unlikely, unusual things appear.” Or in other words, things are natural unless they’re unnatural. (Calling it Bayesian inference doesn’t improve the argument. We’re not talking about the probability of a hypothesis; the hypothesis is the probability.) I am mentioning this sad case because it is exactly the kind of faulty argument that my post was warning of. (Motl also seems to find the cosine function more natural than the exponential function. As far as I am concerned the exponential function is very natural. Think otherwise? Well, zis why I’m saying it’s not a scientific argument.)
The other point that some readers misunderstood is my opinion on whether or not asking questions of naturalness is useful. I do think naturalness is a useful guide. The effectiveness of the human brain to describe Nature might be unreasonable (or at least unexplained), but it's definitely well documented. Dimensionless numbers that are much larger or smaller than one undeniably have an itch-factor. I'm not claiming one should ignore this itch. But be aware that this want for explanation is an intuition, call it a brainchild. I am not saying thou shalt disregard your intuition. I say thou shalt be clear what is intuition and what is derivation. Don't misconstrue for a signal what is none. And don't scratch too much.
But more importantly, it is worthwhile to ask what formed our intuitions. On the one hand they are useful. On the other hand we might have evolutionary blind spots when it comes to scientific theories. We might ask the wrong questions. We might be on the wrong path because we believe we have seen a face in random noise, and miss other paths that could lead us forward. When a field has been stuck for decades one should consider the possibility that something is being done systematically wrong.
To some extent that possibility has been considered recently. Extreme examples of skeptics in science are proponents of the multiverse, Max Tegmark with his Mathematical Universe ahead of all. The multiverse is possibly the mother of all Type II errors, a complete denial that there is any signal.
In Tegmark's universe it's all just math. Tegmark unfortunately fails to notice that it's impossible for us to know whether a theory is free of cognitive bias, which he calls "human baggage." (Where is the control group?) Just because we cannot today think of anything better than math to describe Nature doesn't mean there is nothing. Genius and insanity...
As far as the multiversists are concerned, the "principle of mediocrity" has dawned upon them, and now they ask for a probability distribution in the multiverse according to which our own universe is "common." (Otherwise they'd have nothing left to explain. Not the kind of research area you want to work in.) That however is but a modified probabilistic version of the original conundrum: trying to explain why our theories have the features they have. The question of why our universe is special is replaced by the question of why our universe is especially unspecial. Same emperor, different clothes. The logical consequence of the multiversial way is a theory like Lee Smolin's Cosmological Natural Selection (see also). It might take string theorists some more decades to notice though. (And then what? It's going to be highly entertaining. Unless of course the main proponents are dead by then.)
Now I’m wondering what would happen if you gave Max Tegmark a dose of levodopa?
It would be interesting if a version of Brugger's test were available online and we could test for a correlation between Type I/II errors and sympathy for the multiverse (rather than a belief in ESP). I would like to know how I score. While I am a clear non-believer when it comes to NewScientist articles, I do see patterns in the CMB ;-)
[Click here if you don't see what I see]
The title of this post is of course totally biased. I could have replaced physics with science, but I tend to think of physics first.
Conclusion: I was asking whether it may be that we are biased to miss clues necessary for progress in physics. I am concluding it is more likely we're jumping on clues that aren't there.
Purpose: This post is supposed to make you think about what you think about.
Reminder: You're not supposed to comment without first having completely read this post.
1. Ooh, isn't that weird?! Your initials are carved into the CMB!
I saw the blue face of the devil first. That's definitely there as well.
Isn't science great! :)
2. If modern particle physics is cognitively biased, the biases are subtle. I'd say subtler than the assumptions about geometry (Euclidean) and time (Newtonian) that prevailed before Einstein.
3. Now if we could only look at the CMBs of all of Tegmark's other universes, what would the great message be that the Romans placed there ?-)
Of course, thought and perception are necessarily biased by the sense receptors and the brains and physiology each of us is equipped with – and by the history of our experiences, personal and collective.
What we try to do, especially with science, is to use experience to gradually separate signal from noise. And we can do that no better than is allowed by the set of tools we're born with and which we add to, as a result of added experience and understanding.
Because our 'equipment' varies slightly for genetic and other accidental reasons, so will our biases. But the strategies for enhancing S/N should tend to reduce the net effect of bias on THOSE DIFFERENCES. We may never be able to overcome other 'biases' that relate to our finite shared biology and experiences.
4. Although it matters to the essence of the question, let's put aside the intuitive sense that we "really exist" in a way distinguishing us from modal-realist possible worlds. (IMHO, it's not a mere coincidence between the sense of vivid realness in consciousness and the issue of "this is a real world, dammit!") Consider the technical propriety of claiming the world is "all math." That, to me, implies that a reasonable mathematical model of "what happens" can be made. As far as I am concerned, the collapse problem in QM makes that impossible. We don't really know how to take a spread out, superposed wave function and make it pop into some little space or specific outcome. Furthermore, "real randomness" cannot come from math, which is deterministic! (I mean the outcomes themselves, not cheating by talking about the probabilities as a higher abstraction etc.) Same issue for "flowing time" and maybe more.
Some people think they can resolve the perplexity through a very flawed, circular argument that I'm glad looks suspect to Roger Penrose too. Just griping isn't enough, see my post on decoherence at my link. But in any case this is not elegant, smooth mathematics. Many say that renormalization is kind of a scam too. Maybe it's some people's cognitive bias to imagine that the universe must be mathematical, or their cognitive dissonance to fail to accept that the universe really doesn't play along - but the universe really isn't a good "mathematical model." I think that's more important than e.g. how many universes there are.
5. Bee,
Just wanted to point out that the study said nothing about pattern recognition. In fact, from what you stated about the duration of time ("too short to use the part of the brain necessary for rational thought") to make the decision, no pattern recognition was involved or affected by the test: patterns take thought to see.
So, while I agree that pattern recognition is an evolutionary boon, is involved in creativity, and is present in both scientists and "believers", that says nothing about the quality of the patterns being observed. Bad signal-vs.-noise separation would, obviously, lead to bad patterns (GIGO, anyone?), but even good signal-vs.-noise separation could lead to bad patterns.
The study results seem to say that what was affected wasn't the interpreted quality of the signal (which wasn't tested), just whether it *was* a signal or was just noise. The correlation between "believers" and false signal detection might be more related to the GIGO issue rather than an assumed increase in pattern detection ability.
6. "...too short to use the part of the brain necessary for rational thought."
I wonder what that phrase means.
7. From the AWT perspective, modern physics is dual to philosophy. While philosophers cannot see quantitative relations even in cases where their derivation is quite trivial and straightforward, formally thinking physicists often cannot see qualitative relations between phenomena - even in cases where such intuitive understanding would be quite trivial.
Because we are seeing objects as pinpoint particles from sufficient distance in space-time, Aether theory considers the most distant (i.e. "fundamental") reality composed of inertial points, i.e. similar to a dense gas, which is forming foamy density fluctuations. Philosophers tend to see the chaotic portion of reality, where energy spreads via longitudinal waves, whereas physicists are looking for "laws", i.e. the density fluctuations themselves, where energy spreads in an atemporal way via transversal waves. It means physicists tend to see gradients and patterns even in cases when these patterns are of limited scope in space-time, and they tend to extrapolate these patterns outside of their applicability scope - as Bee detected correctly.
Lubos Motl is a particularly good case to demonstrate such bias, because he is a loud and strictly formally thinking person. Bee is a woman and the thinking of women is more holistic & plural, which is the reason why women aren't good at math in general. Nevertheless she's still biased by her profession, too. I don't think any real physicist can detect the bias of his profession exactly, just because (s)he is an immanent part of it.
8. For what it's worth, I saw nothing that I could identify in the CMB.
The study you cite is cute, but as with most psychological studies, it doesn't pay to try to milk the data for more than is actually there. Thinking you detect a signal and being willing to act on a signal are not the same thing, although in this simplistic, no-risk situation, they are made to appear to be. And science isn't just about how many times you say 'ooh!' in response to what you think is a signal. Science is very much about having that 'signal' validated by others using independent means.
I'm really not sure who or what you are trying to jab with this post, other than the poke at ESP.
And I'm seconding Austin with respect to pattern recognition. :)
9. /*..Extreme examples of skeptics in science are proponents of the multiverse..*/
From a local perspective of the CMB, the Universe appears like a fractal foam of density fluctuations, where positive curvature is nearly balanced by the negative one. The energy/information is spreading through this foam in circles or loops similar to a Möbius strip due to the dispersion, and a subtle portion of every transversal wave is returning back to the observer in the form of subtle gravitational, i.e. longitudinal, waves. We should realize there is absolutely no metaphysics in such a perspective, as it's all just a consequence of emergent geometry.
But this dispersion results in various supersymmetry phenomena, where strictly formally thinking people are often adhering to vague concepts and vice versa. For example, many philosophers are obsessed by searching for a universal hidden law of Nature, or simply God, which drives everything. Whereas many formally thinking people often propose the multiverse concept. We can find many examples of supersymmetry in the behavior of dogmatic people, as they often take opinions which are in direct contradiction to their behavior. We often talk about inconsistency in thinking in this connection, but it's just a manifestation of the dual nature of information spreading inside random systems.
10. Supersymmetry in thinking could be perceived as a sort of mental correction of biased perceiving of reality, although in an unconscious, i.e. intuitive, way. But there is a dual result of dispersion, which leads into mental singularities, i.e. black holes in causal space-time. Strictly formally thinking people often tend to follow not just vague and inconsistent opinions; they are often of "too consistent" opinions, which leads them into dogmatic, self-confirmatory thinking. The picture of energy spreading through a metamaterial foam illustrates this duality in thinking well: a portion of energy always gets dispersed into the neighborhood, another portion of energy always ends in a singularity.
Unbiasedly thinking people never get either into schematic, fundamentalistic thinking or into apparently logically inconsistent opinions which contradict their behavior. Their way of thinking is atemporal, which means it follows the "photon sphere" of causal space-time.
From this perspective, people dedicated deeply to their ideas, like Hitler or Lenin, weren't evil by their nature; they were just "too consequential" in their thinking about a "socially righteous" society. The most dangerous people aren't opportunists, but blindly thinking fanatics. The purpose of such a rationalization isn't to excuse their behavior - but to understand its emergence and to avoid it better in the future. Their neural wave packets spread preferably in transversal waves, which often makes them ingenious in logical, consequential ways of thinking. But at the moment when the energy density of society goes down during an economic or social crisis, society behaves like a boson condensate or vacuum, where longitudinal waves are weak - and such schematically thinking fanatics can become quite influential.
11. /*..what is natural from the numbers to what is a natural probability distribution..*/
This is a good point, but in AWT the most natural is the probability distribution in an ideal dense Boltzmann gas. I don't know how such a probability appears and whether it could be replaced by the Boltzmann distribution - but it could be simulated by particle collisions (i.e. causal events in space-time) inside a very dense gas, which makes it predictable and testable.
12. /*.. the effectiveness of the human brain to describe Nature might be unreasonable (or at least unexplained..*/
It's because it's a product of long-term adaptation: the universe is a fractal foam, so the human brain maintains a fractal foam of solitons to predict its behavior as well as possible. Therefore both the character and the wavelength of brain waves correspond to the CMB wavelength (or the diameter of the black hole model of the observable Universe).
From the perspective of AWT or a Boltzmann brain, the Universe appears like random clouds or Perlin noise. A very subtle portion of these fluctuations would interact with the rest of the noise in an atemporal way, i.e. preferably via transversal waves. This makes the anthropic principle a tautology: deep sea sharks are so perfectly adapted to the bottom of the oceans from an extrinsic perspective that they could perceive their environment as perfectly adapted to sharks from the intrinsic perspective of these sharks.
These two perspectives are virtually indistinguishable from each other from a sufficiently general perspective. In the CMB noise we can see the Universe both from inside via microwave photons and from outside via gravitational waves or gravitons. We can talk about black hole geometry in this connection. The effectiveness of the human brain to describe Nature might be unreasonable (or at least unexplained) - but basically it's just a consequence of energy spreading in a chaotic particle environment, which has its analogies even at the water surface.
13. The reason why contemporary physics cannot get such trivial connections is its adherence to a strictly causal, i.e. intrinsic, perspective. Its blind refusal of the Aether concept is rather a consequence than the reason of this biased stance.
We know mainstream physics has developed into the duality of general relativity and quantum mechanics, but its general way of thinking still remains strictly causal, i.e. relativistic by its very nature. Its adherence to formal models just deepens such bias (many things which cannot be derived can still be simulated by particle models, for example).
For this reason, physicists cannot imagine things from the (slightly) more general extrinsic perspective, due to their adherence to (misunderstood) Popperian methodology, because the extrinsic perspective is unavailable for experimental validation by its very definition - so it's virtually unfalsifiable from this perspective. We cannot travel outside of our Universe to see how it appears - which makes it impossible for physicists to think about it from a more general perspective.
14. Low Math, Meekly Interacting
Of course we're prone to bias.
That's why science works better than faith or philosophy: Nature doesn't care what we want.
I don't think bias is bad per se, though. It's difficult to make progress without a preconceived notion of what the goal might be. Even if that notion is completely wrong, at least picking an angle of attack and following it will eventually lead one to recognize their error and readjust, hopefully. Without some bias, we flail around at random.
It's when we can't temper our biases with observation and experiment that science really runs into trouble.
Dopamine is implicated in motivation, drive, the reward mechanism we inherited from our hunter-gatherer ancestors. It's good to love the chase; it keeps us fed when we're hungry, even if we can't see the food yet. Mice deprived of dopamine in certain brain regions literally starve to death for want of any desire to get up and eat. And no genius accomplishes anything without drive. So let there be bias. But let there be evidence, too, and a hunger to find it.
15. Hi Bee,
“This post is supposed to make you think about what you think about.”
Well, gauging from the responses thus far, all it's managed is to have many remind others how they are supposed to think, rather than give reasons why. To me that simply serves to demonstrate that there are more people who are convinced the world should be as they think it should be, as opposed to those concerned with how to best learn to discover the way it presents itself as being.
So these wonderings about how one is best able to judge signal from noise are just the modern way of asking how one is able to find what is truth as opposed to what are merely the shadows. That would have the sceptics on dopamine be like the freed prisoner when first returned to the darkness of Plato's cave and asked again to measure the shadows, while the believers on dopamine would be like that same prisoner when first freed to the upper world.
So what then would Plato have said is the best way to judge signal from noise? To do this one has to introspect in relation to the world before one can excogitate about it, rather than consider only what one can imagine the world necessarily must be, for that is only a projection of self and thus merely a shadow. So all the talk of the effect of observation on reality, or of our world being the way it is so as to accommodate our existence, seems to be just what those prisoners in Plato's cave must have thought, and for the same reason. So I apologise if this seems nothing more than philosophy, yet is that not what we're asked to consider here, as what constitutes good natural philosophy?
-Plato- Allegory of the Cave
16. It's very hard to guess what "bias" is supposed to mean in these contexts. Our brains like to keep things simple, to find economical descriptions of reality. With the help of math, though, those descriptions become florid indeed. Whatever the biases of the human brain, we know that (some) humans are damn good at sniffing out the laws of nature, because they have found so many of them.
Did our prejudices about space and time retard relativity, or our prejudices about causality retard quantum mechanics? Maybe a little, but not for long. Neither could plausibly have been discovered 70 years earlier than they were.
Engineers are very familiar with the problem of detecting a signal in noise. The trick is to steer an optimal route between missed signals and false alarms. Your experiment suggests that dopamine moves the needle in favor of higher tolerance for false alarms than missed signals.
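To make that tradeoff concrete, here is a minimal sketch (all numbers invented for illustration) of a threshold detector for a constant signal in Gaussian noise; pushing the threshold up buys fewer false alarms at the price of more misses:

```python
# Threshold detector for a constant signal in Gaussian noise: raising the
# threshold trades Type I errors (false alarms) for Type II errors (misses).
import random

random.seed(0)
SIGNAL, SIGMA, N = 1.0, 1.0, 100_000

noise_only   = [random.gauss(0.0, SIGMA) for _ in range(N)]
signal_noise = [SIGNAL + random.gauss(0.0, SIGMA) for _ in range(N)]

for threshold in (0.0, 0.5, 1.0, 1.5):
    false_alarm = sum(x > threshold for x in noise_only) / N
    miss        = sum(x <= threshold for x in signal_noise) / N
    print(f"threshold={threshold:.1f}  false alarm={false_alarm:.3f}  miss={miss:.3f}")
```

On this reading, dopamine effectively lowers the threshold.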
17. Testable predictions and experimental testing are the only known way to verify which patterns/ideas are useful and which are "robust" and "compelling" but not useful in understanding nature.
One in a million can reliably use intuition as a guide in science.
18. CIP: It means 140 msec. The paper indeed didn't say why 140 msec, but I guess the reason is roughly what I wrote. If you have time to actually "read" rather than "recognize" the word, you'd just be testing for illiteracy. Best,
19. This is why I read this blog. Happy New Year, Bee!
20. Austin, Anonymous: With "pattern recognition" I was simply referring to finding the face/word in random noise. You seem to refer to pattern recognition as patterns in a time series instead; sorry, I should have been clearer on that. However, you might find the introduction of this paper interesting, which more generally is about the issue of mistakenly assigning meaning to the meaningless, respectively causal connections where there are none. It's very readable. This paper (it seems to be an introduction to a special issue) also mentions the following
"The meaningfulness of a coincidence is in the brain of the beholder, and while ‘‘meaningless coincidences’’ do not invite explanatory elaborations, those considered meaningful have often lured intelligent people into a search for underlying rules and laws (Kammerer, 1919, for a case study)."
Seems like there hasn't been much research on that though. Best,
21. Dear Arun:
I wasn't so much thinking about particle physics (except possibly for the 1.5 sigma detections) but more about the attempt to go beyond that. Best,
22. Hi Len,
I agree with what you say. However, it has been shown in other contexts that knowing about a bias can serve as a corrective instance. I.e., just telling people to be rational has the effect of them indeed being more rational. Best,
23. Neil: There's a whole field of mathematics, called stochastics, dedicated to randomness. It deals with variables that have no certain value; that's the whole point. I thus don't know in which way you think "math is deterministic" (deterministic is a statement about a time evolution). In any case, I believe Tegmark favors the many worlds interpretation, so no collapse. Best,
24. /* which way you think "math is deterministic"..*/
Math is atemporal, which basically means what you get out is always what you put in - and the result of a derivation doesn't change with time. Which is good for theorists - but it makes math an unrealistic representation of dynamical reality.
25. Zephir: That a derivation is atemporal does not prevent maths from describing something as a function of a parameter, respectively as a function of coordinates on a manifold. Best,
I know, but this function is still fixed in time. Instead of this, our reality is closer to a dynamic particle simulation.
We should listen to great men of fictional history and their moms.
27. Zephir: That a function (rather than its values) is "fixed in time" is an ill-defined statement. The function is a map from one space to another space. To speak of something like constancy (being "fixed") with a parameter you first need to explain what you mean by that. Best,
28. The only way you can deal with bias is to find a good reason for every assertion you make and to provide a consistent, well defined theoretical explanation based on the evidence and on the accumulated knowledge in your area. That's the best thing you can do, I guess, and your assertion will be debated. The diversity and pluralism of educated opinions is the best chance we have to filter out any bias. The fact that you've raised the question of bias with your post is living proof of that.
29. Hi Bee,
Just as a straightforward question from a layperson to a professional researcher in respect to what underlies this post: do you consider physics to be turning ever closer to becoming the study of natural phenomena by those influenced primarily by their beliefs, rather than by their reason as grounded in doubt? As a follow up question, if you consider this to be true, what measures would you find need to be taken to correct this, so as to have physics better serve its intended purpose as it relates to discovering how the world works as it does?
30. Giotis: Yes, that's why I've raised the question. One should however distinguish between cognitive and social bias. Diversity and pluralism of opinions might work well to counteract social bias, but to understand and address cognitive bias one also needs to know better what that bias might look like. Plurality might not do. Best,
31. Hi Phil,
Here as in most aspects of life it's a matter of balance. Neither doubt nor belief alone will do. I don't know if there's a trend towards more belief today than at other times in the history of science, and I wouldn't know how to quantify that anyway. What I do see however is a certain sloppiness in argumentation, possibly based on the last century's successes, and a widespread self-confidence that one "knows" (rather than successfully explains) which I find very unhealthy. I personally keep it with Socrates: "The only real wisdom is knowing you know nothing." This is why more often than not my writing comes in the form of a question rather than an answer. Not sure that answers your question, but as for what I think should be done: keep asking. Best,
32. Hi Phil,
Regarding your earlier comment, yes, one could say some introspection every now and then could not harm. Maybe I'm just nostalgic, but science has had a long tradition of careful thought, discussion and argumentation that I feel today is very insufficiently communicated and lived. Best,
33. This comment has been removed by the author.
34. Hi Bee,
Well how could I argue with you pointing to Socrates for inspiration, as his is the seed of this aspect of doubt as it relates to science? The only thing I would add is that Plato only expanded on it to remind us we are all prisoners and are better off being constantly reminded that we are; which of course is what you propose as the only remedy for bias. So would you not agree that the best sages of science usually are the ones that hold fast to this vision, and that how they came to their conclusions is perhaps the better lesson, rather than what they actually have us come to know?
“But hitherto I have not been able to discover the cause of those properties of gravity from phænomena, and I frame no hypotheses; for whatever is not deduced from the phænomena is to be called an hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phænomena, and afterwards rendered general by induction. Thus it was that the impenetrability, the mobility, and the impulsive force of bodies, and the laws of motion and of gravitation, were discovered. And to us it is enough that gravity does really exist, and act according to the laws which we have explained, and abundantly serves to account for all the motions of the celestial bodies, and of our sea.”
-Isaac Newton- Principia Mathematica
Oh yes, this reminds me how surprised I was that Stefan did not, a few days past, remind us with a post of the birthday of this important sage of science :-)
35. Hi Phil,
Well, reporting on dead scientists' birthdays gets somewhat dull after a while. As far as I am concerned, what makes a good scientist and what doesn't is whether his or her work is successful. This might to some extent be a matter of luck or being at the right place at the right time. But there are certainly things one can do to enhance the likelihood of success, a good education ahead of all. What other traits are useful to success depends on the research you're conducting, so your question is far too general for an all-encompassing reply. We previously discussed the four stages of science that Shneider suggested, and while I have some reservations on the details of his paper I think he's making a good point there. The trait you mention, and what I was also concerned with, I'd think is instrumental for what Shneider calls 1st stage science. Best,
36. Aye, yai, yai. Once again Bee, you have been singled out for criticism at The Reference Frame, in particular this blarticle. Click here for the review in question by Lubos.
If it's not too much trouble, a review by you of Lubos' review would be appreciated. Based on our previous discussion it will not be published there, or more to the point you will not attempt to do so based on previous experience with TRF, therefore we humbly beseech thee to respond here under the reply section of this very blarticle that inspired Lubos to generate so many, many, very, very many words. (And for an added bonus, he trashes Sean Carroll's new book as well.)
Thanks in advance.
37. Okay, what about Ulam's Spiral? Or what about Pascal's triangle?
Have you noticed any patterns?
This goes to the question then of what is invented versus what is discovered?
As to WMAP, don't you see this? :)
38. Yes, but the fact that you've raised the question of possible bias proves that humans (due to the pluralism of opinions) have the capability to take the factor of cognitive bias under consideration and maybe even attempt to take alternative roads because of it.
You are part of the human race, aren't you? So this proves my point :-)
39. Steven: Lubos' "criticism" is as usual a big joke. It consists of claiming I said things I didn't say and then making fun of them. It's terribly dumb and in its repetition also unoriginal. Just some examples:
- I meanwhile explicitly stated two times that I do not think arguments or naturalness "have no room in physics" as Lubos claims. He is either indeed unable to grasp the simplest sentences I write or he pretends to be. In the above post I wrote "I do think naturalness is a useful guide." How can one possibly misunderstand this sentence if one isn't illiterate or braindead?
- Lubos' summary of my summary of Brugger's paper is extremely vague and misleading. E.g. he writes "Skeptics converged closer to believers when they were "treated" by levodopa," but one doesn't really know what converged towards what. As I said, as far as the bias is concerned they both converge towards the mean. This also means they converged towards each other, but that isn't the same statement.
- Lubos says that "The biases are the reasons why the people are overly believing or why they excessively deny what can be seen. Sabine Hossenfelder doesn't like this obvious explanation - that the author of the paper has offered, too." First, in fact, the authors were very accurate in their statements. What their research has shown is a correlation, not a causation. Second, I certainly haven't "denied" this possible explanation. That this is not only a correlation but also a causation is exactly why I have asked whether physics is cognitively biased, so what's his point?
And so on and so forth. It is really too tiring and entirely fruitless to comment on all his mistakes. Note also that he had nothing to say to my criticism of his earlier article. There was a time when I was thinking I should tell him when he makes a mistake, but I had to notice that he is not even remotely interested in having a constructive exchange. He simply doesn't like me and the essence of his writing is to invent reasons why I'm dumb and hope others are stupid enough to believe him. It's a behavior not appropriate for a decent scientist. Best,
40. Hi Giotis,
Yes, sure, I agree that we should be able to address and understand cognitive bias in science and that this starts with awareness that is easier to be found in a body that is pluralistic. What I was saying is that relying on plurality might bring up the question but not be the solution. (Much like brainstorming might bring up ideas but not their realization).
Btw: The package is on the way. Please send us a short note when it arrives just so we know it didn't get lost.
41. Typo: Should have been "arguments of naturalness" not "arguments or naturalness"
42. Bee, what I mean by "deterministic" math is that the math process can't actually *produce* the random results. Just saying "this variable has no specific value" etc. is "cheating" (in the sense philosophers use it), because you have to "put in the values by hand." Such math either produces "results" which are the probability distributions - not actual sequences of results - or, in actual application, the user "cheats" by using some outside source of randomness or pseudo-randomness like digits of roots. (Such sequences are themselves of course wholly determined by the process - they just have the right mix that is not predictable to anyone not knowing where they came from. In that sense, they merely appear "random.") I think most philosophers of the foundations of mathematics would agree with me. As for MWI, I still ask: why doesn't the initial beam splitter of an MZI split the wave into two worlds, thus preventing the interference at all?
43. Hi Bee,
I actually read a book by Julian Baggini called 'A Very Short Introduction to Atheism.' Baggini writes about evidence vs. the supernatural, and about naturalism. Where do we find evidence? We all know: only in an experiment. But as you said, it must be a good experiment. That means, as you too said, it must be based on a 'good' initialization.
For example, a detector must be found for dark matter and dark energy. The correct detector needs to be found. What is the correct detector in the case of dark matter or dark energy?
Best Kay
44. Neil: I don't know what you mean by "math process producing a result." Stochastic processes produce results. The results just are probabilistic, which is the whole point. There is no "result" beyond that. I'm not "putting in values by hand," the only information there is, is the distribution of the values. You are stuck on the quite common idea that the result actually "has" a value, and then you don't see how math gives you this value. Best,
45. Sure Bee, I'll do that. Thanks.
46. The CMB is a remarkably coincident map of the Earth: Europe, Asia, and Africa to the right; the Americas to the left. Is physical theory bent by pareidolia?
Physics is obsessed with symmetries: S(U(2)×U(3)) or U(1)×SU(2)×SU(3) for the Standard Model, then SUSY and SUGRA. String theory is born of fundamental symmetries, then whacked to lower symmetries toward observables (never quite arriving).
Umpolung! Remove symmetries and test (not talk) physics for flaws. Chemistry (pharma!) is explicit: Does the vacuum differentially interact with massed chirality?
pdf pp. 25-27, calculation of the chiral case
1) Two solid single crystal spheres of quartz in enantiomorphic space groups P3(1)21 and P3(2)21 are plated with superconductor, cooled, and Meissner effect levitated in hard vacuum behind the usual shieldings. If they spontaneously and reproducibly spin in opposite directions, there's your vacuum background.
2) Teleparallel gravitation in Weitzenböck space specifically allows Equivalence Principle violation by opposite parity mass distributions, falsifying metric gravitation in pseudo-Riemannian space. A parity Eötvös experiment is trivial to perform,
again using single crystals of space groups P3(1)21 and P3(2)21 quartz. Glycine gamma-polymorph in enantiomorphic space groups P3(1) and P3(2) is a lower symmetry case and is charge-polarized, with 1.6 times the atom packing density of quartz.
Theoretic grandstanding has produced nothing tangible after 25 years of celebrated pontification. Gravitation theories are geometries arising from postulated "beautiful" symmetries. They are vulnerable to geometric falsification (e.g., Euclid vs. elliptic and hyperbolic triangles). Somebody should look.
47. Bee, what I mean is, the mathematical machinery can't produce the actual random results directly. That means a sequence like 4,1,3,3,0,6, ... or something else instead. It just treats the randomness as an abstraction. If you can find a way for the *operation* to produce an actual sequence of random numbers or etc., please explain and show the results. REM the same operation must produce different sequences other times it is "run" or it isn't really random. (I don't think you can, since any known "operation" will produce the same result each time - again, if you don't "cheat" by pulling results from outside. Hence, taking sqrt 2 provides a specific sequence, and it will every time you do it. Even if you said it can be either negative or positive if you consider x^2 = 2, *you* are still going to decide which to show each time. Otherwise, it is just the set of solutions. A random variable represents a class of outputs - that is not the same as having a mechanism to produce varying results each time.) Don't you think, if a math process could do that, chip manufacturers would use it instead of either seeded pseudo-random generators or an actual physical process?
If you are thinking in terms of practical use, all I can say is: I mean, the logical definition that a worker in FOM would use, and I think they agree with me with few exceptions. Please think it through carefully, tx.
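A minimal sketch of the point about seeded generators (standard library only, arbitrary seed): the procedure is ordinary deterministic mathematics, so the same seed reproduces the very same "random" digits on every run.

```python
# A pseudo-random generator is a deterministic mathematical procedure:
# fix the seed and the "random" sequence is fixed too. Variation only
# enters from outside the math (a clock, a physical noise source, ...).
import random

def draw(seed, n=6):
    rng = random.Random(seed)  # fixed seed -> fixed internal state
    return [rng.randint(0, 9) for _ in range(n)]

print(draw(42))  # some list of digits
print(draw(42))  # the very same digits again, run after run
```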
48. Neil: I understand what you're saying, but you don't understand what I'm saying. You are implicitly assuming reality "is" something more than a process that is (to some extent) "really" probabilistic. You're thinking it instead "really is" the sequence, and the sequence is not the random variable. That is your point, but it is a circular argument: you think reality can't be probabilistic because a probability distribution is not real. Define "not real," see the problem? Best,
49. Bee, Giotis:
Specifically with regard to QFT - how well-defined does one have to be? Are we well-defined enough?
50. One can never be well-defined enough. The pitfalls in physics as in economics and biology are the hidden assumptions people forget about because they are either "obvious" (cognitive bias) or "everybody makes them" (social bias). Best,
51. Bee, I mean very carefully what I said about the specific point I made: that *math* can't produce such "really random" results, but only describe them in the abstract. But if we were talking at cross purposes, then we could both be right about our separate points. As for yours: I am assuming nothing about the universe or what it has to be like. But if we appreciate the first point above, and then look at the universe: we find "random" results supposedly coming out. The universe does produce actual sequences and events, not (unless you dodge via MWI) a mere abstraction of a space of probable outcomes.
If actual outcomes, sample sequences which are the true 'data' from experiments, are genuinely "random" in the manner I described, then:
(1) The universe produces "random" sequences upon demand.
(2) They can't - as particulars - be produced by a mathematical process.
(3) The universe is therefore not "just math", and the MUH is invalid.
That is not a circular argument. It is a valid course of deduction from a starting assumption (about math, supported by the consensus of the FOM community AFAIK), which is compared to the apparent behavior of the universe, with a disconnect deduced thereby. As for what "real" means, who knows for sure? But we do know how math works, we know how nature works, and they cannot IMHO be entirely the same.
52. Neil: But I said in the very beginning it's a many worlds picture. The MUH doesn't only mean all that you think is "real" is mathematics, but that all that is math is real. Best,
53. Neil: You're right, I didn't say that, I just thought I said it. Sorry for the misunderstanding. Best,
54. Well, it depends. The simplest example is the divergence of the vacuum energy. You just subtract it in QFT, saying that only energy differences matter if gravity is not considered and QFT is not a complete theory anyway. Are you satisfied with that explanation? Some people are not very happy with all these divergences and their handling with the renormalization procedure. Also, perturbative QFT misses a number of phenomena. So somebody could say that it is not well defined, or that it is well defined only in a certain regime under certain preconditions.
The main issue though is that you'll always find people who challenge the existing knowledge, ask questions and doubt the truth of the given explanations if they are not well defined (as Bee does in her post). That's why I talked about pluralism, diversity of opinions, open dialogue and open/free access to knowledge, as a remedy even for the cognitive bias. And I'm not talking about physics or science only but generally.
55. Neil: Maybe what I said becomes clearer this way. Your problem is that stochastics doesn't offer a "process to produce" a sequence. Since the sequence is what you believe is real, reality can't be fully described by math. What I'm asking is: how do you know that only the sequence is "real" and not all the possible sequences the random variable can produce? I'm saying it's circular because you're explaining the math isn't "real" because "reality" isn't the math, and I'm asking how you know the latter. (Besides, just to be clear on this, I am neither a fan of MUH nor MWI.) Best,
56. Neil, have you read some of Gregory Chaitin's work? He believes randomness lies at the heart of mathematics. Good article here: "My Omega number possesses infinite complexity and therefore cannot be explained by any finite mathematical theory. This shows that in a sense there is randomness in pure mathematics."
I think, as a clear example, the distribution of prime numbers appears to be fundamentally random - it cannot be predicted from any algorithm. But the positions are clearly defined in mathematics. So that's fundamental randomness right at the heart of maths.
57. While I think it is very important to examine the question of bias, I also think it is a very sticky wicket. Observing signals among noise is an extremely individualistic thing. It is a fact that some gifted individuals can pick signals out of the noise but can't explain how they do it. Or they explain it and it isn't rational to the rest of us.
For instance, the brains of many idiot savants (and also some normally functioning individuals, which is much rarer) can calculate numbers in their head using shapes and colors that they visualize. Others can memorize entire phonebooks using similar methods. Visualization cues are often key to these abilities.
To most of us it would seem like very good intuition because most brains don't work like that, but I think that is a mistake. I certainly think there are rare individuals who can do similar things in other fields of study. But scientists are often too biased in their reductionist philosophy to accept it. They assume that because a particular individual's brain doesn't work the way theirs does, any explanation for how the "calculation" was done is that it was just good intuition. That conclusion itself is an overly reductionist conclusion.
58. Whew, what a metaphysical morass. Well, about what MUH people think is true: it is rather clear, they say that all mathematical descriptions are equally "real" in the way our world is, but furthermore there is no other way to be "real" (i.e., modal realism). So there isn't any valid: "hey, we are in a materially real world, but that conceptual alteration where things are a little different 'does not really exist' except as an unactualized abstraction." MT et al. would say there is no distinction between unactualized and actualized (as a "material" world) abstractions. Poor Madonna, was she wrong? But I don't agree anyway. BTW, MUH doesn't really prove MWI unless you can connect all the histories to get a retrodiction of the quantum probabilities.
Bee, Andrew: Now, about math: the stochastic variable represents the set of outcomes. Why do I know only the particular outcome is real? Well, in an actual experiment that's what you get. How can I make that more clear? Of course the math is "real", but so is the outcome in a real universe. There is a mismatch. Please don't go around the issue of which is real. Both are real in their own ways, they just can't be equivalent. You can't construct a mathematical engine to produce such output.
I don't think Chaitin's number can produce *different* results each time one uses the process. Like I said, I want to see such output produced. It is not the consensus to disbelieve in the deterministic nature of math. As for primes, that is a common misunderstanding regarding pseudo-random sequences. The sequence of primes is *fixed*, that is what matters, regardless of what it looks like. If you calculate the primes, you get the *same* sequence each time, OK?! But a quantum process produces one sequence one time, another sequence another time (in "our world" at least, and let them prove any more). Folks, I shouldn't have to slog through all this. Check some texts on the foundations of math, I doubt many disagree with me.
Bee - I can't get email notifications any more.
59. Hi Neil, yes, but that's not the definition of "random" - that it "produces a different number each time". If I produce an algorithm that produces "1", "2", "3" etc. then it is producing "a different number each time", but that is clearly not random.
No, the definition of a random sequence is one which cannot be algorithmically compressed to something simpler (e.g., the sequence 1,2,3 can clearly be compressed down to a much simpler algorithm).
I can assure you, the distribution of the primes (or the decimals of pi, for example) is truly random in that it cannot be further compressed.
Random quantum behaviour would be described by such a truly random sequence in that the behaviour cannot be compressed to a simpler algorithm (i.e., a simpler deterministic algorithm).
Neil: "Folks, I shouldn't have to slog through all this. Check some texts on the foundations of math, I doubt many disagree with me." I actually think most would disagree, Neil. See more on algorithmically random sequence
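As a crude stand-in for that criterion (zlib is only a weak upper bound on algorithmic complexity, but it makes the qualitative point): a patterned sequence compresses drastically, while bytes from the OS entropy source essentially don't.

```python
# Compressibility as a (rough) proxy for algorithmic randomness: zlib
# squeezes a regular sequence to a fraction of its size, but gains
# almost nothing on OS-provided randomness.
import os
import zlib

regular = bytes(i % 10 for i in range(10_000))  # 0,1,...,9 repeating
noisy   = os.urandom(10_000)                    # OS entropy source

for name, data in (("regular", regular), ("noisy", noisy)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.2%} of original size")
```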
60. Bee:
Of course we are not unbiased. The brain does Bayesian inference (whether consciously or not) and Bayesian inference depends in part on a prior estimate of the probability distribution over the possible observed data. This prior distribution unavoidably introduces bias into cognition. Since this prior distribution is encoded in one’s current brain state at the moment one begins to process a newly observed datum, no two people will bring the exact same bias to any given inference. This is as true of low-level perceptual inference of the kind studied by Brugger as it is of high-level abstract inductive inference of the kind that gives rise to scientific theories.
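(As a toy sketch of that point, with invented likelihood values: two observers apply Bayes' theorem to the very same datum but start from different priors, and reach opposite verdicts.)

```python
# Bayes' theorem with two different priors over "signal vs. noise":
# identical data, identical likelihoods, opposite conclusions.
def posterior_signal(prior, l_signal, l_noise):
    evidence = prior * l_signal + (1 - prior) * l_noise
    return prior * l_signal / evidence

L_SIGNAL, L_NOISE = 0.6, 0.4  # same (invented) likelihoods for both observers

for prior in (0.1, 0.7):      # a "skeptic" and a "believer"
    p = posterior_signal(prior, L_SIGNAL, L_NOISE)
    verdict = "signal" if p > 0.5 else "noise"
    print(f"prior={prior:.1f} -> posterior={p:.2f} -> calls it {verdict}")
```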
Equally unavoidably, we are predisposed by the structures of our brains to describe the world in terms of certain archetypical symbols, which you may think of as eigenvectors of the brain state. The structure of each brain is determined by a complex interplay between genetic factors and the entire history of that brain from the moment of conception. Thus, there are bound to be species-wide biases as well as cultural and individual predispositions in the way we describe what we see, the questions we can ask about it, and the answers we are able to accept.
The only remedy for such biases is the scientific method, practiced with complete intellectual honesty and total disregard for accepted doctrine and dogma -- to the extent that this is humanly possible. Unfortunately, in recent times, this process is becoming increasingly hobbled by a number of destructive trends.
Firstly, we have allowed indoctrination to become the primary goal of our education system. Where once it was considered self-evident that the purpose of education is “to change an empty mind into an open one,” educators now claim explicitly that the most important role of education is “to inculcate the right attitude towards society.”
Secondly, the unavoidable imperfections of the peer review process have been co-opted by political special-interest groups as well as the personal fiefdoms and in-groups of influential scientists, so the very process that is supposed to guard against bias is now perpetuating it. This can be seen in every modern science; specific recent examples include psychology, sociology, anthropology, archeology, climatology, physics and mathematics.
Thirdly, widespread misunderstanding of the content of quantum theory has led many to doubt that "objective reality" even exists. This, in turn, is used by so-called "philosophers" of the post-modern persuasion to call the very idea of "rational thought" into question. Well, if objective reality and rational thought are disallowed, then only blind superstition and ideological conformity are left.
Is it any wonder, then, that progress in science (as distinct from technology) is grinding to a halt?
61. Bee, you ask exactly the right question. If I may paraphrase it thus: "What cognitive or social biases have (become embedded in and) impeded Science from developing a truly compelling and comprehensive Quantum Gravity unification cosmology & philosophy?" (say, provisionally, cQGc).
In a soon to be released monograph, 3 such impediments and biases with far ranging theoretical consequences are identified. In appreciation of this and many of your previous blog postings, and since you ask, I feel compelled to answer your question in some detail with this sneak preview of some of the introduction from that monograph, edited only slightly to accommodate the context of this post.
"... however our senses, which can fall victim to optical illusions and other cognitive biases, only generate the rawest form of data for Science which applies to these measurement, rigor and axiomatic philosophical principles to weed out such biases to generate the positivistic consensus reality Science seeks to fully describe and explain. Despite this ideal, a great many scientists themselves ( and their theories) still fall victim to the incorrect cognitive bias that our consensus reality is continuous rather than being discrete and positivistic and there is widespread subscription to the mistaken idea that Science is uncovering reality as it 'really is'. This is to mistake the map for the territory it depicts. In a May 2009 essay for Physics Today David Mermin reminds us of the importance of not falling victim to this mistaken thinking.
This failure in many to respect the positivistic rudder in Science has been with us since the days of the Copenhagen School and the Bohr/Einstein debates, and is the first of 3 major impediments to discovering a cQGc. The deep divide and raging debate (indeed crisis) which philosophically divides the theoretical physics community regarding the invalidity of mistaken notions of Many Worlds, MultiVerses and Anthropic rationalizations is not just about the absence of some sort of Popperian critical tests of such models but rather about their invalidity, which so many fail to accept is based on the blatant violation of intrinsic QM positivism these ideas embody.
.../ cont. in Pt.2
62. ... Part.2
The 2nd impediment has been whimsical or careless nomenclature and/or careless use of language, which has resulted in sloppy philosophizing and the embedding into our inquiries of certain misapprehensions regarding precisely what it is we seek to explain. So, for example, none of the observational evidence in support of the big bang in any way supports the assertion that this was the birth of the Universe; rather, all we can infer is that the big bang was the 'birth' or phase change of SpaceTime, a subset of Universe, from a state of near SpaceTime-lessness to what we observe today. Philosophically, how can the Universe in its totality go from a timeless state of (presumably) perfect stasis (or non-existence) to a timeful state as we observe today? Note how this simple clarification immediately resolves 2 deep questions: creation ex nihilo and "Why is there something rather than nothing?" The latter is a positivistic non-sequitur, as there is no evidence whatsoever that the Universe was ever in a state of non-existence, and Science, being positivistic, need not explain those things which never occur, only those which have or are allowed to occur.
The 3rd impediment has been misuse or runaway abuse of Newton's Hypothetico-Deductive (HD) method where, for example, we begin with, say, an Inflation Conjecture to HD-resolve certain issues, but before very long we have Eternal Inflation and then baby universes popping off everywhere, in abject violation of positivism, not to mention SpaceTime Invariance. Similarly, the HD proposal of a string as the fundamental entity of our consensus reality, to better interpret a dataset formerly known as the scattering matrix, which then becomes String Theory, which then becomes Superstring Theory, which then becomes matrix theory, which then becomes M-Theory, perfectly forgets that searching for a fundamental object of our consensus reality is like looking for the most fundamental word in the dictionary. Our consensus reality is intrinsically relational, and this fact is the lesson we should take from Goedel's Incompleteness Theorem (GI). So, the mistake here is to take or overly rely on Conjectures as established results and build further HD conjectures on top as also established. In passing, I would further observe that a string can only support a vibratory state (or wave mechanics) by remembering that such a string must have tension, a property which seems to me conceptually lost when one connects the ends of the string to inadmissibly conjure up the first loop to force-fit the consequences of one's initial, flawed, HD conjecture. The invocation of convenient quantum fluctuations to force-fit Inflation in the face of CMB anisotropies is yet another example of such erroneous reasoning.
Science is the formal system which can never succeed, in principle, in bootstrapping itself to a generally covariant absolute statement of Truth like "This is the Universe as it really is." (The URL under my name for this comment will take you to a talk which strongly suggests that even Stephen Hawking subscribes to the concept of a reality as it 'really is'.)
... / cont Pt.3
63. ... part.3
So, even a derivation of a cQGc from first principles, which would be a proof in any other context, remains undecidably True, while at the same time we will know it to be provisionally true (lower case t) because of its comprehensiveness and the absence of a counter-example. GI is actually the only legitimate anthropic principle we may recognize in Science, and it arises from the fact that all our formal systems (languages, Science, etc.) are arbitrary conventional human inventions which can only self-consistently describe the consensus reality we positivistically observe and are able to measure or infer, consistent with our nature as an inextricable subsystem of that consensus reality. My personal mnemonic for GI is "More truth than Proof."
So Bee, I hope this goes some way toward answering your question, and while I feel sure none of it comes as any surprise to you (though other aspects of the monograph might when you someday read it), I hope this response helps and accurately clarifies some things for your readership in answer to your question.
Thanks again,
64. Bee, Neil, and Andrew:
Regarding your ongoing exchange, I would like to emphasise that there is no point in trying to distinguish between “truly random” and “pseudo-random.” Any process which takes place in finite time can only depend on a finite amount of information, and it takes infinite information to distinguish between “truly random” and “pseudo-random.” Chaitin’s criterion regarding where to stop and declare that we are “close enough to truly random for practical purposes” is as good as any other -- perhaps better than most.
In addition, probability distributions merely enumerate possibilities. Therefore, the distributions that follow from our mathematical models apply only to the models, and not to the real world. We may, for example, make an idealized model of coin tossing, which is governed by a binomial distribution. But that distribution only enumerates the possibilities inherent in the idealized model. In the real world, the odds are not 50/50; the dynamics depends very sensitively on initial conditions, and there is no limit to the number of factors we may choose to take into account or neglect as extraneous. Thus our choice of a probability distribution describes our state of knowledge about coin tossing.
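(A small illustration of that distinction, with an invented bias value: the binomial 50/50 model describes the idealized coin, while a simulated "real" coin quietly drifts away from it.)

```python
# The 50/50 binomial model lives in the model, not in the world: a "real"
# coin with a slight (here invented) bias departs from the idealization.
import random

random.seed(1)
N, REAL_P = 10_000, 0.52  # hypothetical real-world bias toward heads

heads = sum(random.random() < REAL_P for _ in range(N))
print(f"idealized model expects ~{N // 2} heads; the 'real' coin gave {heads}")
```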
In respect of phenomena in the real world, we may choose to treat them analytically as though they are governed by some particular probability distribution. But in so doing, it would be a mistake to ascribe objective reality to that distribution. The “true distribution” is as unknowable as the “true value” of a measurement. The best we can do is to approximate these things with varying degrees of accuracy. Hopefully, our accuracy improves as we learn more about the real world.
Of course, it is also a mistake to claim that these values and distributions don’t exist, just because they are unknowable. The very fact that these things can be repeatably approximated shows that they are indeed, objectively real. Of that we can be certain, despite being equally certain that we can never know them with perfect precision.
65. Canadian_Phil:
I would remind you that reality needs no help from you or your putative “consensus” to be what it is. If our consensus is not converging on an ever-more-accurate approximation of an objective reality that exists independent of any of us, then we are wasting our time with solipsistic nonsense.
66. Well, things are made more difficult by the various senses of "random" that are used in various contexts. Yes, there is such a thing as a 'random' sequence per se. BTW, it should have been clear that I meant a process that produces a different sequence of numbers each time it is run. In other words, its *action* is random. A mathematical process cannot do that. So even if there are other ways to be "random", my essential point is correct: the universe cannot be "made from math" because math is deterministic. That is the key point, "deterministic", more than the precise definition of "randomness", which also gets hung up on pseudo-randomness etc. The digits of pi may be "random" in the sense of appearances, but their order is determined by the definition, and it will be the same time after time. That makes those digits "predictable." That is equivalent to the physical point: determinism v. (claimed) inherent unpredictability. I also still maintain that the most cogent thinkers in foundations of mathematics agree with me in the context I make.
67. (REM also that, in the sense used to claim that certain phenomena are "truly random", this is meant to imply that there is nothing we can know that would show us reliably what would happen next. Sure, if I just look at a sequence of digits it may "appear" random and pass various tests, as the definitions admit. But once I found out that they were generated by, e.g., the deterministic mathematics behind deriving a root, then I would know what was coming next, etc.
Andrew - since you are interested in QM issues, please take a look at my own blog post on decoherence. A bit clunky now, but it explains how we could experimentally recover information that conventional assumptions would say was lost.)
68. /*...Al Gore is the antichrist...*/
LOL, how did you come into it?
69. Regarding arguments about infinite complexity, I'd like to make a small correction. The information content of pi can be contained in a finite algorithm so it contains only a finite amount of information. I think there are similar algorithms for generating prime numbers as well?
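(To make that concrete, here is a minimal sketch in Python; Machin's formula is an arbitrary convenient choice, not anything from the thread. A program of a dozen lines, plus the requested digit count, pins down pi's entire digit string, so the digits carry only a finite amount of information:)

    def pi_digits(n):
        """pi to n decimal places via Machin's formula,
        pi/4 = 4*arctan(1/5) - arctan(1/239), in fixed-point integer arithmetic."""
        prec = n + 10                     # extra guard digits
        one = 10 ** prec

        def arctan_inv(x):                # arctan(1/x) * one, by the Gregory series
            total, term, k, sign = 0, one // x, 1, 1
            while term:
                total += sign * (term // k)
                term //= x * x
                k += 2
                sign = -sign
            return total

        pi = 4 * (4 * arctan_inv(5) - arctan_inv(239))
        return pi // 10 ** 10             # drop the guard digits

    print(pi_digits(30))  # 3141592653589793238462643383279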
70. I see now that 'anonymous' already made this point much better than I did!
71. If someone were to make an unfortunate comment like:
"What I was aiming at is that unlike all other systems the universe is perfectly isolated."
Someone else might respond:
"What “universe” is she talking about? The local observable universe? The entire Universe [rather poorly sampled!]?
We have so little hard evidence in cosmology that it is ill-advised for us to make such sweeping and absolute statements about something we know very little about.
Then again, cosmologists and theoretical physicists are: “often wrong, but never in doubt”.
Blind leading the blindly credulous into benightedness?
72. Ulrich: I was using the word "universe" in the old-fashioned sense to mean "all there is." I have expressed several times (most recently in a comment at CV) that the word "multiverse" is meaningless, since the universe is already everything. But words that become common use are not always good choices. Besides this, I would recommend that instead of posting as "Anonymous" you check the box Name/URL below the comment window and enter a name. You don't have to enter a URL. That's because our comment sections get easily confusing if there are several anonymouses. Best,
73. Zephir: Read it on a blog. You find plenty of numerology regarding Al Gore's evilness if you Google "Al Gore Antichrist 666."
74. Neil:
You cannot. That's why it's circular. It doesn't matter whether you call it "real" or "actual," you have some idea of what it is that you cannot define. (This is not your fault, it's not possible.) Let me repeat what I said earlier. In which sense are the other outcomes "not real?" How do you know that?
It occurred to me yesterday that this is a way too complicated route to see why MUH is not "invalid" for the reasons you mention. (What I wrote in my post is not that MUH cannot be true, but that Tegmark's claim that it can be derived rather than assumed is false. It's such sloppiness in argumentation that I was complaining to Phil about.)
Forget about your "sequence" with which you have a problem, and take your own reality at a time t_0. Let's call this Neil(t_0). I leave it to you whether you want Neil just to be your brain or include your body, clothes, girlfriend, doesn't matter. Point is, MUH says you're a mathematical structure and all mathematical structures are equally real somewhere in the level 4 multiverse (or whatever he calls it). Now note that by assuming this you have assumed away any problem of the sort you're mentioning. You do not need to produce your past or future and some sensible sequence, all you really need is Neil(t_0) who BELIEVES he has a past. And that you have already by assumption. (Come to think of it, somehow this smells Barbourian to me.) This of course doesn't explain anything, which is exactly why I find it pointless. Best,
75. Anonymous (6:54 PM, January 07, 2010),
First the same recommendation to you as to Ulrich: Please choose Name/URL below the comment window and enter a name (or at least a number), because the comment sections get easily confusing with various anonymouses. (If I could I would disable anonymous comments, but I can only do so when I also disable the pseudonymous ones, thus I unfortunately keep repeating this over and over again.)
I agree with you on the first and second point. I don't know what to make of the third, and given that I've never heard of it despite having spent more than a decade in fundamental research, I doubt that there are many of my colleagues who believe "rational thought is disallowed," and thus there cannot be much to the problem you think there is. Best,
76. Hi Canadian Phil,
“There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.”
-Niels Bohr
After reading through your long treatise, it appears to boil down to having the above statement of Bohr generalized to all of physics. I would say that your thinking, and that of Mermin, echoes the same sentiment, which I would contend is more indicative of what the problem is in modern physics than of what should be considered a remedy. So if I were to pick someone who stood for the counter of your position, it would be J.S. Bell, as he so often reminded us that much of what we consider as truth is not forced upon us by what the experiments tell us, but rather comes directly from deliberate theoretical choice. The type of theoretical choices he was referencing being the ones formed of exactly the sort of scientific ambiguity and sloppiness you support.
“Even now the de Broglie-Bohm picture is generally ignored, and not taught to students. I think this is a great loss. For that picture exercises the mind in a very salutary way.”
-J.S. Bell, introductory remarks at the Naples-Amalfi meeting, May 7, 1984.
“Why is the pilot wave picture ignored in textbooks? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism are not forced on us by experimental facts, but by deliberate theoretical choice?”
-J.S. Bell-“On the impossible pilot wave”, Foundations of Physics, 12 (1982) pp 989-99.
P.S. I must apologize for my two previous erasures, yet this too was simply to rid my own thoughts of the ills Bell complained about :-)
77. Phil & Phil: We discussed Mermin's pamphlet here, please stick to the topic. Best,
PS: Canadian Phil, I'm afraid the other Phil is also Canadian.
78. Janne: "Regarding arguments about infinite complexity, I'd like to make a small correction. The information content of pi can be contained in a finite algorithm so it contains only a finite amount of information." Yes, you're quite right. I realised after I wrote it but I hoped no one would notice! The decimals of pi are certainly not random as they can be produced by a very simple algorithm.
The distribution of the primes is a different thing altogether, which I believe is genuinely random (i.e., cannot be produced by a simpler algorithm). At least, they are random if someone can prove the Riemann Hypothesis - there's a great article: The Music of the Primes.
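(One caveat, made concrete below with the classic sieve, an illustrative sketch: the primes themselves are emitted by a tiny program, so they are not random in the algorithmic sense; what is presumably meant is the statistical irregularity of where the primes fall, which is where the Riemann Hypothesis enters:)

    def primes_up_to(n):
        """Sieve of Eratosthenes: every prime below n from a few lines of code."""
        is_prime = [True] * n
        is_prime[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                # Cross out every multiple of p starting at p*p.
                is_prime[p*p::p] = [False] * len(is_prime[p*p::p])
        return [i for i, flag in enumerate(is_prime) if flag]

    print(primes_up_to(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]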
Neil, I think your criticism of the MUH is not so much based on randomness at all, but more the idea that ANY mathematical structure is unvarying with respect to time and so cannot represent the universe. However, this isn't a valid criticism of Tegmark's idea as he proposed a block universe mathematical structure in his original paper which would, of course, be unvarying with time but would appear to change with time for any observer inside the universe. Here is an extract from Tegmark's paper: "We need to distinguish between two different ways of viewing the external physical reality: the outside view or bird perspective of a mathematician studying the mathematical structure and the inside view or frog perspective of an observer living in it. A first subtlety in relating the two perspectives involves time. Recall that a mathematical structure is an abstract, immutable entity existing outside of space and time. If history were a movie, the structure would therefore correspond not to a single frame of it but to the entire videotape." So the entire mathematical structure might be fixed and immutable, but to the frog everything still appears to be moving in time. I don't think it's possible to simply criticise Tegmark's work on that basis - he did his job very well. It's a superb paper, really all-encompassing, well worth putting aside a day to read it. But I don't think his conclusion is right (hardly anyone does, it appears).
(Good luck with your work on decoherence, Neil. I was interested a while back but I've had my fill of it).
79. Hi Bee,
As I would say that what Bell was referring to has directly to do with what is asked here, as to whether physics is cognitively biased, I would wonder how that makes my remarks off topic. Perhaps you feel that my contention was meant as support for a particular theory, which if that be the case I can assure you it certainly is not, as I don’t have a particular theory I favour. Actually, all I was asking to be considered is Bell’s contention that vagueness, ambiguity and sloppiness are primarily what stands as the noise which currently prevents physics from being able to discover what nature is, rather than only what we might be able to say about it.
80. Phil: I was just saying that as a precaution: if you want to discuss Mermin's essay, please don't do it here, since Canadian Phil doesn't seem to know we previously discussed it. Best,
81. Hi Bee,
I see your point, as perhaps this post is more meant to ponder the cause(s) of bias rather than what any particular one might be. Still, as in medicine, it is hard to discover the mechanism of a disease without first examining its symptoms. That is, science would have us look to experiment to consider what begs explanation, and then, with the aid of this examination, to find whether any of the explanations offered is correct, which it can do only if it further has us understand the mechanism well enough to predict what this would demand. In the case of medicine this is confirmed when such understanding has rendered a cure that exceeds those sometimes found only as the result of a belief one has, rather than of being able to demonstrate an understanding of how and why.
So I see this whole thing that’s called science as a continuing process to delve ever deeper to discover the underlying mechanisms of the world, rather than have it become something that prevents us from finding them. I’m thus reminded of Newton’s statement that he could offer no explanation of gravity, being only able to predict its actions, and that this should be enough; and yet Einstein was not intimidated into accepting such a limitation and as a result was able to come up with a mechanism which has proven able to let us understand more than Newton thought relevant or useful. So, simply put, as I see it a person of science is not one who at any point is able to accept "just because" as the answer for how or why; for if they do, that forms the greatest bias which prevents its success.
82. Bee, with all due respect you are making the wrong choice about who has the burden of proof about our world and the various unique "outcomes" we observe, v. the idea that there are more of them. Let's say we do an actual quantum experiment (like MZ interferometer with a phase difference) and get sequence 1 0 0 1 1 1 0 1 0 1 0 0 1 1 ... That is an "actual result" that is not AFAWK computable from some particular algorithm. It is not like the digits of pi: they are logically necessary (and hence, deterministically reproducible) consequences of a particular mathematical operation. It is not my job to "prove" or even have the burden of argument, that all other possible sequences of hits from the MZI "exist" somewhere as other than raw abstractions, like "all possible chess games." The burden of proof is on you and anyone who believes in physobabble concepts like MWI. Until that is demonstrated or at least solidly supported, I have the right to claim the upper hand (not "certainty"; but so what) about there being a distinction between "natural" QM process outcomes, and the logically necessary and fixed results of mathematical operations.
83. (I mean it is not my job to prove they aren't there.)
84. It seems such randomness has some order to it? :)
85. Ok, but where do you get the justification for the received wisdom that "the universe is perfectly isolated" in any meaningful physical sense?
Why are not scientists more careful and humble in their intuitive beliefs?
86. Bee:
Thanks for explaining how to post under a pseudonym. I am the Anonymous from 6:54 PM, 8:05 PM, and 8:09 PM on January 07.
The “widespread misunderstanding of the content of quantum theory” I was referring to includes, inter alia, the notion that a quantum system has no properties until they are brought into existence by the observer through an act of measurement. This sort of nonsense not only retards the progress of physics, but gives rise to all manner of pernicious superstition and mystical hocus-pocus, wrapped in a false mantle of scientific objectivity.
In my view, enormous damage has been done, not only to physics, but to all of science – and indeed to the very concept of objective rationality – by those who mistakenly read an ontological content into the famous statement of Niels Bohr, quoted by Phil Warnell above. Let me repeat it here for convenience:
“There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.”
This is an explicit warning not to ascribe the “weirdness” of the quantum formalism to the real physical world, but since the day the words were uttered, there has been an apparently irresistible urge to do the exact opposite.
Bohr was not alone in suffering such misinterpretation. Schrödinger originally introduced us to his cat as a caution against ascribing physical reality to the superposition of states, yet Schrödinger’s cat was made famous by others who deviously used it to support precisely what Schrödinger argued against. And Bell’s theorem is ubiquitously used in support of spooky claims about quantum measurement, effectively drowning out Bell’s own opinion of hidden-variable theories, as made clear by another quote from page 997 of the article quoted by Phil Warnell:
“What is proved by impossibility proofs is lack of imagination.”
Of course, those who indulge in mystical interpretations of quantum mechanics do not believe they are disallowing rational thought; they think they are being deep. But their stance is nonetheless profoundly anti-rational; it leaks out of physics into metaphysics and philosophy, and from there, into the rest of post-modern thought.
It lends credence to such notions as “the quantum law of attraction” (otherwise known by Oprah fans as “the secret”), not to mention the idea that reality is a matter of consensus. The first is a thinly veiled return to sympathetic magic, and the second is a kind of quantum solipsism that results from treating the “intersubjective rationality” of Jürgen Habermas as legitimate epistemology, instead of recognizing it as a degenerative disease of the rational faculty.
87. Ulrich: "Everything there is" is perfectly isolated from everything else, since it's damned hard to interact with nothing. Best,
88. Neil: I already said above I don't believe in MWI. Unfortunately, since you are the one claiming you have a "proof" that MUH can't describe reality it's on you to formulate your proof in well-defined terms, which you fail to do. Your three step procedure makes use of the notion of a "production" which is undefined and your other arguments continue to assume a notion of what is "not real" that makes your argument circular.
Look, read my last comment addressed to you and you'll notice that you can stand on your feet and wiggle your toes, but there is no way to prove what you want to prove without having to assume some particular notion of reality already. Andrew got it exactly right: your problem is that you believe there has to be some actual time-sequence, some "production" (what is a production if not a time-sequence?). I'm telling you you don't need a time-sequence. You don't need, in fact, any sort of sequence or even ordering. All you need to capture your reality in maths is one timeless instant of Neil(now). That's not how you might think about reality, but there's no way to prove that's not your reality. Best,
89. I wonder how one could ever have been led through to the "entanglement processes" without first going through Bell? I mean sure, at first it was about Einstein and spooky action, and now is it not such a subject to think it has through time become entwined with something metaphysical and irrelevant (thought experiments about elephants) because one would like to twist reality according to it? I mean, what were Penrose and Susskind thinking? :)
Poetically, it has cast a segment of the population toward connotations of "blind men."
Makes one think their house is somehow "more appealing," as a totally subjective remark.
So indeed one has to be careful how we cast aspersions upon the "rest of society" while we think we are safe in our "own interactions," thinking we are totally within the white garment of science.
I hear you.:)
90. Plato:
Yes, that’s a perfect example of the sort of drivel that results when you think a probability is a property of a particle.
It makes smart guys say dumb things...
91. Ain Soph,
Your choice of a handle reminded me of a term that just came to me as if I had heard it before but the spelling was different.
Is there any correlation?
92. Hi Ain Soph,
I must say I was intrigued by what you said last, as to where our prejudices and preconceptions can lead us, even though they may appear as sound science. I would for the most part agree with what you said in such regard, except for the role of Bohr and what his intentions were, as driven by his own philosophical and metaphysical center.
The evidence for my contention goes back to the very beginnings of the Copenhagen interpretation’s creation and the sheer force of will Bohr exerted in having it become as ambiguous and sloppy as many find it now. That would be when Heisenberg first arrived at the necessity for uncertainty with his principle, and with his microscope example attempted to lend physical meaning to it all. Bohr of course staunchly opposed such an attempt and kept arguing with Heisenberg, even when the latter had taken to bed in sickness, until he finally relented and altered his view to match Bohr’s.
So my way of reading this, coupled with the content of his rebuttal of EPR, has given me reason to find that while Bohr may not, like Einstein, have been guilty at times of telling nature how it should be, he was guilty of having the audacity of insisting what nature would allow us to ultimately know. I have then long asked which is the greater transgression as regards enabling physics to progress: being convinced nature has certain limits in regard to what’s reasonable, or rather that the only limiting quality it has is in restricting anyone from being able to find the reason in them.
So in light of this I don’t know what your answer would be, yet I consider the second as being the more unscientific and thus harmful of the two biases; as the first can be falsified by experiment, while the latter prevents one from even bothering to make an attempt. Fortunately for science there always have been, and I hope always will be, those like Einstein, Bohm and Bell, who refuse to be so intimidated as to feel restricted from looking.
93. Bee:
Precisely. How interesting that you should recognize the reference...
94. Plato:
My last post should have been addressed to you, not Bee.
95. Phil:
I get the impression that you’ve spent quite a bit more time studying the history of the subject than I, so I will defer to your greater knowledge of it. It seems, then, that I have always given Bohr the benefit of more doubt than there actually is.
The quotation we have both commented on actually doesn’t appear in print anywhere under the by-line of Niels Bohr. It was attributed to him by Aage Petersen in an article that appeared in Bull. Atom. Sci. 19:7 in 1963, a year after Bohr’s death. I had always thought that Petersen rather overstated the case – especially in the third sentence – and that Bohr’s own stance must have been more sane. But perhaps not.
Another, who gleefully conflated mysticism and quantum mechanics, was J. R. Oppenheimer. For example, his 1953 Reith Lectures left his listeners to ponder such ersatz profundities as the following:
“If we ask whether the electron is at rest, we must say no; if we ask whether it is in motion, we must say no. The Buddha has given such answers when interrogated as to the conditions of a man’s self after his death; but they are not familiar answers for the tradition of seventeenth- and eighteenth-century science.”
Disturbingly, this strikes me not so much as cognitive bias as deliberate obfuscation. True things are said in a way that invites the listener to jump to false conclusions.
96. "Ulrich: "Everything there is" is perfectly isolated from everything else, since it's damned hard to interact with nothing. Best, B."
If you give it a little more thought, you may be forced to concede that the "perfectly isolated" assumption lacks any rigorous scientific meaning. Certainly no empirical proof in sight.
By the way, you and your colleagues:
(1) Do not know what the dark matter is [and that's = or > than 90% of your "everything"].
(2) Do not know what physical processes give rise to "dark energy" phenomena.
(3) Do not have an empirical clue about the size of the Universe.
(4) Do not have more than description and arm-waving when it comes to explaining the existence and unique properties of galaxies.
Wake up! Stop swaggering around like arrogant twits, pretending to a comprehensive knowledge that you most certainly do not possess.
Einstein spoke the truth when he said: "All our science when measured against reality [read nature] is primitive and childish, and yet it is the most precious thing we have."
THAT is the right attitude, and it is a two-part attitude, and both parts are mandatory for all scientists.
Real change is on its way,
97. Ulrich: It's not an assumption. The universe is a thermodynamically perfectly isolated system according to all definitions that I can think of. If you claim it is not, please explain in which way it is not isolated.
As for the rest of your comment: yes, these are presently open questions in physics. I have never "pretended" I know the answers, so what's your point?
Besides this, your comments are not only insulting, they are also off-topic. Please re-read our comment rules. Thanks,
98. Ulrich:
Real change is on its way?
What – you’re going to learn some manners?
99. The very name "Ain Soph" suggests a lack of manners.
100. Hi Arun,
I find Ain Soph to be quite a respectful name, as it serves as a reminder that science's central premise denies there ever being allowed such a privileged position as would permit one to deny there is reason to look for explanation; for as Newton reminded us in respect of any such propositions:
101. Hi Ain Soph,
Well, I don’t know which of us is more studied when it comes to the history of the foundations, as it appears you’ve looked at it pretty closely. My only objection being that it seems as of late there appears to be a little rewriting of it, as to give Bohr a pass on what his role in all this was and what camp he represented, having him thought of as misunderstood rather than as its primary advocate. Of course we don’t have any of them with us here today to ask directly, yet still I think things are made pretty clear between what they left of their thoughts and the legacy made evident in the general attitudes of the scientists of the following generation.
My thoughts are that this obfuscation, as you call it, has simply reincarnated itself in things like many universes, many worlds, all-is-math and so many of the other approaches, in which the central premise of each is to have made unapproachable exactly what needs to be approached. I must say your moniker is an excellent symbol of what all these amount to when it comes to natural philosophy. So as such I would agree that anytime things in science are devised which prevent one from being able to ask a question meant to enable one to find the solution to something that begs explanation, that’s the time to no longer have it considered as science, since it has lost its reason to be. That’s to say there is no harm in having biases, as long as the method assures these can be exposed for what they are by allowing them to be proven wrong as they apply to nature.
102. Hi Bee,
I think this whole question of biases comes down to considering one thing, that being that the responsibility of physics is to have recognized and given explanation to nature’s biases, rather than to justify our own. So yes, it does all depend on biases, with reality itself having the only ones that are relevant.
103. Phil,
It's fine. I have not been able to decipher what you and Ain Soph are saying anyway.
104. Hi Ain Soph,
It is not by relation that I can say I am Jewish...but that I understood that different parts of society have their relation to religion, and you piqued my interest by "the spelling" and how it sounded to me. It bothered me as to where I had seen it.
As in science, I do not like to see such divisions, based on a perceived notion that has been limited by our own choosing "to identify with" whatever part of science seems to bother people about other parts of science, as if religion then should come between us.
You determined your position long before you choose the name. The name only added to it as if by some exclamation point.
Not only do I hear you but I see you too.:)
105. Plato:
Yes, the spelling is the irony.
But Copenhagen is not Cefalu. Or is it?
106. Oops, my bad!
Let me just reiterate:
and leave it at that. Almost.
Science, unlike cast-in-stone religions, is self-correcting.
It may be a slow process, but I should trust the process.
Come on you grumbler [not to mention sock puppet], have a little faith!
107. Phil:
Oh, great. Whenever the revisionists go to work on a discipline, expect trouble! If the Copenhagen Orthodox Congregation, the Bohmians, the Consistent Historians, the Einselectionists, the Spontaneous Collapsicans, and the Everettistas can’t communicate now, just wait until revisionism has cut every last bit of common ground out from under them!
108. Ulrich:
Grumbler? Sock Puppet?
What is it with you?
You can’t maintain a civil tone from one end of a 100-word post to the other?
Clean up your act, Ulrich, or I will just ignore you.
109. Hi Arun,
”It's fine. I have not been able to decipher what you and Ain Soph are saying anyway.”
Now I feel that I’ve contributed to the confusion rather than having made things a little clearer, which is probably my fault. Simply put, Bell’s main contention and complaint was that taking too seriously things such as superposition, the collapse of the wave function, and the measurement problem more generally is the result of particular theoretical choices, rather than anything mandated by experiment. So Bell’s fear, if you would have it called that, is the impediment such concepts pose as physics attempts to move forward to develop an even deeper understanding. That’s why he used concepts such as ‘beables’ in place of things like ‘observables’, for instance, in an attempt to avoid such prejudices and preconceptions.
110. Hi Ain Soph,
I actually don’t have much concern that the historical revisionists will be able to increase the confusion any more than it already is. My only concern, when deeper theories are being considered, is that the researchers are clear as to what begs explanation and what really doesn’t. That is, to have them able to distinguish which of the concepts they use are the result only of particular theoretical choice and which are required solely by what experiment makes necessary; to simply have recognized what serves to increase understanding and what only serves as an impediment in such regard.
111. Phil:
It’s true that it would be hard to increase the confusion beyond its current level. But historical revisionism could make things worse by erasing the “trail of crumbs” that marks how we got here. Personally, when I’m confused, I often find the only remedy is to backtrack to a place where I wasn’t confused and start over from there.
Quantum mechanics is, these days, presented to students as a formal axiomatic system. As such, it is internally consistent, and consistent also with a wide variety of experimental results. But it is inadequate as a physical theory. So some of the axioms need to be adjusted, but which ones? And in what way?
The axiomatic system itself gives us no help in that regard, and simply trying random alternatives is an exercise in futility. The more familiar we are with the existing system, the harder it is to think of sensible alternatives, and the very success of the current theory guarantees that any alternative we try will almost certainly be worse. Indeed, the literature of the past century is littered with such attempts, including some truly astounding combinations of formal virtuosity and physical vacuity.
So, to have any hope of progress, I think we must trace back over the history of the formulation of the present theory, and reconsider why this particular set of axioms was chosen, what alternatives were considered, why they were rejected, and by whom. We need to reconsider which choices were made for good reason after sober debate, which ones were tacitly absorbed without due consideration because they were part of “what everybody knew” at the time, and which ones were adopted as a result of deferring to the vigorous urgings of charismatic individuals.
We are, as you say, badly lost. But let that trail of crumbs be erased, and we may well find ourselves hopelessly lost.
And that is why historical revisionism is so dangerous.
112. Phil:
“... he used concepts such as ‘beables’ in place of things like ‘observables’ for instance in an attempt to avoid such prejudices and preconceptions.”
I must confess that I cringe every time I read a paper about beables. Yes, the term avoids prejudices and preconceptions, but it is also completely devoid of valid physical insight. Thus it throws the baby out with the bathwater.
For me, it makes thinking about the underlying physics even harder and actually strengthens the stranglehold of the formal axiomatic system we are trying to escape.
113. Ain Soph: But Copenhagen is not Cefalu. Or is it?
Oh please, the understanding of what amounts to today's methods has been the journey "through the historical past", and the lineage of teachers and students has not changed in its methodology.
Some of the younger class of scientists would like to detach themselves from the old boys and tradition. Spread their wings. Women too, cast to a system that they too want to break free of.
So now, such a glorious image to have painted a crippled old one to extreme who is picking at brick and mortar. How nice.
Historical revisionist?
They needed no help from me.
"The real philosopher is the prisoner who has escaped from the cave into the light of truth, he is the one who possesses real knowledge. This immediate connection with truth or, we may in the Christian sense say, with God is the new reality that has begun to become stronger than the reality of the world as perceived by our senses. The immediate connection with God happens within the human soul, not in the world, and this was the problem that occupied human thought more than anything else in the two thousand years following Plato. In this period the eyes of the philosophers were directed toward the human soul and its relation to God, to the problems of ethics, and to the interpretation of the revelation but not to the outer world. It was only in the time of the Italian Renaissance that again a gradual change of the human mind could be seen, which resulted finally in a revival of the interest in nature."
-Werner Heisenberg (1958)
These things are in minds that I have no control over, so how shall I look to them but as revisionists of the way the world works now. Even 't Hooft himself :)
114. Ain Soph: I'm not really sure what point you're trying to make. If anything then the common present-day interpretation of quantum mechanics is an overcompensation for a suspected possible bias: we're naturally tending towards a realist interpretation, thus students are taught to abandon their intuitions. If this isn't accompanied by sufficient reflection I'm afraid though it just backlashes.
Btw, thanks for Bell's impossibility quote! I should have used that for my fqxi essay! Best,
115. Hi Ain Soph,
As to the revisionists, it’s true that they might cause some reason for concern. However, there is the other side of the coin, where those like Guido Bacciagaluppi & Antony Valentini are telling the story from the opposite perspective of the prevailing paradigm, and so I suspect the crumbs will always remain to be followed.
I am surprised you’re not a ‘beables’ appreciator, for it was Bell’s way of emphasizing that QM had to be stripped bare first of such notions before it had any chance of being reconstructed in such a way that it would serve as a consistent theory that can take one to the experimental results without interjecting provisos that don’t stem from the formalism.
This has me mindful of a pdf I have of a handwritten note Bell handed to a colleague during a conference he attended years ago, which listed the words he thought should be forbidden in any serious conversation regarding the subject: “system, apparatus, microscopic, macroscopic, reversible, irreversible, observable, measurement, for all practical purposes”. So I don’t know exactly how you feel about it, yet to me this appears as a good place to start.
116. Plato:
For the past few hundred years, Western civilization has enjoyed an increasingly secular and rational world view – a view which practiced science as natural philosophy and revered knowledge as an end in itself. The result was an unprecedented proliferation of freedom and prosperity throughout the Western world.
But that period peaked around the turn of the last century, and has been in decline for almost a hundred years. Now we value science primarily for the technology we can derive from it. And the love of knowledge is being pushed aside by a resurgence of mysticism and virulent anti-rationalism.
This is not just young Turks making their mark. This is barbarian hordes at the gates.
And yes, we are now witnessing a return to a preoccupation with gods and goddesses and magic, just like the last time a world-dominating civilization went into decline. The result was a thousand years of ignorance and serfdom.
This time, the result may be less pleasant.
117. Bee:
“If this isn't accompanied by sufficient reflection I'm afraid though it just backlashes.”
Yes. Exactly my point.
Students today are not encouraged to reflect and develop insight. They are encouraged to memorize formal axioms and practice with them until they can produce detailed calculations of already known phenomena. They are thereby trained to use quantum mechanics in the development of new technologies, but they are not educated in a way that would allow them to move beyond the accepted axiomatic system in any principled way.
Bourbakism and the Delphi method ensure that the questions and beliefs of the vast majority remain well within the approved limits.
118. Phil (and Bee - this further amplifies my reply to you):
While I agree with the intent of banishing misconceptions and preconceptions, I disagree with the method of inventing semantically sterile new terminology.
For example, in moving from Euclidean to hyperbolic geometry, one can simply amend the parallel postulate, claim that geometric insight is therefore of no further use, and deduce the theorems of hyperbolic geometry by the sterile, rote application of axioms.
Or one can draw a picture of a hyperbolic surface, and enlist one’s geometric insight to understand how geometry changes when the parallel postulate is amended. One ends up proving the same theorems, but one gets to them much faster, and much more surely. And one understands them much better.
In short, I think teaching students to abandon their intuitions does more harm than good.
Having abandoned them, what choice remains to them but to mimic the cognitive biases of their instructor?
119. This comment has been removed by the author.
120. Hi Ain Soph,
You talk about amending axioms instead of eliminating them from being ones; this is exactly how the type of ambiguity and sloppiness that Bell complained about arose in the first place. What an axiom or postulate represents in math or theory is a self-evident truth, which either is to be considered so or not. What would it mean to amend an axiom? Could that mean, for instance, that the fifth postulate holds every day except Tuesdays? No, I’m sorry, that’s the type of muddle-headed thinking that has had QM become what it is, with all the ad hoc rules and decisions as to how and when they are to apply. The fact is, in deductive (or inductive) reasoning a postulate is or it isn’t, with no exceptions allowed; otherwise it has lost all its ability to be considered as logic.
This then is exactly what a ‘beable’ is, as being something that you consider as a postulate (prerequisite) or not; it either is or it isn’t. What then falls out is either consistent with nature as it presents itself or it isn’t. So that’s why, for instance, Bell liked the pilot wave explanation, since when asked “is it particle or wave?”, such a restriction of premise didn’t satisfy what nature demonstrated as being both particle and wave.
Therefore the concept of ‘beables’ is not to have what is possible ignored, yet quite the opposite. So where for instance the pilot wave picture is referred to as a hidden variables theory, Bell would counter that standard QM is a denied variables theory. This is to find that it makes no sense to have axioms amended; they either are or they’re not, otherwise it just isn’t a method of reason. So what’s asked for is not that intuitions be ignored, but rather that when such intuitions are incorporated into theory there be a way to assess their validity, where nature, and not the theorist, is the arbiter of what is truth.
121. Phil:
Sorry, I should have been more clear. Essentially, the parallel postulate holds that the sum of the internal angles of any triangle is equal to 180 degrees. If we amend that to read “greater than” then we get hyperbolic geometry. (And “less than” gives us elliptic geometry.)
122. Hi Ain Soph,
What you are talking about is not amending a postulate, but rather defining or setting a parameter where there isn’t one. What the fifth postulate is doesn’t allow for what you propose in either case, so it must be eliminated to even have them considered.
That’s like people believing Einstein set the speed of light as a limit, rather than his realizing this speed was a limit as a logical consequence of his actual premises: that there is no preferred frame of reference, such that whichever one is arbitrarily chosen, the laws of nature will present as the same. The speed of light being a limit then falls out of these axioms and is not needed in addition; that is, it’s not an axiom but rather a direct consequence of them. So if you want things to always be hyperbolic or elliptic geometrically, that would require an axiom, and not a parameter, to mandate it be so. Whether it is less or greater than 180 degrees holds no significance where such parameters are just special cases, as is the one being compared with, itself also just a special case where no axiom exists to have it be so.
So for me a true explanation is found when things are no longer simply parameters, but rather consequences of premises (or axioms). Of course one could insist all such things are indeed arbitrarily chosen, which on the surface sounds reasonable, yet it still begs the question how it is these parameters hold at all, presenting a reality that has them as fixed. So my way of thinking, being consistent with Bell’s, is that to be a scientist is to find the world as a construct mandated by logic, and to think otherwise just isn’t science. This I would call the first axiom of science, with that of Descartes being the second, which gives us, and not reality, reason to think we might discover what, how and why it is as it is.
123. Ain Soph: "Now we value science primarily for the technology we can derive from it."
No, as I see it, you are the harbinger of that misfortune.
What can possibly be derived from developing measures that extend our views of the universe? "Only" human satisfaction?
Shall we leave these things unquestioned, then, and be satisfied, given the progression you have seen civilization make up to this point?
You intertwine the responsibility of, and confuse yourself as to, what is taking place in society; could it not possibly be taking place within your own mind? :) Yet you have "become it" and, from my point of view, diagnosed the projection incorrectly :)
You could not possibly be wrong?:)
124. Phil,
He is right in relation to this geometric sense and recognizes this to be part of the assessment of what exists naturally.
Gauss was able to provide such examples with a mountain view as a move to geometrically fourth dimensional thinking.
As to lineage, without Gauss and Riemann, Einstein would not have made sense geometrically. This is what Grossmann did for Einstein by introduction; Wheeler, for a Kip Thorne.
125. In context, Phil:
Emphasis mine.
Businessmen value Science for the technology it can produce. Governments, sometimes. There is of course this little thing called "National Defense" such that even if a country is not on a war-footing, they at least seek the technology that puts them on an even-footing with other governments that may put the war-foot on them. USSR vs USA in the 20th century, Iran vs Israel and the West today, and there are many other examples throughout history.
But we knew that. I'm just reminding. I believe Ain Soph was railing against the Politico-Economic "human" system that places Engineering above Science, and I hope I've explained why. I don't see where Ain Soph was being the harbinger of that reality; rather, he was pointing it out.
Governments also support Theory, and that's key. Questions regarding how many Theorists are actually needed notwithstanding, we do need them. Businesses know this, and cull theorists only when they are on the edge of a breakthrough. They haven't the time to waste on things that will pay off 10-20 years down the road. They want applications, now. Yesterday would be better.
Two examples: Bell Labs up through the mid-1990's, and Intel. Intel used Quantum Physics as was known, specifically Surface Physics. Bell Labs on the other hand had no reason to work on "Pure Research," yet they did. But Bell Labs was part of the communications monopoly AT&T, which had more money than God, AND the US Government poured lots of money into Bell Labs as well to the point you couldn't tell where AT&T ended and the government began, so the Labs were an example of yes, Government funding, at least partially.
Enter our new age, where rich folks like Branson and Lazaridis etc. are picking up the slack. There has been a shift in funding sources, especially with governments hard pressed to meet budgets, and when that happens, Theory always takes a hit.
126. It's late and also I don't think Bee really gets my point (did anyone else?) about randomness in nature v. the deterministic nature of math. But for the record some clarification is needed. First, it's not really a matter of my having or claiming to have a disproof of MWI. But it is accepted logical practice that the one postulating something more than we know or "have", has the burden of proof. Also, I am saying that *if* the world is not MWI then it cannot be represented by deterministic math - which is different from saying, "it is not" MWI and thus cannot be represented by math. (Bee, either you need to sharpen up your logical analysis of semantics, or I wasn't clear enough.)
Furthermore, I don't "believe" that there has to be some actual sequence; I am saying that such specific sequences are what we actually find. But now I see the source of much of the confusion for you and Andrew T: you thought I was conflating the idea that "real flowing time" couldn't be mathematically modeled with the other idea that a mathematical process can produce other than the specific sequence it logically "has to", such as digits of roots.
But that isn't what I meant. It doesn't matter whether time actually flows or not, or if we live in a block universe. The issue is, the sequence produced in say a run of quantum-random processes is thought to be literally random and undetermined. That means it was not logically mandated in advance by some specific choice such as "to take the digits of the cube root of 23." Sure, some controlling authority could pick a different seed each time for every quantum experiment, but "who would do that"? But if there isn't such a game-changer, then every experiment on a given particle would yield the same results each time. That is the point, and it is supported by the best thinking in foundations. Above all, try to get someone's point.
127. (OK, I still may have confused the issue about "time" by saying "in advance." The point is: even in a block universe with no "real time", the various sequences of e.g. hits in a quantum experiment would have to be generated by separate, different math processes. That sequence means the ordered set of numbers, whether inside "real time" or just a list referring to ordering in a block of space-time. So one run would need to take e.g. the sqrt of 5, another the cube root of 70, another 11 + pi, etc. Something would have to pick out various generators to get the varying results. If that isn't finally clear to anyone still dodging and weaving on this, you can't be helped.)
128. Not a lot understand the implication of this over the doorway in this "new institution called Perimeter Institute", but yes, if one were hell-bent toward research money for militarization, then indeed such technologies could or might seem as to the status of our civilization.
Part of this institution called PI, I believe, is what Ain Soph is clarifying to Phil; it is an important correlation alongside all the physics, and does not constitute the idea of militarization, but research about the current state of the industry.
That is a "cold war residue" which, without the issue being prevalent, has now been transferred to men in caves. Fear driven.
The larger part of society does not think this way? So in essence you can now see Ain Soph's bias. :)
129. Neil: I apologize in case I mistakenly mangled your statement about MWI, that was not my intention. About the burden of proof: you're the one criticizing somebody else's work, it's on you to clarify your criticism. I think you have meanwhile noticed what the problem is with your most recent statement. I think I said everything I had to say and have nothing to add. You are still using undefined expressions like "process" and "picking" and "generation of sequences." Let me just repeat that there is no need to "generate" a sequence, "pick" specific numbers or anything of that sort. It seems however this exchange is not moving forward. Best,
130. This comment has been removed by the author.
131. For those who want to venture further.
132. (Note: all of the following is relevant to the subject of cognitive bias in physics, being concerned with the validity of our models and the use of math per cognitive model of the world.) Bee: thanks for admitting some confusion, and it may be a dead end but I feel a need to defend some of my framings of the issue. I don't know why you have so much trouble with my terms. We have actual experiments which produce sequences which appear "random", and which are not known to be determined by the initial state. That is already a given. I was just saying as analysis that a set of sequences that are not all the same as each other, cannot be generated by a uniform mathematical process (like, the exact same "program" inside each muon or polarizing filter.) If there was the same math operation or algorithm there each time, it would have to produce the same result each "time" or instance. Find someone credible who doesn't agree in the terms as framed, and I'll take it seriously.
Steven C: Your post is gone (well, it's in my email box - and I handle use of author-deleted comments with great care), but I want you to see this anyway: As for this particular squabble, it isn't any more my stubbornness than that of anyone else who disagrees and keeps on posting. I had to, since I was often misunderstood and am expressing (believe it or like it or not) the consensus position in foundations of mathematics and physics. Read up on foundations regarding determinism and logical necessity v. true randomness.
Your agreeing with Orzel in that infamous thread at Uncertain Principles doesn't mean his defense of the decoherence interpretation was valid. Most of my general complaints are the same as those made by Roger Penrose (as in Shadows of the Mind). He made, like I did, the point that DI uses a circular argument: if you put the sort of statistics caused by collapse into the density matrix to begin with, then scrambling phases produces the same "statistics" as one would get for a classical mixture. Uh yeah, but only because "statistics" are fed into the DM in the first place. Otherwise, the DM would just be a description of the spread of amplitudes per se. You have to imagine a collapse process to turn those amplitudes - certain or varied as the case may be - into statistical isolation. The DI is a circular argument, which is a logical fallacy not excused or validated by "knowing more physics." Would you be dismissive of Penrose?
As for MWI: if possible measurements produce "splits" but there is nothing special about the process of measurement or measuring devices per se, then wouldn't the first BS in a MZ interferometer instigate a "split" into two worlds? That is, one world in which the photon went the lower path, another world where it went the other path? But if that happened, then we wouldn't see the required interference pattern in any world (or as ensemble) because future evolution would not recombine the separated paths at BS2. Reflect on that awhile, heh.
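(For readers who want the arithmetic behind that point, a minimal two-mode sketch; the Hadamard-type beamsplitter matrix and single input port are conventional textbook choices, nothing specific to Orzel's setup:)

    import numpy as np

    # Two-mode description of a Mach-Zehnder interferometer:
    # 50/50 beamsplitter, phase shift phi in one arm, second beamsplitter.
    BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    def output_probs(phi):
        phase = np.diag([1.0, np.exp(1j * phi)])
        amp_out = BS @ phase @ BS @ np.array([1.0, 0.0])  # photon enters one port
        return np.abs(amp_out) ** 2

    for phi in (0.0, np.pi / 2, np.pi):
        print(phi, output_probs(phi))
    # phi = 0  -> all probability at one detector (full interference);
    # phi = pi -> all at the other. If the two paths never recombined at the
    # second beamsplitter, each detector would click 50/50 regardless of phi.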
133. [part two of long comment]
At Uncertain Principles I critiqued Orzel's specific example - his choice - which used "split photons" in a MZI subject to random environmental phase changes. He made the outrageous argument that, if the phase varies from instance to instance, the fact that the collective (ensemble) interference pattern is spoiled (like it would be *if* photons went out as if particles, from one side of the BS2 or the other) somehow explains why we don't continue to see them as superpositions. But that is absurd. If you believe in the model, the fact that the phase varies in subsequent or prior instances can't have any effect on what happens during a given run. (One critique of many - suppose the variation in phase gets worse over time - then, there is no logical cut-off point to include a set of instances to construct a DM from an average spread, see?)
At the end he said of the superposed states "they just don't interfere" - which is meaningless, since in a single instance the amplitudes should just add regardless of what the phase is. Sure, we can't "show" interference in a pretty, consistent way if the phase changes, but Orzel's argument that the two cases are literally (?) equivalent (despite the *model* being the problem anyway, not FAPP concerns) is a sort of post-modern philosophical mumbo jumbo. My reply was "philosophy" too, but at least it was valid philosophical reasoning and not circular and not making sloppy, semantically cute use of the ensemble concept. (How can I appreciate his or similar arguments if it isn't even clear what is being stated or refuted?)
Funny that you would complain about philosophy v. experiment, when the DM is essentially an "interpretation" of QM, not a way to find different results. Saying decoherence happens in X tiny moment and look, no superposition! - doesn't prove that the interpretation is correct. We already knew the state is "collapsed" whenever we look. Finally, I did actually just propose a literal experiment to retrieve amplitude data that should be lost according to the common understanding of creating effective (only that!) mixtures due to phase changes. It's the same sort of setup Orzel used, only with an unequal amplitude split at BS1. You should be interested in that (go look, or look again but carefully), it can actually be done. Its importance goes beyond the DI as interpretation, since such information is considered lost, period, in traditional theory, not even counting interpretative issues.
I do like your final advice:
TRY, brutha, to expand your horizons. Is all I'm saying.
Yes, indeed! I do try - now, will you?
BTW Bee and I get along fine, despite tenaciously arguing over mere issues, and are good Facebook Friends. We send each other hearts and aquarium stuff etc. - I hope that's OK with Stefan! (Stefan, I will send a Friend request to you too, so you feel better about it.) I also have her back when she's picked on by LuMo or peppered by Zephir.
134. (correction, and then I leave it alone for awhile)-
I meant to say, in paragraph #1 of second comment:
[Not, "during a given run." - heh, ironic but I can see it's a mistake to conflate the two.]
135. Phil:
I don’t know what to make of your last two posts. Surely you’re not unfamiliar with non-Euclidean geometry?
In flat Euclidean space, the geodesics are straight lines and the sum of the internal angles of any triangle is equal to 180 degrees.
In a negatively curved space, the geodesics are hyperbolae and the sum of the internal angles of any triangle is less than 180 degrees.
In a positively curved space, the geodesics are ellipses and the sum of the internal angles of any triangle is greater than 180 degrees.
These are facts.
And in each case, they are also logical necessities that follow from the geometric structure of the space. (note: I sloppily reversed greater and less in my last post)
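(In formulas, this is the local Gauss-Bonnet statement: for a geodesic triangle of area A on a surface of constant curvature K, the interior angles satisfy

    \alpha + \beta + \gamma = \pi + K A,

so K > 0 yields an angle excess and K < 0 an angle deficit, exactly as in the three cases above.)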
We believe that space is negatively curved on a cosmological scale, and we know that we live on the surface of a spheroid, which is positively curved. So this parameter which you claim I make up and set arbitrarily is actually very real and determined by measurable properties of real things.
Now, to return to my point:
It would be foolish to study non-Euclidean geometry by abandoning our geometric insight, just because that insight was developed in a Euclidean context.
Rather, we should use our geometric insight to see precisely what must be generalized in moving from Euclidean to non-Euclidean geometry, and to understand how and why the generalizations are possible and when they are necessary.
The same remark applies to the study of special relativity, where the finite speed of light leads to an indefinite metric which induces a hyperbolic geometry, and Lorentz boosts are nothing other than 4-dimensional hyperbolic rotations.
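To make "hyperbolic rotation" concrete, a boost along x with rapidity \eta takes the textbook form (a standard formula, stated here for reference):

\begin{pmatrix} ct' \\ x' \end{pmatrix} =
\begin{pmatrix} \cosh\eta & -\sinh\eta \\ -\sinh\eta & \cosh\eta \end{pmatrix}
\begin{pmatrix} ct \\ x \end{pmatrix},
\qquad \tanh\eta = v/c

which preserves ct^2 - x^2 exactly the way an ordinary rotation preserves x^2 + y^2.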
And the same remark applies to quantum mechanics, where the non-vanishing of Planck’s constant induces a hyperbolic projective mapping from the Bloch sphere to the complex Hilbert space and causes probabilities to appear noncommutative and complex.
It is precisely by retaining our geometric insight that the apparent paradoxes of these subjects are most easily resolved and understood to be nothing more than logical necessities that follow from the underlying geometric structure.
136. Plato:
Certainly, there are many things I could be wrong about. But, be that as it may...
I see that the philosophical trends of the last century are an outright attack on rationality, replacing reason with rhetoric whose primary aim is to deconstruct Western culture.
I see that research is funded by agencies uninterested in the pursuit of knowledge except as a source of economic advantage and weapons production.
I see that universities have been transformed from academies of learning into vocational schools and centers of indoctrination.
These are simple observations, easily seen by anyone who looks with open eyes.
Thus they cannot possibly be projections of anything taking place within my own mind.
137. ain soph,
"I see" then, you have no biases and I am not blind.:)
Good clarity on the subject of geometrical propensities. Good stuff.
138. Neil B:
“Find someone credible who doesn't agree in the terms as framed, and I'll take it seriously.”
“Would you be dismissive of Penrose?”
For someone who likes to decry logical fallacies, you’re awfully fond of the argument from authority...
By the way, quantum states don’t collapse. State functions collapse. Just as my observation of the outcome of a coin toss collapses my statistical description of the coin, but does not change the coin at all.
Given sufficient sensitivity to initial conditions, arbitrarily small amounts of background noise are all it takes to make nominally identical experiments come out different every time, in ways that are completely unpredictable, yet conform to certain statistical regularities.
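A toy demonstration of that last point (my own sketch, using the logistic map as a stand-in for any strongly chaotic dynamics):

import numpy as np

def orbit(x0, n=10_000):
    # Iterate the fully deterministic logistic map x -> 4x(1-x).
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        xs[i] = x
    return xs

a = orbit(0.3)
b = orbit(0.3 + 1e-12)   # "nominally identical" run, perturbed by tiny background noise

first_diverged = int(np.argmax(np.abs(a - b) > 0.1))
print("runs visibly differ after step:", first_diverged)          # a few dozen steps
print("yet their long-run means agree:", a.mean().round(3), b.mean().round(3))

Two nominally identical experiments come out completely different, in a way that is practically unpredictable, while both conform to the same statistical regularity (here, the arcsine invariant density with mean 1/2).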
Now, if you can clearly define the difference between “completely unpredictable” and “truly random” in any operationally meaningful, non-circular way, then you may have a point. Otherwise I have no more difficulty dismissing your argument than some of the ill-considered arguments made by Penrose.
139. Plato:
Yup. That’s our story, and we’re stickin’ to it...
140. Ain Soph: No, I'm not all that fond of the argument from authority, if you mean that if so-and-so believes it, it must be true. Your understanding of that fallacy seems a little tinny and simpleminded, because the point is that such a person's belief doesn't mean the opinion has to be true. However, neither should a major figure's opinion be taken lightly, which is why I actually said to SC: "Would you be dismissive of Penrose?" instead of, "Penrose said DI was crap, so it must be." But you do have a point, so remember: if the majority of physicists now like the DI and/or MWI, that isn't really evidence of it being valid.
This statement by you is incredibly misinformed:
"By the way, quantum states don’t collapse. State functions collapse. Just as my observation of the outcome of a coin toss collapses my statistical description of the coin, but does not change the coin at all."
Uh, you didn't realize that the wave function can't be just a description of classical-style ignorance, because parts can interfere with each other? That if I shot BBs at double slits, the pattern would be just two patches? Referring to abstractions like "statistical description" doesn't tell me what you think is "really there" in flight. Well, do you believe in pilot wave theory, or what? What is going from emitter, through both (?) slits and then to a screen, etc.? Pardon my further indulgence in the widely misunderstood "fallacy of argument from authority", but all those many quantum physicists, great and common, were just wasting their wonder, worrying why we couldn't realistically model this behavior? That only a recent application of tricky doubletalk and unverifiable, bong-style notions like "splitting into infinite other worlds" somehow makes it all OK?
141. [two of three, and then I rest awhile]
Before I go into this, please don't confuse the discussion about whether math can model true randomness (it can't, whether Bee gets it or not) with the specific discussion of decoherence and randomness there. They are related but not exactly the same. Now: you are right that the background noise can make certain experiments turn out differently each time (roughly, since they might give the same result!) but with a certain statistics. But what does that show? Does it show that there weren't really, e.g., two different wave states involved, or that we don't have to worry what happened to the one we don't find in a given instance? No. First, the question it raises is: why are the results statistical in the first place instead of a continued superposition of amplitudes, and why are they those statistics and not some other? If you apply a collapse mechanism R to a well-ordered ensemble of cases of a WF, then you can get the nice statistics that show it must involve interference. If R acts on a disordered WF ensemble, then statistics can be generated that are like mixtures.
Does that prove jack squat about how we can avoid introducing R to get those results? No. If something hadn't applied R to the WFs, there wouldn't be *a statistics* of any kind, orderly or disorderly. (On paper, that something could be a clueless decoherence advocate who applies the squared-amplitude rule to get the statistics, and who doesn't even realize he has just circularly and fallaciously introduced through the back door the very process he thinks he is trying to "explain.") There would be just shifting amplitudes. It is the process R that produces mixture-like statistics from disordered sets of WFs, not the MLSs that explain/produce/whatever "the appearance" of R through a cutesy, backwards, semantic sleight of hand. Your point about whether such kinds of sequences could be distinguished (as if two processes that were different in principle could not produce identical results anyway - a FAPP conceit that does not treat the model problems) is moot; it isn't even the key issue anyway. The key issue is: why any statistics or sequences at all, from superpositions of deterministically evolving wave functions?
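Here's a quick toy run of exactly this point (my own toy model: a balanced Mach-Zehnder, with the collapse rule R crudely modeled as one Bernoulli draw per run). With a well-ordered ensemble (fixed phase) R yields interference statistics; with a disordered ensemble (random phase) it yields mixture-like 50/50 counts - but in both cases R had to be applied to get any statistics at all:

import numpy as np

rng = np.random.default_rng(0)

def d1_prob(phi):
    # Standard balanced Mach-Zehnder: probability of a D1 click vs. inter-arm phase.
    return np.cos(phi / 2.0) ** 2

def apply_R(phases):
    # The collapse rule R: one Bernoulli draw per wave function in the ensemble.
    clicks = rng.random(phases.size) < d1_prob(phases)
    return clicks.mean()

ordered = apply_R(np.zeros(100_000))                       # well-ordered ensemble: phi = 0
disordered = apply_R(rng.uniform(0, 2 * np.pi, 100_000))   # dephased ensemble

print(f"fixed phase:  D1 fraction = {ordered:.3f}   (interference: ~1.0)")
print(f"random phase: D1 fraction = {disordered:.3f}   (mixture-like: ~0.5)")

Take apply_R out and there is no click record and no statistics of any kind - just the two lists of amplitudes.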
So we don't know what R is or how it can whisk away the unobserved part of a superposition, etc. This is what a great mind like Roger Penrose "gets", and a philosophically careless, working-physics Villager like Orzel does not. I'm not sure what you get about this line of reasoning, since you didn't actually deal with my specific complaints or examples. Note my critique of MWI, namely that the first BS in a MZ setup should split the worlds before the wave trains can even be brought back together again.
142. Here's another point: the logical and physical status of the density matrix, creating mixtures, and the effects of decoherence shouldn't depend on whether someone knows the secret of how it is composed. But if I produce a "mixture" of |x> and |y> sequential photons by switching a polarizer around, I know what the sequence is. Whether someone else can later confidently find that particular polarization sequence depends on whether I tell them - it isn't a consistent physical trait. Someone not in on the plan would have to consider the same sequence to be "random", just as if it were a sequence of diagonal polarization, circular polarization, etc., as described by a density matrix. But it *can't be the same*, since the informed confederate can retrieve the information that the rubes can't.
So the DM can't really describe nature, it isn't a trait as though e.g. a given photon might really "be" a DM or mixture instead of a pure state or superposition. Hence, in the MZ with decoherence that supposedly shows how the state approaches a true mixture, everything changes if someone knows what the phase changes are. That person can correct for the known phase changes, and recover perfect interference. How can the shifting patterns be real mixtures if you can do that? Oh, BTW - a "random" pattern that is known in advance (like I tell you, it's sqrt 23) "looks just like" a really random pattern that you don't or can't know, but it can make all the difference in the world, see?
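To put the "informed confederate" point in numbers, here is a toy run (my own construction, not a report of a real experiment): the very same click record looks like a flat, thermalized mixture to someone who sorts it by the scanned reference phase alone, while someone who knows the per-run phase kicks can sort by the corrected phase and recover essentially full interference visibility:

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
kicks = rng.uniform(0, 2 * np.pi, n)   # per-run phase kicks ("decoherence")
scan = rng.uniform(0, 2 * np.pi, n)    # deliberately scanned reference phase

clicks = rng.random(n) < np.cos((scan + kicks) / 2) ** 2

def visibility(phase):
    # Histogram D1 clicks against the given phase record, then compute fringe visibility.
    bins = np.linspace(0, 2 * np.pi, 13)
    idx = np.digitize(phase, bins) - 1
    fringe = np.array([clicks[idx == k].mean() for k in range(12)])
    return (fringe.max() - fringe.min()) / (fringe.max() + fringe.min())

print(f"rube (scan phase only):        visibility ~ {visibility(scan):.2f}")                        # ~0
print(f"confederate (corrected phase): visibility ~ {visibility((scan + kicks) % (2 * np.pi)):.2f}") # ~1

Same data, different knowledge, completely different "mixture."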
Finally, I said I worked up a proposal to experimentally recover some information that we'd expect to be lost by decoherence, and it seems you or the other deconauts never checked it out. It may be a rough draft, but it's there.
143. Arun:
That’s an interesting paper, although it suffers greatly under the influence of “critical theory” and goes out of its way to rewrite history in terms of economic class struggle. After reading its unflattering description of German universities of the nineteenth century, one can only wonder how such reprehensible places could have given us Planck, Heisenberg, Schrödinger, Minkowski, Stückelberg, Graßmann, Helmholtz, Kirchhoff, Boltzmann, Riemann, Gauss, Einstein...
Clearly, those places were doing something right. Something we’re not doing, otherwise we would be getting comparable results. But the paper refuses to acknowledge that, and studiously avoids giving the reader any reason to search for what that something might be. Some of the paper’s criticisms are not without substance, but that’s all the paper does: it criticises. And thus it makes an excellent example of the corrosive influence of historical revisionism, and how critical theory is used to undermine Western culture.
144. Neil: You continue to make the same mistake: you still start by postulating something that is "actual," that we "know," that is "given" - when I'm telling you we actually don't know it without already making further assumptions. I really don't know what else to say. Look, take a random variable X. It exists qua definition somewhere in the MUH. It has a space of values, call them {x_i}. It doesn't matter if they're discrete or continuous. If you want a sequence, each value corresponds to a path, call it a history. All of these paths exist qua definition somewhere in the MUH because they belong to the "mathematical structure". Your "existence" is one of the x_i(t), and has a particular history. But you don't need to "generate" one particular sequence, you just have to face that what you think is "real" is but a tiny part of what MUH assumes is "real."
Besides, this is very off-topic, could we please come back to the topic of this post? Best,
145. Hi Ain Soph,
Of course I’m familiar with non-Euclidean geometry, as it simply refers to geometries that exclude the fifth postulate. I’m also quite aware that GR is totally dependent upon it. The point I was attempting to make is what the difference is between the axioms of a theory and any free parameters it contains. It could be said, for instance, that what forces non-Euclidean geometry upon GR is its postulate of covariance, which has the architecture of spacetime mandated by the matter/energy contained. However, particularly what that (non-Euclidean) geometry is in terms of the whole universe is not determined by this postulate, but rather by the free parameter known as the cosmological constant; that is, whether it be closed, flat or open. So my contention is that to fix this variable one needs to replace the parameter with an axiom that will as a consequence mandate what this should be, whether that be within the confines of GR or a theory which is to supersede it. Anyway, somehow or other I don’t believe either you or Plato understand the point I’ve attempted to make, and thus rather than just repeat what I said I’ll just leave it there.
146. Bee: could we please come back to the topic of this post?
I second the motion.
I'm sure we're jumping on wrong clues on all sorts of things*, and thanks for the Brugger psychology-testing stuff. Awesome. I do know something about dopamine, since a close family member was turned into a paranoid schizophrenic thanks to a single dose of LSD her so-called "friend" put in her mashed potatoes at lunchtime one day. The results were horrific, the girl went "nuts" to use the vernacular. Too much dopamine = Very Bad. Well, I've long felt everyone suffers to some degree some amount of mental illness. The Brugger test confirms that in my mind.
*So our brains aren't perfect, yet I believe the ideal is community, in Physics that means peer review, to sort out the weaknesses of one individual by contrasting their ideas with multiples of those better informed, not all of whom will agree of course, and not all of whom will be right. So consensus is important, before testing proves or disproves, or is even devised.
Regarding assumptions (whether true or false), I think that is the job of a (Real not Pop) Philosopher, going all the way back to good ol' Aristotle and his "Logic" stuff. George Musser sums it up better than I, as so:
I leave you with pure cheek:
Andrew Thomas: Ooh, isn't that weird?! Your initials are carved into the CMB!
I didn't see that in the oval Bee featured. I DID see "S'H", which I interpret as God confirming my SHT, or SH Theory, aka "Shit Happens" Theory. What a merry prankster that God dude is, what a joker man, putting it out there right on the CMB for all to see! Well, he DID invent the Platypus, so that's your first clue. ;-)
147. Clearly, those places were doing something right....
Well, experiments did not cost an arm and a leg and did not ever require hundreds of scientists or a satellite launch in those days. As one biographer pointed out, even up to Einstein's middle age it was possible for a person to read all the relevant literature; the exponential growth since has made it impossible.
Lastly, in the areas where the constraints mentioned above don't hold, we're doing fine - e.g., genetics and molecular biology, computing, etc. It is just that you - we - do not recognize the pioneers in those fields to have such genius; that is a definite cognitive bias on our part.
148. From Columbus to Shackleton - the West had a great tradition of explorers, but now nobody is discovering new places on the Earth - must be an attack by the forces of unreason on the foundations of Western civilization. I mean, what else could it be?
149. Hi Phil,
You mustn't become discouraged as to whether we understand your point or not; it's all the better taken in stride.
However, by throwing out Euclid's fifth postulate we get theories that have meaning in wider contexts, hyperbolic geometry for example. We must simply be prepared to use labels like “line” and “parallel” with greater flexibility. The development of hyperbolic geometry taught mathematicians that postulates should be regarded as purely formal statements, and not as facts based on experience. See: Axiom.
There is to me a succession (who was Unruh's teacher?) and advancement of thought about the subjects according to the environment one is predisposed to. Your angle, your bias, is the time you spent with it, greatly, before appearing on the scene here. Your comments are then taken within "this context" as I see it.
Part of our communication problem has been what Ain Sof is showing. This has been my bias. Ain Sof doesn't have any.:)
Why I hold to the understanding of what Howard Burton was looking for in the development of the PI institution was a "personal preference of his own" in relation to the entrance too, is what constitutes all the science there plus this quest of his.
So as best I can understand "axiom", I wanted to move geometrical propensity toward what is "self evident."
Feynman's path integral models.
Feynman based his ideas on Dirac's axiom "as matrices." I am definitely open to corrections by our better-educated peers.
Here was born the idea of time in relation to the (i) when it was inserted in the matrices? How was anti-matter ascertained?
Feynman's toy models then serve to illustrate?
Let the wrath be sent down here to the layman's understandings.
150. Arun:
You’re not suggesting we’ve mapped out physics with anywhere near the completeness with which we’ve mapped out the Earth, are you?
151. Plato:
“This has been my bias. Ain Sof doesn't have any.”
Now, now...
What I said was that certain trends are so obvious that I’m certain I’m seeing something that is really there, and not projecting my own stuff onto the world.
Of course, being certain of it is no guarantee that it's true...
152. Phil:
Once again, I agree wholeheartedly with what Bell is trying to accomplish by adopting the word, “beable,” but I lose more by abandoning the correct parts of my understanding of words, like “observation” and “property,” than I gain from the tabula rasa that comes with the word, “beable.”
Bell himself recognised that the introduction of the word was a two-edged sword. In his 1975 paper, “The Theory of Local Beables,” he writes
The name is deliberately modeled on “the algebra of local observables.” The terminology, be-able as against observ-able is not designed to frighten with metaphysic those dedicated to realphysic. It is chosen rather to help in making explicit some notions already implicit in, and basic to, ordinary quantum theory. For, in the words of Bohr, “it is decisive to recognize that, however far the phenomena transcend the scope of classical physical explanation, the account of all evidence must be expressed in classical terms.” It is the ambition of the theory of local beables to bring these “classical terms” into the mathematics, and not relegate them entirely to the surrounding talk. [emphasis in the original]
Two or three paragraphs later, he adds
One of the apparent non-localities of quantum mechanics is the instantaneous, over all space, “collapse of the wave function” on “measurement.” But this does not bother us if we do not grant beable status to the wave function. We can regard it simply as a convenient but inessential mathematical device for formulating correlations between experimental procedures and experimental results, i.e., between one set of beables and another.
Now, for someone who is thoroughly steeped in the orthodox view that probabilities are objectively real properties of physical systems, I suppose it can be useful to adopt the word, “beable,” to remind themselves that a probability isn’t one. But the real danger of introducing this term is that it tempts one to treat the concept as being relevant only in the quantum context. Thus it opens the door to a new misconception while throwing the old one out the window.
So I think the preferable way to combat this kind of cognitive bias is to realize that its root lies in the widespread misapprehension of the concept of probability. And this is where I draw a parallel to my remarks about geometric insight, because we must use our insight to see precisely what must be generalized in moving from classical to quantum mechanics, and to understand how and why the generalizations are possible and when they are necessary.
The key realisation is that probable inference is the generalization of deductive inference from the two-element field {0,1} to the real interval [0,1]. That alone should be enough to counteract any tendency to ascribe objective reality to probabilities (i.e., treat them as beables), even in a classical context. Then we must generalize again to vector probabilities in statistical mechanics, and finally to spinor probabilities in quantum mechanics.
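Spelled out, the first of those steps is just Cox's two rules (the standard statement, in my wording):

p(A \wedge B \mid C) = p(A \mid C)\, p(B \mid A \wedge C), \qquad p(A \mid C) + p(\lnot A \mid C) = 1

Restricted to the values {0,1}, these are just the truth tables of Boolean conjunction and negation; admitting the whole interval [0,1] gives probable inference, with nothing "objective" added anywhere.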
As I remarked above: my observation of the outcome of a coin toss collapses my statistical description of the coin, but does not change the coin at all. Once we realize that this same idea still applies when probabilities are generalized from scalars to spinors, and manifests in the latter case as the collapse of the wavefunction, the “weirdness” of quantum mechanics evaporates, and along with it, the need for terms like “beable.”
153. Ain Sof,
Ah!....there is hope for you then.:)
Alma mater: University of Cambridge
Doctoral advisor: Paul Dirac
Doctoral students: John D. Barrow, George Ellis, Gary Gibbons, Stephen Hawking, Martin Rees, David Deutsch, Brandon Carter
154. Hi Ain Soph,
So you are basically saying that as long as the statistical description is relegated to being an instrument for calculating outcome, rather than what embodies the mechanics (machinery) of outcome, then we don’t need anything else to keep us from making false assumptions. This to me sounds like what someone like Feynman would say, who contended that the path integral completely explained least action as what mandates what we find as outcome. I’m sorry, yet for me that just doesn’t cut it, as it assigns the probabilities as being the machinery itself. The whole point of Bell’s ‘beable’ concept is to force us to look at exactly what hasn’t been explained physically, rather than having us able to ignore its existence.
That’s to say, that yes, the reality of the coin is not affected by it being flipped, yet one still has to ask what constitutes the flipper, even before it is considered, as landed by observation, to be an outcome. What you are asking by analogy to be done is to accept that a lottery drum’s outcomes are explained without describing what forces the drum to spin. The fact is, probability is reliant on action, and all action requires an actuator. So if your model has particles as being the coins, you still have to give a physical reality not just to the drum, yet also to what has it to be spun. If your model has the physicality of reality as strictly being waves, then you are in a worse situation, for although you have accounted for what represents the actuator of the spin, you are left with nothing to be spun so as to be able to observe an outcome.
This is exactly the kind of thing that Bell was attempting to have laid bare with his inequality, as it indicated that the formalism (math) of QM mandated outcomes that required a correlated action, demonstrated in outcomes separated by space and time exceeding that of ‘c’, and yet had no mechanism within its physical description that would account for such outcomes. So yes, I would agree that the mathematics allows us to calculate outcome, yet it isn’t then by itself able to embody the elements that have them to be, and thus that’s why ‘beables’ should be trusted when evaluating the logical reality of models.
Further, one could say that’s why we have explanations like many worlds, as attempting to give probability a physical space for Feynman’s outcomes, or Cramer’s model to find the time instead as the solution. Then again we have “all as being math”, contending that there is no physical embodiment of anything, yet only the math - which as far as I can see is what your contention would force us to consider as true. I don’t know how you see all this, yet for me there seems to be room for other, more directly physically attached explanations for what we call reality, which as you say would not force us to throw the baby, which in this case is reality, out with the bath water, with them only representing our false assumptions and prejudices. So yes, I agree that one and one must lead to there being two, yet they both must still be allowed to exist to have such a result found as being significant to begin with.
155. While the reality of the coin is seemingly not affected by the collapse of the statistical distribution describing its position, the same cannot be said about the electron. There are no hidden variables maintaining the reality of the electron while its wave function evolves or collapses.
156. It's a shame that Bee and I continue to disagree about the issue of determinism in math v. the apparent or actual "true randomness" of the universe. I don't even agree that it's off-topic, Bee, since it is very relevant to the core issue of whether we do and/or should project our own cognitive biases on the universe. One of those modern biases apparently is the idea of mechanism, that outcomes should be determined by initial conditions. Well, that is how "math" works, but maybe not the universe. Perhaps Bee is thinking I'm trying to find purely internal contradictions in the MUH, but I'm not. It can be made OK by itself, in that every possibility does "exist" Platonically, and there is no other distinction to make (like, some are "real stuff" and others aren't.) That's the argument the modal realists make. In such a superspace, it is indeed true that the entire space of values of a random variable exists. It's like "all possible chess games" as an ideal. But it isn't like a device that can produce one sequence one time it is "run", another sequence in another instance, etc. It is a "field." And no, it doesn't matter what is continuous or discrete; that's beside the point of deterministically having to produce consistent outputs when actually *used.*
But my point is, we don't know that MUH is rightly framed. What we have is one world we actually see, and unless I have "some of what MWI enthusiasts are smoking" I do not "see" or know of any other worlds. In our known world, measurably "identical particles" do not have identical behavior. That is absurd in logical, deterministic terms. We do have specific and varying outcomes of experiments, that is something we know and does not come from assumptions.
I am sure you misunderstood, since you are aware of the implications of our being able to prepare a bunch of "identical" neutrons. One might decay after 5 minutes, another after 23 minutes. If there was an identical clockwork equivalent, the same "equation" or whatever inside each neutron, then each neutron would last the same duration. I think almost everyone agrees on that much, they just can't agree on "why" they have different lifetimes.
In an MUH, we'd still have to account for different histories of different particles, given the deterministic nature of math. There are ways to do that. The world lines could be like sticks cut to different lengths. In such a case there is no real causality, just various 4-D structures in a block universe "with no real flowing time." Or, each particle could have its own separate equation or math process (like sqrt 3 for one neutron, cube of 1776 for another.)
But the particles could not all be identical mathematical entities, and glibly saying "random variable" *inside each one* would not work. If each neutron started with the same inner algorithm or any actual math structure or process, it would last as long as any other. That is accepted in foundations of math, prove me wrong if you dare.
157. It is possible for different math-only "worlds" to have different algorithms. But if the same one applied to all particles in that world, then every particle would have to act the same since math is deterministic. Hence, we'd have the 5-minute-neutron world, the 23-minute-neutron world, etc.
Our universe is clearly not like that, as empirically given. Hence, each of those apparently identical particles must have some peculiar nature in it that is not describable by mathematical differences. And Ain Soph is IMHO wrong to say we can't ascribe probability to a single particle. Would you deny, that if I throw down one die it has "1/6 chance of showing a three"? Would you, even if the landing destroyed the die after that? What other choice do we have?
Yes, we find out via an ensemble. But then why does the ensemble produce those statistics unless each element has some "property" of maybe doing or not, per some concept of chance? I think we have no choice.
And as for thinking the wave function is just a way of talking about chances, isn't a beable, whatever: then what do you think is the character of particles and photons in flight?
If you want to deny realness and say our phenomenal world is like The Matrix, fine, but at least you can't have it both ways. And however overlong or crabby some of the comments might be, it would be instructive to consider some of my critique of DI/DM/MWI.
Sorry there is much confusion over determinism and causality here, but there just is, period! Again, this is relevant. But no more about it per se unless someone prompts with yet another critique per se! (;-) Yet note, the general point is part of the continuing sub-thread here over quantum reality, since it has to be.
158. Last note for now: Thanks Phil for some very cogent comments, supporting my outlook in general but in your more dignified style ;-) As for the tossed coin: REM that in a classical world, for one coin to land on its head and the other, tails, was a pre-determined outcome of the prior state actually being a little different in each case! The coin destined to come up heads was already tipping that way, and at an earlier time its tipping that way was from a slightly different flip of my wrist, etc. It's not about whether the observation changes the coin, it's about the whole process being rigged in advance.
One could think of the whole process as like a structure in space-time, with one outcome being one entire world-bundle, and the other outcome being another world-bundle. They are genuinely different (however slightly) all the way through!
But in QM, we imagine two "identical states" from which outcomes are, incredibly, different. (It is easy to forget that really is logically incredible, as Feynman noted, since we'd gotten used to it being the apparent case.) As I painstakingly explained, that is not derivable from coin-toss style reasoning. If you believe that the other outcomes really exist somewhere, it's your job to bring photos, samples, whatever, or else just be a mystic.
159. Jules Henri Poincare (1854-1912)
Mathematics and Science: Last Essays
Let rolling pebbles be left subject to chance on the side of a mountain, and they will all end by falling into the valley. If we find one of them at the foot, it will be a commonplace effect which will teach us nothing about the previous history of the pebble;
A Short History of Probability
The Pascalian triangle (marble drop experiment perhaps) presented the opportunity for number systems to materialize out of such probabilities?
If one assumes "all outcomes", one then believes that for every invention to exist, it only has to be discovered. These were Coxeter's thoughts as well. Yet now we move beyond Boltzmann, to entropic valuations.
The Topography of Energy Resting in the Valleys then becomes a move beyond the notions of true and false and becomes a culmination of all the geometrical moves ever considered?
Sorry just had to get it out there for consideration.
160. Just consider the "gravity of the situation" :) and deterministic valuation of the photon in flight has distinctive meanings in that context?:)
161. But they have identical wave functions.
162. Phil:
You conclude from my argument exactly the opposite of what I intended to show. Nothing could more clearly demonstrate the confounding effects of cognitive bias.
I’m NOT saying that your cognition is biased and mine isn’t.
I’m saying that a mismatch between our preconceptions leads us to ascribe opposite meaning to the same sentences – with or without beables! This results in a paradoxical state of affairs: it seems we agree, even though our attempts to express that agreement make it seem like we disagree. Okay, so let me try again...
In my view, the probabilities are anything but the machinery! They are nothing more than a succinct way of encoding my knowledge of the state and structure of the machinery.
Neither my view nor Feynman’s nor Bell’s treats probabilities as beables.
The wave fronts of the functions which satisfy the Schrödinger equation are nothing other than the iso-surfaces of the classical action, which satisfies the Hamilton-Jacobi equation. The apparently non-local stationary action principle is enforced by the completely local Euler-Lagrange equation. This is no more or less mysterious than the apparently non-local interference of wave-functions. In the last analysis, they stem from the same root.
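(For concreteness, here is the standard WKB bridge between the two equations, assuming a single particle in a potential V.) Substituting \psi = A\,e^{iS/\hbar} into the Schrödinger equation and keeping only the leading order in \hbar,

i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi \;\longrightarrow\; \partial_t S + \frac{(\nabla S)^2}{2m} + V = 0,

which is exactly the Hamilton-Jacobi equation: surfaces of constant quantum phase are surfaces of constant classical action.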
Thus amplitudes are not real things. They are merely bookkeeping devices that record our knowledge about the space-time structure of the problem, while abstracting away much of the detail by representing its net effect as quantum phase. This is what Schrödinger was trying to tell us with his thought experiment about the cat.
By the same token, probabilities are not real things. They, too, are only bookkeeping devices, which quantify our ignorance of details. A correctly assigned probability distribution is as wide as possible, given everything we know. It is therefore not surprising that our estimated probability distribution becomes suddenly much sharper when we update it with the results of a measurement.
It was 1935 when Hermann showed that von Neumann’s no-hidden-variables argument was circular.
It was 1946 when Cox showed that the calculus of Kolmogorovian probabilities is the only consistent way to generalize Boole’s calculus of deductive reasoning to deal with uncertainty.
It was 1952 when Bohm published a completely deterministic, statistical, hidden-variables theory of quantum phenomena.
It was 1957 when Jaynes showed that probabilities in statistical mechanics have no objective existence outside the mind of the observer.
From 1964 to the end of his life, Bell could not disabuse people of the false notion that his theorem proved spooky action at a distance.
And now, in 2010, cognitive bias still prevents the majority of physicists from connecting the dots.
163. Neil B:
“prove me wrong if you dare”
Ha! This is trivially easy. Each neutron in your example exists in its own unique milieu of external influences. Thus they are identical machines operating on different inputs, which therefore give different outputs. Their dependence on initial conditions is very sensitive, so there is no correlation between the moments at which different particles decay, even if they are very close together. Only the half life survives as a statistical regularity. QED.
“Would you deny, that if I throw down one die it has 1/6 chance of showing a three? ... What other choice do we have?”
I claim that the state of the die will evolve as determined by its initial conditions and various influences that affect it in transit and modify its trajectory. Since I have imperfect knowledge of the initial conditions, and cannot predict the transient influences, and since I know that the final state depends very sensitively on these things, I have no rational choice but to treat the problem statistically. I will assign equal probabilities to the six faces only if I believe that the apparent symmetries of the die are real, and I will believe that only if I lack evidence to the contrary. However, if I see the die come up three, over and over again, I will have no rational choice but to adjust my assignment of probabilities, which amounts to revising my estimation about the symmetries.
So you see, these statements of “having no rational choice but to assign certain probabilities” are statements about me, and about the evolution of my knowledge about the die. They are not statements about the die. With each observation I make, my estimate of the probabilities changes, but the die remains the same.
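That updating process fits in five lines (a toy Bayesian bookkeeping sketch of my own, using a symmetric Dirichlet prior over the six faces):

import numpy as np

alpha = np.ones(6)            # prior counts: no evidence against the die's symmetry
for roll in [2] * 20:         # face "three" (index 2) comes up, over and over
    alpha[roll] += 1          # conjugate Dirichlet-multinomial update

posterior_mean = alpha / alpha.sum()
print("P(three) after twenty threes:", posterior_mean[2].round(3))   # ~0.81, no longer 1/6

Nothing happens to the die; only the assignment alpha changes.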
And nowhere in any of that did I say anything about an ensemble. No ensemble is required. If you think you need an ensemble, then you have already accepted many-worlds, whether you think you have, or not.
“It’s not about whether the observation changes the coin, it’s about the whole process being rigged in advance.”
The belief that this is not true of quantum phenomena is one the cognitive biases that result from the incorrect understanding of the nature of probability.
See also my remarks in reply to Phil.
164. Ain Soph, I appreciate finally getting a considered response. However, your position about environmental influences on something so fundamental as particle life-expectancy is very unorthodox and very unsupported, AFAIK, by any experiments. So you are a determinist, who thinks there is some particular reason for one neutron to decay after one span, and for another to last a different span? Then we should be more able to do two things:
1. Make batches from different sources and environments, that have varying tendencies to decay even if we can control the environment. If there's a clockwork inside each neutron, we should be able to create batches with at least some varying time spectra, such as lumping towards a particular span etc. But no such batches can be made, can they? Nor can we
(2.) Do things to the particles to stress them into later being short-lived, or long-lived, etc. That is unheard of.
Most telling is that if we let a bunch of particles decay for awhile and take, say, the remaining 1%, that 1% decays from then on in the same probabilistic manner as the batch did as a whole up to that point.
It is incredible, for a bunch of somethings with deterministic structure, to have a subset which lasts longer, but then has no further distinction after that time is up. The remaining older and older neutrons can keep being separated out, and no residual signal of a deterministic structure can be found after they've "held off" for all that time. They'd have to be like the silly old homunculus theory of human sperm, like endless Russian dolls waiting for any future contingency (look it up.) It is absurd, sorry.
It's looking at actual nuts and bolts, and not semantics or understanding about "probability", that best shows the point. You're right about the probability just being bookkeeping or coding of ignorance in a classical world, but our world is probably (!) not like that. A fresh neutron should be like a die with the same face up each time, just falling straight down. (BTW, an ensemble is the set of trials or particles in one world; it does not have to mean MWI. The other copy of a particle in our world is just as good a repetition as having ostensibly the same thing happen elsewhere too.)
The actual evidence supports the logically absurd idea that genuinely identical particles and states (empirical and theoretical basis up to the moment the similarity is shattered by a measurement or decay event, etc.) sometimes do one thing, sometimes another, for no imaginable reason as we understand and can model causality. Why? Because the universe is just weird.
And it isn't about understanding "probability" per se, which of course does not really exist in math anyway - all the outcomes are precoded into earlier conditions etc., which means it's a matter of whether pseudo-random patterns that would seem to pass the smell test had been "put in by hand", in the Laplacian sense, by God or whatever started up the universe's clockwork. It is about understanding what our universe is like, when it is involved in what we loosely call "probability", without truly understanding what that means in the real world. It is wrong to project and impose our supposed philosophical needs or prejudices upon it.
BTW, I was hoping you'd look at my experiment about recovering data after decoherence.
165. Neil B:
Of the two issues you raise, you are wrong about the first one and right about the second one. In both cases, the correct understanding of the issue supports my argument.
Firstly, if all neutrons are identical, then we definitely should not be able to prepare batches of neutrons with differing parameters. Further, particle decay obeys Poisson statistics, which are shift invariant. Hence knowing how long a given particle has lived tells you nothing about how much longer you can expect it to survive.
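That shift invariance is easy to check numerically (a toy simulation of my own, treating decay times as exponential with the free-neutron mean lifetime of roughly 886 seconds):

import numpy as np

rng = np.random.default_rng(2)
tau = 886.0                                   # free-neutron mean lifetime, seconds
lifetimes = rng.exponential(tau, 1_000_000)   # one decay time per neutron

survivors = lifetimes[lifetimes > 3 * tau]    # the ~5% still alive after three lifetimes
remaining = survivors - 3 * tau               # clock restarted at the selection moment

print(f"fresh batch, mean lifetime:    {lifetimes.mean():7.1f} s")
print(f"old survivors, mean remaining: {remaining.mean():7.1f} s")   # statistically identical

The survivors of any waiting period decay with exactly the same statistics as a fresh batch - which is Neil's 1% observation, and it is precisely what memoryless (exponential) decay requires.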
Secondly, there is indeed something you can do to “stress” a neutron to systematically affect its decay: you can put it into different nuclei, or leave it free. In that way, you can vary the mean lifetime of a neutron from about 886 seconds to infinity.
A “fresh” neutron will be in some unpredictable state determined by the unknown details of the process that created it.
An ensemble is not an actual set of particles or trials. People use ensemble arguments when they want to define probabilities as the limiting frequencies of occurrence in an infinite number of trials. But determining what would happen if we could perform an infinite number of trials is based on symmetry arguments of the kind I’ve already outlined. If you can do that correctly, you don’t need an ensemble. If you can’t, all the ensembles in the multiverse won’t help you.
Many-worlds is an attempt to rescue limiting frequencies in cases where postulating more than one trial makes no sense. For example, what is the probability that the sun will go nova in the next five minutes? Many-worlders claim to find that probability by counting the fraction of parallel universes in which the sun actually does go nova in the next five minutes.
Yeah. Right.
Many-worlds is the last resort of frustrated frequentists, desperately searching for ensembles in all the wrong places.
Oh, and... what experiment about recovering data after decoherence?
166. Neil:
What I was thinking you were saying is that MUH is in conflict with observation:
"If actual outcomes, sample sequences which are the true 'data' from experiments, are genuinely "random" ... then... MUH is invalid.
And I've tried to explain to you several times why MUH is not in conflict with observation.
Of course we don't know that MUH is correct. It's an assumption, as I've been telling you several times already. All I've been saying is that it is tautologically (by assumption) not in conflict with reality.
About the neutrons: Their behavior is described by a random process. All values this process can take "exist" mathematically in the same sense. You just only see one particular value. It is the same thing I've been telling you several times now already. Nobody ever said you must be able to "see" all of the mathematical reality. This is one of the assumptions you have been implicitly making that I tried to point out.
Incidentally you just called my replies to you inconsiderate. Which, given the time that I have spent, I don't find very considerate myself. Best,
167. This comment has been removed by the author.
168. This comment has been removed by the author.
169. Hi Ain Soph,
Perhaps, as you say, each of our biases has had us see that we disagree in places that we don’t. There’s not much more that could be said about this discussion about beables, since as you admit, whether the concept is useful to avoid biases really depends on your own biases.:-) Just a couple of comments as to what you said, then I think we should put this to rest, at least in terms of this blog. The first would be to say I disagree that Feynman didn’t consider probabilities as a beable, for he certainly did. I won’t defend this other than just to say that you would have to point me to something more specific that would convince me otherwise. Lastly, what Hermann demonstrated as being wrong with von Neumann’s proof was not that it was a circular argument, but rather that it assigned the logic of the averaged value of an ensemble to situations where it just couldn’t logically be demanded to hold.
As to all the back and forth comments in respect to probability, what these in the end represent comes down to whether one believes, rather than knows, that there is such an entity as the random set - that is, outside of it being something that can only be defined mathematically by what it isn’t, rather than what it is. This reminds me of a time some years back when I was playing craps late into the evening in Atlantic City, and noticed one fellow off to the side scribbling each roll of the dice on a note pad. When it came time for me to leave the table, I asked this fellow if he believed what he was keeping track of would help him to win, with his reply being of course, because it was all a matter of the probabilities. Then to continue, I asked had he never heard that Einstein said that God doesn’t play dice, and he replied yes I have, so what does that have to do with it. I then said that what he meant, which is of importance here, is that even God could not have randomness do the work of having something known to be real, so then what chance do you think you have in being able to succeed? :-)
170. Hi Ain Soph,
Just one thing I forgot to add is that from what I’m able to gather, you are one of those that consider the working of reality as being that of a computer. Actually I have no difficulty with this, as long as a computer is not limited to being digital. The way I look at it, with respect to having both waves and particles as beables, this would have the computer be analogue, yet restricted to digital output :-)
171. Bee, I don't know why you think I implied or said your replies were inconsiderate. When I said it's a shame we continue to disagree, I meant in the usual sense of "it's unfortunate it's that way" rather than "shame" over something bad. Or you might be confusing my use of "considered" in a reply to Ain Soph, not you, in which I said I appreciated finally getting such a reply? The word "considered" means having put effective thought into the comment instead of just tossing off IMHO assumptions etc. It does not mean the same as "considerate", meaning caring about someone else, being polite etc. REM that I am cross-talking to you and A.S. about nearly the same point, since you both seem to accept determinism (or its viability) and don't seem to appreciate my point about neutrons and math structures, etc.
Perhaps also you have some lingering soft spots in practical English, although your writing is in general excellent and shows correct parsing of our terms and grammar at a high level. Note that English is full of pitfalls of words and phrases that mean very differently per context.
Note also that when two people keep debating and neither yields, then both are "stubborn" in principle. I suggest seeking third-party insight, which I predict will be a consensus in the field of foundations (not applied math) that identical math structures must produce identical results (as A.S. now seems to admit - saying it's a matter of environmental influence, about which more in due course), and that a field of possibilities is just an abstraction. Hence it is not possibly a way to get one identical particle to last one duration, and another, a different duration. It is not a "machinery" for producing differential results in application. That is so regardless of what kind of universe we are in or how many others there are etc. Either we pre-ordain the behavior in the Laplacian sense, or it is inexplicably random and varying despite the identical beginning states.
This is not my own idiosyncratic notion, but supported by extensive reading of historical documents in science and phil-sci that included works of founders of QM etc. Sure, we can't figure out "how can this be?" - it's just the breaks.
In any case I'm sorry you felt put-down, but you can be relieved that isn't what I meant.
172. Further possible confusion: in practical (English?) discourse, if a comment is addressed to soandso then the statement:
"Soandso, I appreciate finally getting a considered response..."
is supposed to mean, "I appreciate finally getting _____"[from you] rather than, "I appreciate getting _____" from at least someone, at all, period. I'm not being a nit-picker about trivia, just don't want anyone to feel slighted.
Ain Soph: I mean, the proposed experiment I describe at my name-linked blog, the latest post "Decoherence interpretation falsified?" (It's a draft.) Please, look it over, comment etc.
173. Neil: Thank you for the English lesson, and I apologize for any confusion in case "inconsiderate" is not the opposite of "considered," which is what I meant. Yes, I was referring to your earlier comment addressed at Ain Soph. Your statement, using your description, implies that you think I have not "put effective thought" into my comments, which I find inappropriately dismissive. In fact, if you read through our exchange, I have repeatedly given you arguments why your claim is faulty which you never addressed. I am not "tossing off" assumptions, I am telling you that your logical chain is broken, and why so. It is not that I do not "appreciate" your point, I am telling you why you cannot use it to argue MUH is in disagreement with observation. This is not a "debate," Neil, it is you attempting an argumentum ad nauseam.
Finally, to put things into the right perspective, nowhere have I stated whether I "accept" determinism or not, and for the argument this is irrelevant anyway. Nevertheless, when it comes to matters of opinion, I have told you several times already that I believe in neither MUH nor MWI. I am just simply telling you that your argumentation is not waterproof. Best,
174. Bee, I think you missed my followup to the explanation about "considered" - as I said there, I meant to Ain Soph that he/she had finally given me a "considered" [IMHO] reply, not that finally "someone" had - which would mean, no one else had either! So can we finally be straight about that, since you were not meant to be included?
As for argumentum ad nauseam, I note that you keep mostly repeating yourself as well, so wouldn't that apply to both of us if so? Also, I have provided some new ideas such as the example of neutrons, moving beyond more abstract complaints.
So let's forget about MUH for awhile (and since it involves accepting "all possible math structures", which goes beyond merely saying that this world is fully describable by math.) Note also that even if a person's argument is not airtight, it can still be the most plausible one. Also, AFAIK I do have majority support (or used to?) in the sci-phil community.
175. BTW, I just got a FBF acceptance from Stefan! Thanks. The blog is good "you guys" (another colloquialism that in English can now include any gender) overall. To other readers: Bee's FB page is cute and interesting, much more than the typical scientist's.
176. Neil: Okay, let's forget about the considerable considerations, this is silly anyway. Of course I am repeating myself, because you are not addressing my arguments. Look, I am afraid that I read your "new ideas" simply as attempts to evade a reply to my arguments. But besides this, I addressed the neutrons already above. Best,
177. Hi Bee,
”I am just simply telling you that your argumentation is not waterproof.”
Interesting that much of this conversation ends up focused around semantics; looking at what you said to Neil reminded me it at times can be non-trivial. That is, particularly in today’s scientific climate, I’d rather have my theory be bulletproof, while less concerned if it be waterproof, as there is a significant difference between being all wet and dead:-)
c.c. Neil Bates
178. I think we've come a long way from Spooky.:)
179. Plato: I wonder if your piece on imaging with entangled photons is the same idea as this stunning report:
Wired Magazine, Danger Room: “Air Force Demonstrates ‘Ghost Imaging’”, by Sharon Weinberger, June 3, 2008
Air Force funded researchers say they’ve made a breakthrough in a process called "ghost imaging" that could someday enable satellites to take pictures through clouds.
180. Phil:
You have a point about Feynman. Although, on page 37 of his 1985 book, QED, we find
... the price of this great advancement of science is a retreat by physics to the position of being able to calculate only the probability that a photon will hit a detector, without offering a good model of how it actually happens.
which draws a clear distinction between what we can calculate (probabilities) and what actually happens (beables), yet on page 82 he says
by which I think he really means that he, himself, has given up on it -- which is sad, because his path integrals build such a clear bridge between quantum phase and classical action; they are bound to play a central role in the defeat of quantum mysticism.
Also, I think your analogue computer, restricted to digital output, is an excellent metaphor! At least to first order. It reminds me of Anton Zeilinger’s remark that “a photon is just a click in a photon detector.”
181. Ain Soph, I think what you call "quantum mysticism" is just what nature is like. Why must She make sense? She is not like the Queen of England, she is like Lady Gaga: "I'm a freak bitch, baby!" About neutrons: yes, in an extreme case, inside a nucleus, neutrons are stable. But in the bound state they are exchanging with other nucleons, so that's not a proper dodge regarding in-flight differences. You seem to admit that a real mechanism would mean we could make a batch of "five minute neutrons", but almost no one thinks we could. We can't even make a batch that has a bias, etc. That is absurd. The consistent Poisson distribution is "mystical"; it is absurd.
The alternative would be a ridiculous Rube-Goldberg world where intricate arrangements were made to program each little apparently identical particle with a mechanism that could never be exposed, never tricked into revealing the contrivance by how we grouped the particles, how we made them, waiting them out, nothing. The universe can't do that. It's something to accept.
Again, re the proposed information recovery experiment: I describe it in my blog.
182. Neil B:
Tyrranogenius??!!?! Ree-hee-hee-ly... Ahem. Anyway...
I took a look at your post about recovering information after decoherence, and I pretty much agree with most of what you wrote. But let’s be clear about what this really implies about the nature of probability.
This little thought experiment of yours clearly demonstrates my point, that there is nothing special or mysterious about quantum probabilities; they are nothing other than classical probabilities applied to things that have a phase.
There is a somewhat analogous experiment in statistical mechanics. One puts a drop of black ink in a viscous white fluid contained in the thin annular space between two transparent, rigid cylinders. Then one turns the outer cylinder relative to the inner one, and watches as the ink dot is smeared out around the circumference, becoming an increasingly diffuse grey region until it finally disappears completely. If the rotation is continued long enough, the distribution of ink can be made arbitrarily close to uniform, both circumferentially and longitudinally.
Eventually, one concludes that entropy has increased to a maximum and the information about the original location of the ink drop has been irreversibly lost. However, if one then reverses the relative rotation of the cylinders, one can watch as the ink drop is reconstituted, returning to its original state exactly when the net relative rotation of the cylinders returns to zero.
This works better with more viscous fluids, but only because that makes it easier to reverse the process. The ease of demonstrating the principle depends on the viscosity, but the principle itself does not. And the principle is this: information is never lost in a real process, but it can be transformed in ways that make it prohibitively difficult to recover. Of course, “prohibitively difficult” is in the eye of the beholder. It is not a statement about the system; it is a statement about the observer.
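A stripped-down numerical caricature of the demonstration (my own toy: differential rotation of tracer points in an annulus, standing in for the viscous shear):

import numpy as np

rng = np.random.default_rng(3)
r = rng.uniform(1.0, 2.0, 10_000)        # radial positions in the annulus
drop = rng.normal(0.0, 0.05, 10_000)     # a tight angular "ink drop"

def shear(theta, turns):
    # Differential rotation: the angular advance depends on radius.
    return (theta + turns * 2 * np.pi * (r - 1.0)) % (2 * np.pi)

smeared = shear(drop, turns=500)         # looks angularly uniform -> "thermalized"
restored = shear(smeared, turns=-500)    # exact reversal of the shear

err = np.abs(((restored - drop + np.pi) % (2 * np.pi)) - np.pi).max()
print("angular spread, drop vs smeared:", np.std(drop).round(3), np.std(smeared).round(3))
print("max error after reversal:", err)  # tiny: the drop comes back to numerical precision

The smeared state passes any local test for uniformity, yet reversing the turning recovers the drop almost exactly: nothing was ever lost, it only became inaccessible to anyone ignorant of the history.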
If you come along after I’ve turned the cylinders, not knowing that they were turned, and I challenge you to find the ink droplet, you will measure the distribution of ink, find it so close to uniform that you declare the difference to be statistically insignificant, and conclude that my challenge is impossible, saying the information is irretrievably lost. That is, you will say that the mixture is thermalized.
But then I say, no, this is not a mixed state at all; it is an entangled state. And to prove it, I turn the cylinders backwards until the ink drop reappears. Voila!
So you see, the question is not, when is the information lost, but rather, at what point is recovering it more trouble than it’s worth? And the answer depends on what you know about the history of the situation.
The moral of the story is, one man’s information is another man’s noise. There is no such thing as “true randomness.”
And this is the real lesson to be learned from the whole messy subject of decoherence.
183. Neil B:
I say, “if all neutrons are identical, then we definitely should not be able to prepare batches of neutrons with differing parameters.”
And you reply “you seem to admit that a real mechanism would mean we could make a batch of five minute neutrons.”
No wonder you think other people’s posts are not carefully considered.
You don’t pay attention to what they write.
184. Ain Soph, thanks for looking at my blog and getting the point about recovering information, even if we don't agree about the significance (REM, I say we can recover the original bias of amplitude differences, not specific events). As for the name, well, it's supposed to be cute and creative.
Neutrons: but the statements are flip sides of the same point:
Right, so if they weren't identical, and were deterministic (as you seem to think they must be, and "God only knows" what Bee really thinks IMHO, but I'll leave her alone from now on), then we would be able to prepare a batch of "five minute neutrons." They would, of course, be a whole batch matching that portion of a normal motley crew of varying lifetimes which lasts five minutes. Of course almost no one thinks we can do that, hence neutrons are likely identical, hence looking for a mechanism to break the mystical potential of events is likely hopeless.
You need to think less one-dimensionally?
185. Hi Ain Soph,
So I guess on the question of probabilities we find Feynman to have given them a physicality that just can't be justified. He also held similar notions as to the meaning of information and what that implied in terms of physical reality, as in what is to be considered physically real and what isn't. I think what separates the way we each look at all of this is rooted in what our most basic biases are, and in what forms our ontological centres.
So when I say analogue rendering only digital results, I mean just that, with having to attach a separate and distinct entity to both, while you seem able to have only one thing stand as being both. To me this is reminiscent of those puzzles I would get as a child, where one connects the dots, and after tracing between them something would appear as a figure, such as a boy's or girl's face, or some inanimate object. I find your way of looking at the world is just to see the dots, while the lines between are spaces having no meaning or consequence. However, for me it is to have the figure as the thing that, no matter where the dots are looked for, and even when not found, still exists, as do the dots.
Now as much as I hate to admit it, this is one bias that I fear each of us will never be able to discard, and as such, for both of us the how and the why of the world will be looked for from two distinctly different perspectives. That said, I have no complaint about you being a Feynman fan, as you have come by this information (digital) perspective of reality honestly, for it can be attributed largely to him. To quote Mehra's biography of Feynman, 'The Beat of a Different Drum', under the heading '24.2 Information as a Physical Reality' (page 530), Feynman's thoughts on this in summation read:
“This example has demonstrated that the information of the system contributes to its entropy and that information is a well-defined physical quantity, which enters into conservation laws”
The thing is, I have no problem with this statement, other than to echo Bell's complaint, when this all-is-information view was proposed, in asking "information about what?". With the Feynman perspective, as with his diagrams, this information represented only what the correlated assembly (the group of dots) yields, without regard for what formed the cause of the correlations; just as in his diagrams, those wavy lines in between are assigned no physicality yet are required all the same. So once again, for me the question is not how it happens that physics can demonstrate so well why there can be no hidden variables, but rather how it can consider it a good beginning to deny what experiment makes so evident to be deduced by reason. This is where I find the quantum mysticism to begin, for the same reason given by Einstein to Heisenberg when he explained:
"...every theory in fact contains unobservable quantities. The principle of employing only observable quantities simply cannot be consistently carried out."
Anyway, despite our biases, I have to respect that you take your position seriously, as do I; yet I'm convinced that no matter what the outcome, each of us would be more than grateful if an experiment could be devised which could make clear which is simply wishful thinking and which is nature's way of being.
186. Phil, REM that experimental proposal of mine that you've read (and correctly understood, as did Ain Soph). If we can recover such information about input amplitudes after the phases are scrambled - and the direct optical calculation, which has never been wrong, says we can - that is a game changer. The output from BS2 in my setup "should" be a "mixture" in QM, i.e. equivalent to whole photons randomly going out one face or the other. But if not, then the fundamentals have to be reworked and we can't use traditional DM or mixture language.
I'm serious as a heart attack, it's not braggadocio but a clear logical consequence. (BTW anyone, that blog name is supposed to be cute and camp, not to worry.)
187. Remark to All:
Many valid arguments have been presented over the years that should be, to use Neil’s phrase, “game changers.” But they’re not.
Einstein, Schrödinger, Bohm and Bell, put together, were not able to counter the irrationality that was originated by the “Copenhagen Mafia” and continues to be aggressively promoted today.
As we have strikingly demonstrated in this very thread, contemporary physics is hobbled by an inability to agree on the meaning of such basic terms as “reality,” “probability,” “random” and “quantum” -- just to name a few. Thus, endless semantic quibbling has been imported into physics and drowns out any vestige of substantive debate that could lead to real progress.
Willis E. Lamb, awarded the 1955 Nobel Prize in Physics for discovering the Lamb shift, states categorically in a 1995 article [Appl. Phys. B, 60(2-3):77] that
“there is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists.”
But there is quite some evidence to indicate that these errors are anything but accidental.
In Disturbing the Memory, an unpublished manuscript written by Edwin T. Jaynes in 1984, he describes why he had to switch from doing a Ph.D. in quantum electrodynamics under J. R. Oppenheimer to doing one on group theoretic foundations of statistical mechanics under Eugene Wigner:
“Mathematically, the Feynman electromagnetic propagator made no use of [QED’s] superfluous degrees of freedom; it was equally well a Green’s function for an unquantized EM field. So I wanted to reformulate electrodynamics from the ground up without using field quantization. ... If this meant standing in contradiction with the Copenhagen interpretation, so be it. ... But I sensed that Oppenheimer would never tolerate a grain of this; he would crush me like an eggshell if I dared to express a word of such subversive ideas.
“Oppenheimer would never countenance any retreat from the Copenhagen position, of the kind advocated by Schrödinger and Einstein. He derived some great emotional satisfaction from just those elements of mysticism that Schrödinger and Einstein had deplored, and always wanted to make the world still more mystical, and less rational. ... Some have seen this as a fine humanist trait. I saw it increasingly as an anomaly -- a basically anti-scientific attitude in a person posing as a scientist.”
Whether or not it started out that way, in the end the truth was of no importance in all of this, as exemplified by Oppenheimer’s remark (quoted by F. David Peat, on page 133 of Infinite Potential, his 1997 biography of David Bohm):
“If we cannot disprove Bohm, then we must agree to ignore him.”
There are other stories like this.
Is this evidence of an innocent cognitive bias, or something more dangerous?
188. Ain Soph,
I must admit to still being confused by your position, for on one hand you seem to agree with Lamb that there is no such thing as a photon, and then you have empathy for Bohm. These are completely opposite views ontologically, which I would have thought you would have to settle first in order to move forward, if only for yourself. So I would ask, which is it to be, Lamb or Bohm?
189. Phil:
Actually, I think it would be a mistake to follow either too dogmatically. I’m not sure that the two of them are as incompatible as they seem at first sight, unless one insists on treating fields and particles identically. I can see the appeal in that, but it does cause a lot of problems.
To be sure, the renormalization procedures of quantum electrodynamics can be made to yield impressive numerical accuracy, but this in itself does not validate the underlying physics: Ptolemaic epicycles can be made to reproduce planetary motions with arbitrary accuracy, even though the underlying model is essentially theological.
For radiation, Jaynes notes that only emission and absorption can be shown unequivocally to be quantized, and that only two coefficients are required to completely specify each field mode. Lamb gave completely classical treatments of the laser and the Mössbauer effect, showing that neither photons nor phonons are strictly required.
Jaynes also showed that the arguments claiming that the Lamb shift, stimulated emission and the Casimir effect prove the physical reality of the zero-point energy are circular; they assume that these things are quantum effects at the outset. For every effect that is commonly held to prove the physical reality of the zero-point energy, one can find an alternative classical derivation from electromagnetic back-reaction.
So I have yet to see a valid argument that compels me to quantize the field.
In regard to Bohm, I must say that I prefer the crisp clarity of his earlier work to his later dalliance with the mysticism of the implicate order. His demonstration, together with Aharonov, that the vector potential is a real physical entity, was masterful. And his pilot wave theory proves unequivocally that a hidden-variables theory is possible.
But I am not ready to commit to any detailed interpretation of the pilot wave, primarily because of the treatment of the Dirac equation given by Hestenes, who takes the zitterbewegung to be physically real. From that starting point, one can not only construct models of the electron reminiscent of the non-radiating current distributions of Barut and Zanghi, but one can also recover the full U(1) x SU(2) x SU(3) symmetry of the standard model.
In short, unless your interest in physics is motivated only by the desire to build gadgets, it would be a grave error to follow David Mermin’s curt injunction to “shut up and calculate.”
190. Ain Soph, I think you are blaming the wrong agents here! It's not the fault of scientists and philosophers trying to get a handle on the odd situation of quantum mechanics. Sure, many of them aren't doing the best they can - I rap in particular the wretched circular argument and semantic sleight of hand of advocates for decoherence as an excuse for collapse, or for not seeing macro superpositions. No, the "fault" is not in (most of ;-) us but in the stars: it's the universe just, really, being weird. It really doesn't make sense. Why should it?
But yeah, maybe pilot waves can do something but I consider it a cheesy kludge. And even if it handles "particles" trajectories (uh, I'm still trying to imagine what funny kind of nugget a "photon" would be ... polarized light in 2 DOFs? Differing coherence lengths based on formation time? Hard for even the real Ain Soph to straighten out), what about neutron and muon decay and all that?
As for my proposed experiment: as I said, its significance transcends interpretation squabbles. Nor is it in the vein of previous paradoxes. It means getting more information out than was thought possible before. I say that is indeed a game changer.
191. Neil:
“... the universe just, really, being weird. It really doesn’t make sense.”
That’s exactly what THEY want you to think!
Seriously: that has got to be the most self-defeating bullsh!t I’ve ever heard.
Reality cannot be inconsistent with itself.
A is A. Of course it makes sense.
We just haven’t figured it out yet.
Quantum mechanics is no weirder than classical mechanics. The universe may very well be non-local, but simply cannot be acausal.
If you learn nothing else from John Bell and David Bohm, learn that.
192. Reality cannot be inconsistent with itself.
It isn't, I suppose, in the circularly necessary and thus worthless sense; it's just inconsistent with what is conceptually convenient or sensible to us (or with being put into the MUH).
A is A. Of course it makes sense.
Randian QM? Just as unrealistic as for the human world.
Quantum mechanics is no weirder than classical mechanics.
Yes, it is. Even with concepts like PWs, what are we going to do about electron orbitals and shells, their not radiating unless perturbed - and then the process of radiation itself, tunneling and all that (and still, neutrons and muons etc which almost no one accepts as being a matter of outside diddling. What about the particles that don't last long enough to be diddled?)
So I guess you think everything is determined, so we have to worry about why each muon decayed at all those specific times, etc. What a clunky mess; why not let it go? My reply is: it can be whatever it wants to be. I think, horribly to the usual suspects around here, that it's here first for a "big" reason like our existence, and only second to be logically nice. That may be mysticism, but so is the idea that the second purpose is uppermost.
They (Bell and Bohm) were bright folks but ideological imposers. I want to say Bell should have known better because of entangled properties (which are not supposed to be like "Bertlmann's socks," i.e. preselected ordinary properties), but maybe he thought there was a clever way to set it all up. But even if you imagine a pre-related pair of photons, the experimenter has to be dragged into the conspiracy too. Bob has to be forced to set a polarizer at some convenient angle, so he and Alice can get the same result. It's not enough for the photons to be e.g. "really at 20 degrees linear polarized," because if A & B use 35 degrees, they still get the same result as each other.
Yet it can't be an inevitable result of the 15 degree difference either, since there is a pattern of yes and no - the correlation is what matters. If pilot waves can arrange all that, they might as well just be the real entities anyway.
BTW your anonymity is your business, but if you drop a blog name etc. it might be worthwhile.
PS I've had a heck of trouble with Google tonight, are the Chinese really messing with them that much?
193. Causality and determinism are two different ideas.
The world can be causal and non-deterministic.
194. This comment has been removed by the author.
195. Hi Ain Soph,
Well, that was certainly a nice way of dancing around the question, and perhaps as such you feel you suffer less bias, and maybe rightfully so. In this respect I guess I'm not as fortunate as you, for I see the world as something that's always moving from becoming to being, as if driven there by potential. So no matter which way you care to express it, for me there must be something that stands as the source of potential and another that stands as its observed result, and both must be physical in nature for them to be considered real.
The fact is nature has demonstrated to be biased, through things like symmetry, conservation and probability, with then having these biases manifest themselves consequentially as invariance, covariance, action principle and so on. The job of science is then by way of observation (experiment) to discover how nature is biased and then through the use of reason to consider how such biases must be necessary to find things the way that they are; or in other words why. However, if all that a scientist feels their job as being is to figure out the recipe for having things to be real, without seeing it required to ask why, that is their failing and not a bias mandated by science itself. This is the bias expressed first by Newton himself, which Bohr merely served to echo later that those like Descartes, Einstein and Bohm never did agree with. So I find in relation to science this to be the only bias that holds any significance in terms of its ultimate success.
- Albert Einstein, September 1944 [Born-Einstein Letters]
196. Arun said: "Causality and determinism are two different ideas. The world can be causal and non-deterministic."
Very true. If event A happens, then either event B or C might happen. In which case event B or C would be caused by event A, but it would still be non-deterministic.
Ain Soph: "Quantum mechanics is no weirder than classical mechanics." Well, it seems pretty weird to me! Do you have access to some information the rest of us don't have?
197. Andrew - good distinction about causality v. determinism. That's basically what I meant when disagreeing with Ain Soph, forgetting the difference. Hence, we can't IMHO explain the specifics of the outcomes. But in common use, "causality" is made to be about the timing itself, so people say "the decay was not caused to be at that specific time by some pre-existing process or law" (the "law," such as it is, applies only to the probability being X).
You would likely have an interest in my proposal to recover apparently lost amplitude information. It's couched in terms of disproving that decoherence solves macro collapse problems, but there is no need to agree with me about that particular angle. Getting the scrambled info back is significant in any case, and the expectation that it couldn't be done is orthodoxy, not a debate between schools. I've gotten some interest from e.g. blogger quantummoxie, but indeed I need a diagram!
198. Neil:
Reality may look strange when you can’t take the speed of light to be infinite, or neglect the quantum of action, or treat your densities as delta functions, and even stranger in the face of all three. But that’s not the same as not making sense.
A is A... these days, you hear it in the form, “it is what it is.” To deny it is to deny reason. But you just blow it off with a non sequitur. I guess that’s what you have to do if you want to believe that reality makes no sense.
In your reply to Andrew, you are back to pretending there is a difference between “completely unpredictable” and “truly random.” Again, I guess you have to, otherwise you can’t cling to the idea that reality makes no sense.
By the way, thanks for the expression of interest, but I don’t have a blog.
Scattering and tunnelling
3.3 Scattering from a finite square step
The kind of one-dimensional scattering target we shall be concerned with in this section is called a finite square step. It can be represented by the potential energy function

V(x) = 0   for x ≤ 0,   (7.6)
V(x) = V₀   for x > 0.   (7.7)
The finite square step (Figure 8) provides a simplified model of the potential energy function that confronts an electron as it crosses the interface between two homogeneous media. The discontinuous change in the potential energy at x = 0 is, of course, unrealistic, but this is the feature that makes the finite square step simple to treat mathematically. The fact that we are dealing with a square step means that we shall only have to consider two regions of the x-axis: Region 1 where x ≤ 0, and Region 2 where x > 0.
Figure 8 A finite square step of height V₀ < E₀
Classically, when a finite square step of height V₀ scatters a rightward moving beam in which each particle has energy E₀ > V₀, each of the particles will continue moving to the right but will be suddenly slowed as it passes the point x = 0. The transmitted particles are slowed because, in the region x > 0, each particle has an increased potential energy, and hence a reduced kinetic energy. The intensity of each beam is the product of the linear number density and the speed of the particles in that beam. To avoid any accumulation of particles at the step, the incident and transmitted beams must have equal intensities; the slowing of the transmitted beam therefore implies that it has a greater linear number density than the incident beam.
Exercise 2
In general terms, how would you expect the outcome of the quantum scattering process to differ from the classical outcome?
In view of the quantum behaviour of individual particles (as represented by wave packets) when they meet a finite square barrier, it is reasonable to expect that there is some chance that the particles encountering a finite square step will be reflected. In the case of quantum scattering we should therefore expect the outcome to include a reflected beam as well as a transmitted beam, even though E₀ > V₀.
We start our analysis by writing down the relevant Schrödinger equation:

iℏ ∂Ψ(x,t)/∂t = −(ℏ²/2m) ∂²Ψ(x,t)/∂x² + V(x)Ψ(x,t),   (7.8)

where V(x) is the finite square step potential energy function given in Equations 7.6 and 7.7. We seek stationary-state solutions of the form Ψ(x,t) = ψ(x)e^(−iE₀t/ℏ), where E₀ is the fixed energy of each beam particle. The task of solving Equation 7.8 then reduces to that of solving the time-independent Schrödinger equations

−(ℏ²/2m) d²ψ/dx² = E₀ψ(x)   in Region 1 (x ≤ 0),   (7.9)
−(ℏ²/2m) d²ψ/dx² + V₀ψ(x) = E₀ψ(x)   in Region 2 (x > 0).   (7.10)

A simple rearrangement gives

d²ψ/dx² = −k₁²ψ(x)   (Region 1),   d²ψ/dx² = −k₂²ψ(x)   (Region 2),

and it is easy to see that these equations have the general solutions

ψ(x) = Ae^(ik₁x) + Be^(−ik₁x)   in Region 1 (x ≤ 0),   (7.11)
ψ(x) = Ce^(ik₂x) + De^(−ik₂x)   in Region 2 (x > 0),   (7.12)

where A, B, C and D are arbitrary complex constants, and the wave numbers in Region 1 and Region 2 are respectively

k₁ = √(2mE₀)/ℏ   and   k₂ = √(2m(E₀ − V₀))/ℏ.   (7.13)
You may wonder why we have expressed these solutions in terms of complex exponentials rather than sines and cosines (recall the identity e^(ix) = cos x + i sin x). The reason is that the individual terms in Equations 7.11 and 7.12 have simple interpretations in terms of the incident, reflected and transmitted beams. To see how this works, it is helpful to note that

p̂ₓ e^(±ikx) = ±ℏk e^(±ikx),

where p̂ₓ = −iℏ ∂/∂x is the momentum operator in the x direction.

It therefore follows that terms proportional to e^(ikx) are associated with particles moving rightward at speed ℏk/m, while terms proportional to e^(−ikx) are associated with particles moving leftward at speed ℏk/m.
These directions of motion can be confirmed by writing down the corresponding stationary-state solutions, which take the form

Ψ(x,t) = Ae^(i(k₁x−ωt)) + Be^(−i(k₁x+ωt))   in Region 1 (x ≤ 0),   (7.14)
Ψ(x,t) = Ce^(i(k₂x−ωt)) + De^(−i(k₂x+ωt))   in Region 2 (x > 0),   (7.15)

where ω = E₀/ℏ. We can then identify terms of the form e^(i(kx−ωt)) as plane waves travelling in the positive x-direction, while terms of the form e^(−i(kx+ωt)) are plane waves travelling in the negative x-direction. None of these waves can be normalised, so they cannot describe individual particles, but you will see that they can describe steady beams of particles.
In most applications of wave mechanics, the wave function Ψ(x, t) describes the state of a single particle, and |Ψ(x, t)|² represents the probability density for that particle. In the steady-state approach to scattering, however, it is assumed that the wave function Ψ(x, t) describes steady beams of particles, with |Ψ(x, t)|² interpreted as the number of particles per unit length – that is, the linear number density of particles. We know that the wave function is not normalisable, and this corresponds to the fact that the steady beams extend indefinitely to the left and right of the step and therefore contain an infinite number of particles. This will not concern us, however, because we only need to know the linear number density of particles, and this is given by the square of the modulus of the wave function.
Looking at Equation 7.14, and recalling that the first term Ae^(i(k₁x−ωt)) represents a wave travelling in the positive x-direction for x ≤ 0, we identify this term as representing the incident wave in Region 1 (x ≤ 0). We can say that each particle in the beam travels to the right with speed v_inc = ℏk₁/m, and that the linear number density of particles in the beam is n_inc = |A|².
(You will find further justification of this interpretation in Section 3.4.)
Similarly, the second term on the right of Equation 7.14 can be interpreted as representing the reflected beam in Region 1 (x ≤ 0). This beam travels to the left with speed v_ref = ℏk₁/m and has linear number density n_ref = |B|².
The first term on the right of Equation 7.15 represents the transmitted beam in Region 2 (x > 0). This beam travels to the right with speed v_trans = ℏk₂/m and has linear number density n_trans = |C|². The second term on the right of Equation 7.15 would represent a leftward moving beam in the region x > 0. On physical grounds, we do not expect there to be any such beam, so we ensure its absence by setting D = 0 in our equations.
Using these interpretations, we see that the beam intensities are:

j_inc = |A|² ℏk₁/m,   j_ref = |B|² ℏk₁/m,   j_trans = |C|² ℏk₂/m.   (7.16)

Expressions for the reflection and transmission coefficients then follow from Equation 7.5:

R = j_ref/j_inc = |B|²/|A|²,   (7.17)

T = j_trans/j_inc = (k₂/k₁) |C|²/|A|².   (7.18)
It is worth noting that the expression for the transmission coefficient includes the wave numbers k₁ and k₂, which are proportional to the speeds of the beams in Regions 1 and 2. The wave numbers cancel in the expression for the reflection coefficient because the incident and reflected beams both travel in the same region.
To calculate R and T, we need to find the ratios B/A and C/A. To achieve this, we must eliminate unwanted arbitrary constants from our solutions to the time-independent Schrödinger equation. This can be done by requiring that the solutions satisfy continuity boundary conditions:
• ψ(x) is continuous everywhere.
• dψ(x)/dx is continuous where the potential energy function is finite.
The first of these conditions tells us that our two expressions for ψ(x) must match at their common boundary x = 0. From Equations 7.11 and 7.12, we therefore obtain

A + B = C.   (7.19)
Taking the derivatives of Equations 7.11 and 7.12,

dψ/dx = ik₁(Ae^(ik₁x) − Be^(−ik₁x))   in Region 1,   dψ/dx = ik₂Ce^(ik₂x)   in Region 2 (with D = 0),

so requiring the continuity of dψ/dx at x = 0 implies that

k₁(A − B) = k₂C.   (7.20)
After some manipulation, Equations 7.19 and 7.20 allow us to express B and C in terms of A:

B = [(k₁ − k₂)/(k₁ + k₂)] A   and   C = [2k₁/(k₁ + k₂)] A.
Combining these expressions with Equations 7.17 and 7.18, we finally obtain

R = (k₁ − k₂)²/(k₁ + k₂)²,   (7.21)

T = 4k₁k₂/(k₁ + k₂)².   (7.22)
Since k₁ = √(2mE₀)/ℏ and k₂ = √(2m(E₀ − V₀))/ℏ, where E₀ is the incident particle energy and V₀ is the height of the step, we have now managed to express R and T entirely in terms of given quantities. The transmission coefficient, T, is plotted against E₀/V₀ in Figure 9.
Figure 9 A graph of the transmission coefficient T against E₀/V₀ for a finite square step of height V₀
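The curve in Figure 9 is easy to reproduce numerically. Here is a minimal sketch (ours, not part of the course materials) that evaluates Equations 7.21 and 7.22 for a few values of E₀/V₀ > 1; only the ratio k₂/k₁ = √(1 − V₀/E₀) matters:

```python
import numpy as np

def step_coefficients(E_over_V):
    """R and T for a finite square step with E0 > V0,
    from R = (k1 - k2)^2/(k1 + k2)^2 and T = 4 k1 k2/(k1 + k2)^2."""
    kappa = np.sqrt(1.0 - 1.0 / E_over_V)   # the ratio k2/k1
    R = (1.0 - kappa) ** 2 / (1.0 + kappa) ** 2
    T = 4.0 * kappa / (1.0 + kappa) ** 2
    return R, T

for x in (1.01, 1.5, 2.0, 5.0, 10.0):
    R, T = step_coefficients(x)
    print(f"E0/V0 = {x:5.2f}:  R = {R:.4f}  T = {T:.4f}  R + T = {R + T:.4f}")
```

As Figure 9 shows, T rises steeply from zero at E₀ = V₀ and approaches 1 for E₀ ≫ V₀, with R + T = 1 throughout.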
The above results have been derived by considering a rightward moving beam incident on an upward step of the kind shown in Figure 8. However, almost identical calculations can be carried out for leftward moving beams or downward steps. Equations 7.21 and 7.22 continue to apply in all these cases, provided we take k₁ to be the wave number of the incident beam and k₂ to be the wave number of the transmitted beam.
The formulae for R and T are symmetrical with respect to an interchange of k₁ and k₂, so a beam of given energy, incident on a step of given magnitude, is reflected to the same extent no matter whether the step is upwards or downwards. This may seem strange, but you should note that the reflection is a purely quantum effect, and has nothing to do with any classical forces provided by the step.
Another surprising feature of Equation 7.21 is that R is independent of m and so does not vanish as the particle mass m becomes very large. However, we know from experience that macroscopic objects are not reflected by small changes in their potential energy function – you can climb upstairs without serious risk of being reflected! How can such everyday experiences be reconciled with wave mechanics?
This puzzle can be resolved by noting that our calculation assumes an abrupt step. Detailed quantum-mechanical calculations show that Equation 7.21 provides a good approximation to reflections from a diffuse step provided that the wavelength of the incident particles is much longer than the distance over which the potential energy function varies. For example, Equation 7.21 accurately describes the reflection of an electron with a wavelength of 1 nm from a finite step that varies over a distance of order 0.1 nm. However, macroscopic particles have wavelengths that are much shorter than the width of any realistic step, so the above calculation does not apply to them. Detailed calculations show that macroscopic particles are not reflected to any appreciable extent so, in this macroscopic limit, quantum mechanics agrees with both classical physics and everyday experience.
Although we have been discussing the behaviour of beams of particles in this section, it is important to realise that these beams are really no more than a convenient fiction. The beams were simply used to provide a physical interpretation of de Broglie waves that could not be normalised. The crucial point is that we have arrived at explicit expressions for R and T, and we have done so using relatively simple stationary-state methods based on the time-independent Schrödinger equation rather than computationally complicated wave packets. Moreover, as you will see, the method we have used in this section can be generalised to other one-dimensional scattering problems.
Exercise 3
• (a) Use Equations 7.21 and 7.22 to show that R + T = 1.
• (b) Evaluate R and T in the case that E₀ = 2V₀, and confirm that their sum is equal to 1 in this case.
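A sketch of a solution, using nothing beyond Equations 7.21 and 7.22. For part (a),

R + T = [(k₁ − k₂)² + 4k₁k₂]/(k₁ + k₂)² = (k₁² + 2k₁k₂ + k₂²)/(k₁ + k₂)² = 1.

For part (b), E₀ = 2V₀ gives k₂/k₁ = √(1 − V₀/E₀) = 1/√2, so

R = ((√2 − 1)/(√2 + 1))² ≈ 0.029,   T = (4/√2)/(1 + 1/√2)² ≈ 0.971,

and indeed R + T = 1.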
Exercise 4
Consider the case where k₂ = k₁/2.
• (a) Express B and C in terms of A.
• (b) Show that in the region x > 0, we have |Ψ|² = 16|A|²/9 = constant, while in the region x ≤ 0, we have |Ψ|² = |A|²(10 + 6 cos(2k₁x))/9.
• (c) What general physical phenomenon is responsible for the spatial variation of |Ψ|² to the left of the step?
• (d) If the linear number density in the incident beam is 1.00 × 10²⁴ m⁻¹, what are the linear number densities in the reflected and transmitted beams?
• (a) When k₂ = k₁/2, we have

B = [(k₁ − k₂)/(k₁ + k₂)]A = A/3   and   C = [2k₁/(k₁ + k₂)]A = 4A/3.

• (b) From Equation 7.15 with D = 0, in the region x > 0 we have |Ψ|² = |Ce^(i(k₂x−ωt))|² = |C|² = 16|A|²/9 = constant.
• Similarly, from Equation 7.14, in the region x ≤ 0 we have Ψ(x,t) = (Ae^(ik₁x) + Be^(−ik₁x))e^(−iωt).
• Since B = A/3, we have |Ψ|² = |A|² |e^(ik₁x) + e^(−ik₁x)/3|².
• Multiplying out the brackets, we find |Ψ|² = |A|²(1 + 1/9 + (2/3)cos(2k₁x)) = |A|²(10 + 6 cos(2k₁x))/9.
• (c) The variation indicated by the cosine-dependence to the left of the step is a result of interference between the incident and reflected beams. The presence of interference effects was noted earlier when we were discussing the scattering of wave packets but there the effect was transitory. In the stationary-state approach interference is a permanent feature.
• (d) The linear number densities in the incident, reflected and transmitted beams are given by |A|², |B|² and |C|². The question tells us that |A|² = 1.00 × 10²⁴ m⁻¹, so the linear number densities in the reflected and transmitted beams are n_ref = |B|² = |A|²/9 = 1.11 × 10²³ m⁻¹ and n_trans = |C|² = 16|A|²/9 = 1.78 × 10²⁴ m⁻¹.
• Note that the transmitted beam is denser than the incident beam: |C|² > |A|². However, since k₂ = k₁/2, we have j_trans = (k₂/k₁)(|C|²/|A|²) j_inc = (8/9) j_inc < j_inc. The transmitted beam is less intense than the incident beam because it travels much more slowly.
Exercise 5
Based on the solution to Exercise 4, sketch a graph of |Ψ|² that indicates its behaviour both to the left and to the right of a finite square step.
A suitable graph is shown in Figure 10.
Figure 10 |Ψ|² plotted against x for a finite square step at x = 0 when E₀ > V₀
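The graph is easy to generate from the Exercise 4 results. A short sketch (ours) with k₂ = k₁/2 and A = 1:

```python
import numpy as np

k1 = 1.0
A, B, C = 1.0, 1.0 / 3.0, 4.0 / 3.0          # from Exercise 4(a), taking A = 1

x_left = np.linspace(-10.0, 0.0, 500)
x_right = np.linspace(0.0, 10.0, 500)

# Region 1 (x <= 0): incident and reflected waves interfere.
psi2_left = np.abs(A * np.exp(1j * k1 * x_left)
                   + B * np.exp(-1j * k1 * x_left)) ** 2
# Region 2 (x > 0): transmitted wave only, constant density.
psi2_right = np.full_like(x_right, C ** 2)

print(psi2_left.max(), psi2_left.min())   # 16/9 and 4/9: (|A|+|B|)^2, (|A|-|B|)^2
print(psi2_right[0])                      # 16/9, matching the left side at x = 0
```

The oscillation between (|A| + |B|)² and (|A| − |B|)² on the left joins continuously onto the constant value 16|A|²/9 on the right, exactly as in Figure 10.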
Photon and Gluon Emission in Relativistic Plasmas
Peter Arnold
Department of Physics, University of Virginia, Charlottesville, Virginia 22901
Guy D. Moore and Laurence G. Yaffe
Department of Physics, University of Washington, Seattle, Washington 98195
We recently derived, using diagrammatic methods, the leading-order hard photon emission rate in ultra-relativistic plasmas. This requires a correct treatment of multiple scattering effects which limit the coherence length of emitted radiation (the Landau-Pomeranchuk-Migdal effect). In this paper, we provide a more physical derivation of this result, and extend the treatment to the case of gluon radiation.
preprint: UW/PT 02–06
I Introduction
The rate of photon emission in a high-temperature QCD plasma is a problem of some theoretical interest [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], due in part to the hope that hard photon emission will be a useful diagnostic probe of heavy ion collisions [15, 16]. Understanding the analogous rate of gluon emission in a hot QCD plasma is required for computing thermalization and transport processes [17, 18, 19, 20, 21, 22, 23, 24].
Figure 1: Two-to-two particle processes contributing to the leading order photon emission rate. Time may be viewed as running from left to right.
Figure 2: Bremsstrahlung and pair production contributions to photon emission. The bottom line in each diagram can represent either a quark or a gluon.
We will make the simplifying assumption that the temperature is so large that the running strong coupling constant g can be treated as small, and focus on the leading-order (in g) behavior of the photon or gluon emission rate. (In other words, we will neglect all sub-leading corrections to the emission rate suppressed by additional powers of g, as well as further power-suppressed corrections.) Evaluating just the leading weak coupling behavior, even in this asymptotically high temperature regime, is non-trivial. For the photon emission rate, one might expect that it would be sufficient to compute just the diagrams shown in Fig. 1, which describe lowest order two-to-two particle scattering processes. But it turns out that naive perturbation theory (augmented by the standard resummation of hard thermal loops) does not suffice to calculate the leading-order hard photon emission rate. Fig. 2 shows bremsstrahlung and pair production processes which also contribute to the on-shell photon emission rate at leading order. The essential problem is that the internal time scale associated with these processes is comparable to the mean free time for soft scattering with other particles in the plasma. Consequently, even at leading order, the photon emission rate is sensitive to processes involving multiple scatterings occurring during the emission process. This is known as the Landau-Pomeranchuk-Migdal (LPM) effect. Diagrammatically, it manifests as the presence of interference terms involving multiple collisions, such as depicted in Fig. 3, which parametrically are equally as important as those of Fig. 2. In the nearly-collinear limit, the extra explicit factors of g² turn out to be canceled by a combination of parametrically large enhancements from the internal quark propagators and soft exchanged gluons.
Figure 3: An interference term, involving amplitudes for photon emission before and after multiple scattering events, which contributes to the leading order emission rate.
In Ref. [25], we showed how to account for the LPM effect in photon emission by identifying and summing the appropriate infinite class of diagrams which contribute to the leading order emission rate, and in Ref. [26] we solved the resulting integral equations numerically. Our analysis in Ref. [25] involved quite detailed power counting of diagrams and relied upon some rather unintuitive relationships found by Wang and Heinz [27] between real-time thermal 4-point Green’s functions in the r/a (Keldysh) formalism. One goal of the present article is to reproduce our previous results in a manner that more clearly highlights the essential physics of the result. We will embrace physical arguments wherever possible, leaving to our previous work [25] the more technical justification of our results. Our second goal will be to extend the treatment to the case of gluon emission.
We are hardly the first people to discuss applications of the LPM effect, either qualitatively or quantitatively [13, 18, 19, 20, 21, 22, 23, 24, 28, 29, 30, 31, 32, 33]. (For a review of many aspects of the LPM effect, with an emphasis on the theory and experiment of high-energy collisions of electrons with atomic matter, see Ref. [34].) However, discussions prior to our work [25] have almost always focused on systems where the scatterers with which the emitting particle interacts are static. (The scatterers are represented by the bottom lines in Figs. 2 and 3.) But typical scatterings in an ultra-relativistic plasma involve interactions with excitations which are themselves moving at nearly the speed of light. They produce dynamically screened color electric and magnetic fields which form a fluctuating background field in which bremsstrahlung, pair annihilation, and the LPM effect take place.
Our discussion will be largely self-contained and will not directly rely on previous treatments of the LPM effect in other contexts. One could, presumably, explicitly mimic previous discussions such as the seminal 1955 analysis of Migdal [29, 30], and suitably generalize the treatment to non-static scattering. However, it is rather challenging to follow in detail all of the assumptions and approximations made in Migdal’s quantum mechanical treatment. We believe that readers interested in understanding the effect can benefit from several different formulations. Moreover, our discussion of scales and approximations will be tailored to the particular problem at hand, namely hard photon (and later gluon) emission in ultra-relativistic plasmas.
We will focus on the contributions of bremsstrahlung and pair annihilation to the rate of photon (or gluon) emission. We will not explicitly discuss contributions to the emission rate from the processes of Fig. 1, which are of the same order but do not require a treatment of the LPM effect. These contributions to the photon emission rate are calculated in Refs. [9, 10, 26]. (Only the limit of hard photon momenta is addressed in Refs. [9, 10].) Diagrammatically, the bremsstrahlung and pair annihilation processes of Fig. 2 look like they are suppressed by an explicit factor of g² compared to the processes of Fig. 1. However, the diagrams of Fig. 2 have a collinear enhancement, associated with emission of the photon. This enhancement is produced by a would-be collinear singularity which is cut off by the effective thermal mass (of order gT) and width of the fermions, yielding a parametrically large near on-shell enhancement which causes these processes to be the same order (up to a logarithm of 1/g) as those of Fig. 1.
We will use the following nomenclature in our discussion. The particles corresponding to the lower lines in Figs. 2 or 3 will be called the scatterers. The particle which emits the photon in bremsstrahlung, or the particle/anti-particle pair which annihilate in pair annihilation, will be called the (photon) emitters. Each separate gluon exchange in Figs. 2 and 3 will be referred to as a "scattering" of the emitters. (When distinguishing emitters from scatterers, one need not worry about the fundamental indistinguishability of quarks in, say, quark-quark scattering, because the scattering is dominated by small angle collisions, and the photon is emitted nearly collinear with the emitter(s). So, at leading order, emitters and scatterers are distinguishable simply by whether or not their direction of motion is nearly aligned with the emitted photon.)
Before we begin, we first summarize the basic scales associated with bremsstrahlung and pair production of hard photons via the processes of Fig. 2. For simplicity, we restrict attention to photons with momenta parametrically of order T, which is the typical momentum of particles in the ultra-relativistic plasma. We will throughout treat electromagnetic couplings as small compared to the QCD coupling, and we will ignore thermal effects on the propagation of the photon (which are suppressed by the electromagnetic coupling). The basic kinematics, summarized pictorially in Fig. 4 for bremsstrahlung, turn out to be as follows.
Figure 4: Orders of magnitude of various momentum, distance, and angular scales associated with bremsstrahlung of a photon with momentum of order T. Here g stands for the strong coupling.
• The typical momenta of the emitters and scatterers are of order T.
• The typical momentum transfer of an exchanged gluon responsible for scattering during the emission of the photon is of order gT, which is also the order of the inverse Debye screening length for color. The angle of deflection in such a collision is of order this momentum transfer divided by the emitter momentum of order T, hence of order g.
• Processes contributing to the leading-order emission rate are dominated by nearly collinear emission of the photon. The corresponding internal quark lines in Fig. 2 are nearly on-shell, with energies off-shell by an amount of order g²T. Fourier transforming, this implies that these processes have time durations of order 1/(g²T). This is known as the formation time of the photon.
• The formation time scale, 1/(g²T), is also the order of the mean free time between collisions with momentum transfers of order gT. This is why multiple collisions cannot be treated independently and interferences such as Fig. 3 must be included at leading order.
• The typical angle between the directions of the photon and the emitter(s) (be they initial state or final state emitters) is of order g.
• The typical angle between the directions of the emitter and a scatterer is of order one.
A brief review of how to obtain these scales is given in Sec. I.B of Ref. [25]. Here, we will take them as our starting point and proceed to discuss how to sum up the effects of multiple collisions.
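To make the hierarchy concrete, the following sketch (ours; the value g = 0.3 is purely illustrative, and all parametric estimates carry unknown O(1) coefficients) evaluates the scales listed above in units where T = 1:

```python
# Parametric scales for photon bremsstrahlung in an ultra-relativistic plasma.
g = 0.3   # illustrative value of the strong coupling only

scales = {
    "emitter/scatterer momentum        ~ T": 1.0,
    "soft momentum transfer            ~ g T": g,
    "deflection angle per collision    ~ g": g,
    "photon-emitter angle              ~ g": g,
    "off-shellness of internal line    ~ g^2 T": g ** 2,
    "formation time, mean free time    ~ 1/(g^2 T)": 1.0 / g ** 2,
    "ultra-soft (non-perturbative) q   ~ g^2 T": g ** 2,
}
for name, value in scales.items():
    print(f"{name:50s} {value:8.3f}")
```

Even at this moderate coupling the scales T, gT and g²T are separated only by factors of a few, which illustrates why the formal hierarchy is marginal at realistic couplings.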
Certain physical processes in high temperature gauge theories (such as the rate of baryon number violation in hot electroweak theory) are sensitive to "ultra-soft" collisions which are mediated by low-frequency non-Abelian magnetic fluctuations with momentum transfers of order g²T. The dynamics of such ultra-soft collisions are intrinsically non-perturbative. One may check a posteriori, by inspection of the final answers we will produce, that the leading-order dynamics relevant for hard photon (or gluon) production is not sensitive to ultra-soft collisions. Alternatively, the reader may find detailed qualitative and diagrammatic discussions of this point in Ref. [25].
For simplicity, we will restrict attention to the case of zero chemical potential. The generalization of results for photon emission to non-zero chemical potential may be found in Ref. [25]. In the next section, we will mainly focus on bremsstrahlung and reformulate the problem as that of bremsstrahlung from an emitter that is propagating through a classical random background color gauge field. We will show how simple physical considerations of localization of particles in both space and momentum lead to a description of the LPM effect in bremsstrahlung as an infinite sum of ladder diagrams. In section III, we will discuss how to convert the required sum of ladder diagrams into a Boltzmann-like integral equation—a task previously carried out in Ref. [25], but which we will repeat using the formalism of this paper, in a way that more clearly displays which portions of the relevant physics depend on the nature of the emitter (e.g. an actual Dirac quark, a fictitious scalar quark, or something else) and which physics does not. In section IV, we then analyze the closely related process of pair production, and we combine these results for photon emission in section V. Finally, in section VI we discuss the generalization to gluon emission.
II Random backgrounds and ordering of interactions
The differential photon emission rate (per unit volume), at leading order in the electromagnetic coupling, is given by the well-known relation

dΓ_γ/d³k = [1/((2π)³ 2k)] Σ_a ε^(a)*_μ ε^(a)_ν W^μν(K),   (1)

where K denotes the null photon 4-momentum, the ε^(a)'s represent a basis of transverse polarizations for the photon, and W^μν is the Wightman electromagnetic current-current correlator,

W^μν(K) = ∫ d⁴x e^(−iK·x) ⟨J^μ(x) J^ν(0)⟩.   (2)

We use a metric with signature (−+++). Because the correlator is contracted with photon polarization vectors in (1), one need only consider the case where μ and ν are spatial indices in what follows. As always, ⟨⋯⟩ denotes an expectation value in whatever density matrix ρ is of interest, which in our case is a thermal ensemble describing the equilibrium plasma. If one inserts a complete set of multi-particle states between the currents, and works in a basis where ρ is diagonal, then the correlator can also be written as

W^μν(K) = ∫ d⁴x e^(−iK·x) Σ_{i,f} ρ_i ⟨i|J^μ(x)|f⟩ ⟨f|J^ν(0)|i⟩.   (3)

It will be useful to remember that this corresponds to scattering from an initial state |i⟩ to a final state |f⟩ plus the emitted photon. Moreover, the integrand in (3) can be interpreted as representing interference between photon emission at the space-time points x and 0.
Figure 5: The perturbative bremsstrahlung and annihilation processes of Fig. 2, with the soft gluon fields now interpreted as classical background fields.
Figure 6: Bremsstrahlung and annihilation processes including the multiple interactions with a background field, which lead to the LPM effect.
Our first approximation will be to treat the soft gluons of the problem as a random, classical non-Abelian background field A^a_μ(x), through which the particles that bremsstrahlung or annihilate propagate. The lowest-order processes of Fig. 2 are then replaced by Fig. 5, and our task will be to sum up multiple gluon interactions such as depicted in Fig. 6. After computing rates, we will appropriately average over this random classical gauge field. Such averaging will be denoted by ⟨⟨⋯⟩⟩. By translation invariance, the variance of the background field must have the form

⟨⟨ A^a_μ(x) A^b_ν(y) ⟩⟩ = δ^(ab) ∫ [d⁴q/(2π)⁴] e^(iq·(x−y)) C_μν(q),   (4)

where C_μν(q) is the spectral density of thermal gauge field fluctuations. (A translationally invariant choice of gauge fixing, such as Lorentz or Coulomb gauge, is tacitly assumed.) We may treat the statistical distribution of the background gauge field as Gaussian, dropping higher-order connected correlations. This is equivalent to neglecting the nonlinear self-interaction of the background and neglecting the interactions between different scatterers. This approximation is valid, at leading order, everywhere except in the deep infrared (momenta of order g²T). Neglecting non-Gaussian correlations in the background gauge field would not be allowable if the processes under consideration were sensitive (at leading order) to ultra-soft fluctuations. Fortunately, this is not the case [25].
Physically, these soft gauge fields are created by other charge carriers [with typical momenta of order T] passing randomly through the plasma. The statistics of the soft gauge fields generated by these particles can be described using (i) the fluctuation-dissipation theorem, and (ii) the standard hard thermal loop (HTL) approximation for the retarded self-energy of soft (momenta of order gT) gauge fields. Namely, the spectral density may be taken to be

C_μν(q) ∝ [1 + n_B(q⁰)] Im[G^R_μν(q) − G^R₀ μν(q)],   (5)

where n_B(q⁰) = 1/(e^(q⁰/T) − 1) is the Bose distribution function, G^R₀ is the free retarded gauge field propagator, and G^R is the retarded propagator with hard thermal loops resummed, which account for Debye screening and Landau damping of the color fields. We will review the specific formulas later, in section V. Because the background fields of interest are soft, one may replace the statistical factor 1 + n_B(q⁰) by its low frequency limit, namely T/q⁰.
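The accuracy of this low-frequency replacement is easy to check numerically; a quick sketch (ours):

```python
import numpy as np

T = 1.0
for q0 in (0.3, 0.1, 0.03):                  # soft frequencies, q0 << T
    exact = 1.0 + 1.0 / np.expm1(q0 / T)     # the full factor 1 + n_B(q0)
    approx = T / q0                          # its low-frequency limit
    print(f"q0/T = {q0:5.2f}:  1 + n_B = {exact:8.3f}   T/q0 = {approx:8.3f}")
```

The relative error is of order q⁰/T, i.e. small for the soft fields of interest.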
The presence of correlations in the background field with non-trivial frequency (and wave-number) dependence is the most significant difference between our problem of hard photon emission in a relativistic plasma, and the original setting for the LPM effect of hard photon emission in a static medium of random Coulomb scatterers (nuclei). In order to be able to make contact later between our results and those of Migdal, we will treat the spectral density as arbitrary, and not use the specific form (5) until the very end.
II.1 Particle propagation in a random field
Our next task is to rewrite the current correlator (2) in terms of propagation amplitudes of single particle states in the random background. We will focus on the bremsstrahlung process and come back to pair annihilation in section IV. (For any three-momentum p, a plain p also denotes its magnitude |p|, while capital letters like K denote four-vectors.) Bremsstrahlung corresponds to a contribution to the current correlator (2) of the form, written schematically,

W^μν(K) ⊃ ∫ [d³p/(2π)³] n(p) [1 ± n(p′)] ⟨⟨ ⟨p′|J^μ(0)|p⟩* ⟨p′|J^ν(0)|p⟩ ⟩⟩,   (6)

where |p⟩ and |p′⟩ represent one-particle states of the emitting charge carrier, and n is the corresponding equilibrium distribution function [either the Fermi-Dirac or Bose-Einstein distribution]. The contributions (6) to the current correlator have to be summed over all types (including anti-particles) and spins of hard charged quasi-particles. The momenta p and p′ can be regarded as the initial and final momenta of the charge carrier, and n(p) and [1 ± n(p′)] are the initial and final state statistical factors. There is an approximation hiding here because momentum eigenstates are not exact energy eigenstates in the presence of the soft background field. The density matrix is therefore not precisely the free-particle thermal one assumed here. Over the formation time relevant to our problem, soft momentum transfers change p by order gT. But the resulting error in the use of n(p) is then sub-leading in g and so can be ignored in a leading-order analysis.
Making the time evolution explicit, and inserting complete sets of intermediate states, the statistically averaged matrix elements appearing in (6) can be rewritten in terms of matrix elements of U(t), the time evolution operator in the background field. As usual, the spatial Fourier transform in (6) combines with translation invariance (of the statistically averaged matrix elements) to enforce total momentum conservation of the three-momenta:

p = p′ + k.   (8)
Figure 7: Two diagrams, time ordered from left to right, whose interference contributes to the rate of bremsstrahlung. The first diagram represents photon emission at time zero, and the second at time t; in both cases the diagrams show the evolution between these two times.
Figure 8: A time-ordered Z diagram.
Figure 9: A single diagram depicting the interference of the two diagrams of Fig. 7. The interactions along top and bottom lines are independently time ordered from left to right.
Diagrammatically, this contribution represents the interference in the evolution of the particle from time 0 to t, depending on whether the photon is emitted at time 0 or at time t, as depicted in Figs. 7a and b. These diagrams can be considered as time ordered (with time running from left to right) because each individual momentum transfer is not enough to create or destroy a (nearly on-shell) particle/anti-particle pair. That is, in a time-ordered Z contribution like Fig. 8, the three-particle intermediate state would have to be so far off shell that its contribution is suppressed compared to the time ordering of Fig. 7a. It is convenient to put the interference of the evolutions of Figs. 7 together into the single diagram depicted by Fig. 9. The top line (or "rail") represents the amplitude, and the bottom line (rail) the complex conjugate of the amplitude. This looks just like a Feynman diagram for the current correlator, except with the added interpretation that each rail of the diagram can be considered as time ordered from left to right.
II.2 Ordering of interactions
The next step is to understand the effect of averaging over the background gauge field. A quick way to visualize the dominant correlations is to think of the photon and emitter qualitatively as approximately classical particles, having both approximately well-defined position and momentum, with uncertainties obeying roughly δx δp ∼ 1 (in units where ℏ = 1).
Indeed, the original, qualitative derivation of the LPM effect was purely classical [28]. (Classical results become quantitatively precise in the limit that the photon energy is small compared to the emitter energy. See section 1 of Ref. [24] for a concise review.) More formally, instead of using a basis of momentum or position states, one may use an (over-complete) basis of Gaussian wave packets with width of order 1/(gT). For momenta of order T, the widths will remain of this order over the time scale relevant to our problem. (In more detail, the transverse momenta in the wave packet will be of order gT, both from the initial width as well as from momentum transfer due to scattering. The transverse spatial spread of the wave packet over the formation time 1/(g²T) will then be of order 1/(gT), since the corresponding dispersion in transverse velocities is of order g. There is also a longitudinal spread in the wave packet due to the effective thermal mass of hard quarks, which gives a dispersion in velocities of order g². The longitudinal spread of the wave packet due to this dispersion is then of order 1/T.) The spatial uncertainty in the position of the particles is then always small compared to 1/(g²T), which is the mean free path for the soft gluon interactions which dominate our problem. This implies that interactions with scatterers must occur in a definite order, regardless of when the bremsstrahlung photon is emitted.
Figure 10: The emitter's trajectory, thought of as approximately classical, and the ordering in time and space this gives to its local interactions with scatterers (dashed lines).
Figure 11: Two amplitudes which do not interfere (at leading order) because of mismatched order of encountering scatterers.
Figure 12: Amplitudes for which the time and space ordering of interactions is consistent.
Fig. 10 illustrates the basic point. It shows the emitter propagating along a fixed axis as a (nearly) localized classical particle. The dashed lines depict the small subset of other hard particles in the plasma which happen to pass close to the emitter during its flight and happen to interact with it when they do. Because of the localization of the emitter in space and time, and because of the local nature of the soft interactions (set by the Debye screening length of order 1/(gT)), the interactions (for the particular background of hard particles shown) must happen in the order numbered in the diagram, regardless of when the collinear photon is emitted. That is, there can be no interference (at leading order in g) between the diagrams of Fig. 11 if the numbers 1 and 2 denote the fields generated by two different scatterers. However, the diagrams of Fig. 12a and b could interfere. So could the diagrams of Fig. 12a and c, except that this particular interference vanishes when averaged over the randomness of the scatterers: the unmatched scatterer 4 could be a particle or anti-particle (for example), and a single insertion of its field in the product of amplitudes averages to zero by charge conjugation invariance. So, after averaging, non-zero contributions only arise from the cases where interactions with a given scatterer happen to appear twice in Fig. 9, with the interactions of the top and bottom rails consistently ordered in time and space. We can rewrite the random average of these figures in the form of Figs. 13 and 14, where the dashed lines indicate that the corresponding pair of A's has been replaced by the correlation ⟨⟨AA⟩⟩. In our problem, this correlator is given by (4), but the argument we have used to arrive at Fig. 13 would also apply to static random scatterers, such as considered by Migdal, whenever one has a similar hierarchy of scales.
Figure 13: The product of amplitudes, as in Fig. 9, now averaged over random background fields. The dashed lines denote the independent correlations of the background gauge fields. The double lines represent the resummed propagators of Fig. 14.
Figure 14: The resummation of self-energy insertions into the propagator.
We should mention in passing that the condition that the entire wave packet remain well-localized, in all three dimensions, on a scale parametrically small compared to the mean separation between scattering events, is much stronger than necessary to argue that the interactions must be ordered in time (at leading order). Interactions are effectively ordered in time, and ladder diagrams dominate, in numerous other applications including the classic case of scattering from point-like static random impurities. (In this case, a localized wave packet impinging on a single scattering center will produce a scattered wave which resembles an outgoing spherical shell, with no localization in direction whatsoever. But as long as the temporal duration of the scattered wave, as it moves past a fixed location, is small compared to the mean time between collisions, multiply-scattered waves produced by scattering, in different orders, off a given set of scattering centers will have negligible overlap and hence not interfere. In particle-oriented language, all we are saying is that if a particle travels from some localized source to a localized detector by scattering twice, first off impurity a and then off impurity b, its path length will be different than if it scatters first off b and then off a. As long as this difference in path length is large compared to the spatial extent of the wave packet, no significant interference can occur. The basic criterion which is relevant here, and in all applications of kinetic theory, is that the mean time between collisions must be large compared to both the temporal duration of the initial wave packet, or the inverse of its energy spread, and to the time duration of a single scattering event, as determined by the energy derivative of the scattering amplitude.)
The self-energy resummation on the top and bottom rails, involving insertions of the background field correlator as shown in Fig. 14, represents the inclusion of the thermal width in the propagator of the emitting particle. In hot gauge theories, the thermal width of quasiparticles is dominated by soft scattering and is actually an infrared divergent quantity in perturbation theory. However, these divergences (which represent sensitivity to ultra-soft fluctuations) will turn out to cancel in our final result when combined with the correlations that connect the top and bottom rails. Our final formulas will be well behaved in the infrared.
Before moving on to the evaluation of the ladder diagrams of Fig. 13, we may use the fuzzy classical particle picture to understand why the LPM effect depends crucially on the fact that the photon and the emitting particle move at or very close to the speed of light. If a particle of momentum p moving in the z direction emits a collinear photon of momentum k at time and position zero, then the subsequent positions of the particle and the photon at time t will be
\[ x_\gamma(t) = v_\gamma\, t , \qquad x_{p-k}(t) = v_{p-k}\, t , \]
where \( v_\gamma \) is the velocity of the photon and \( v_{p-k} \) the velocity of the (quasi)particle with momentum p-k. If, on the other hand, the same particle doesn't emit the photon until time \( t_0 \), the subsequent positions will be
\[ x_\gamma(t) = v_p\, t_0 + v_\gamma\,(t - t_0) , \qquad x_{p-k}(t) = v_p\, t_0 + v_{p-k}\,(t - t_0) . \]
These two processes can interfere significantly only if the photon and particle trajectories are the same, up to the fuzziness \( \delta x \) of the wave packets. That requires
\[ |v_p - v_\gamma|\, t_0 \lesssim \delta x \qquad \text{and} \qquad |v_p - v_{p-k}|\, t_0 \lesssim \delta x , \]
which is natural (that is, not suppressed by additional phase space restrictions) only if the particles are all ultrarelativistic.
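To see numerically why ultrarelativistic kinematics is essential, one can compare the trajectory separation (1 - v_p) t_0 against an assumed fuzziness. The toy estimate below is ours, with arbitrary units and an assumed value for the fuzziness delta_x; only the trend with p/m matters.

```python
# A rough numeric illustration (ours, not from the paper): the trajectories of
# "emit at t = 0" and "emit at t = t0" differ by roughly (1 - v_p) * t0, where
# v_p = p / sqrt(p^2 + m^2) is the quasiparticle velocity (units with c = 1).
# Interference requires this separation to stay below the packet fuzziness.
import numpy as np

m = 1.0        # (thermal) mass, arbitrary units
t0 = 50.0      # delay between the two emission times
delta_x = 0.5  # assumed spatial fuzziness of the wave packet

for p in [1.0, 3.0, 10.0, 100.0]:
    v = p / np.hypot(p, m)   # = p / sqrt(p^2 + m^2)
    sep = (1.0 - v) * t0     # separation from the luminal photon path
    ok = "interfere" if sep < delta_x else "no overlap"
    print(f"p/m = {p / m:6.1f}:  separation = {sep:8.4f}  -> {ok}")
```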
III Ladder diagrams as integral equations
III.1 Relativistic Schrödinger equation
A concise and convenient method for representing the propagation of nearly on-shell particles in a soft gauge background is with the relativistic one-particle Schrödinger equation. The free equation is
\[ i\,\partial_t\, \psi = \sqrt{\mathbf p_{\rm op}^2 + m^2}\; \psi , \]
where \( \mathbf p_{\rm op} \equiv -i\nabla \) and m is the appropriate mass (in our case the asymptotic thermal mass \( m_\infty \), which characterizes the in-medium dispersion correction of a fermion of negligible explicit mass). The corresponding retarded free propagator is just
\[ G_0(\omega, \mathbf p) = \frac{1}{\omega - E_{\mathbf p} + i\epsilon} , \qquad E_{\mathbf p} \equiv \sqrt{\mathbf p^2 + m^2} , \]
and satisfies
\[ \left( \omega - E_{\mathbf p} \right) G_0(\omega, \mathbf p) = 1 . \]
We will use the traditional normalization of one-particle quantum mechanics,
\[ \left< \mathbf p' \middle| \mathbf p \right> = (2\pi)^3\, \delta^3(\mathbf p - \mathbf p') , \]
as opposed to the usual convention in relativistic field theory, which has an extra factor of \( 2E_{\mathbf p} \) on the right hand side. Except for the change in normalization, \( G_0 \) is just the positive-energy pole piece of the usual propagators of relativistic field theory (or, equivalently, the energy denominator of time-ordered perturbation theory when only single particle intermediate states contribute). The advantage of the description used here is that it does not depend on the spin or type of particle; it can be used to economically describe the propagation of hard scalars, fermions, or gauge bosons in soft background fields.
In a soft background gauge field (containing fluctuations on a scale of order gT), the single-gluon vertex is just
\[ g\, v^\mu A_\mu(t, \mathbf x) , \]
up to corrections suppressed by powers of g, where \( v^\mu \) is defined by
\[ v^\mu \equiv (1, \mathbf v) , \qquad \mathbf v \equiv \frac{\partial E_{\mathbf p}}{\partial \mathbf p} . \]
A simple, quick way to see this is to replace \( \mathbf p_{\rm op} \) by \( \mathbf p_{\rm op} - g \mathbf A \) in the Schrödinger equation (13) and propagator (14). Because the background field is soft, one may treat \( \mathbf p_{\rm op} \) and \( \mathbf A \) as if they commute: commutators would replace the hard momentum, whose relevant matrix elements are of order the hard scale, by gradients of the soft field, which are much smaller, and hence only generate a subleading contribution. Expanding the equation to first order in \( g \mathbf A \) then gives the interaction (17). Only the “convective” contribution of the charged particle to the gauge current appears in the soft gluon vertex (17); the spin dependent contributions which would normally distinguish scalars, fermions, or gauge bosons are absent. Spin dependent contributions to the current are suppressed by an additional power of the soft-to-hard momentum ratio, and hence only generate sub-leading corrections to the propagation of a hard excitation through the soft background gauge field.
From the point of view of one-particle quantum mechanics, anti-particles are just another type of particle. The propagation of nearly on-shell anti-particles in a soft background field is also described by the propagator (14) and vertex (17) except that the gauge field should be in the conjugate representation. Readers who prefer a discussion in terms of standard relativistic Feynman rules may refer to Ref. [25].
III.2 The integral equation
The contribution to the leading-order photon emission rate from the sum of the ladder diagrams depicted in Fig. 13 may be expressed in terms of the solution to a suitable integral equation. Analogous summations of ladder diagrams are, of course, well known in many other contexts (such as the Bethe-Salpeter equation for bound states), although the form of the resulting integral equation intimately depends on the precise form of the propagators making up the rungs and side rails of the ladder diagrams, as well as the specific vertex factors at the ends of the diagrams.
The group structure of the diagrams of Fig. 13 is trivial: for every explicit power of \( g^2 \), there is also one factor of the quadratic Casimir \( C_{\rm R} \) for the color representation of the emitter. This Casimir is defined by \( T^a T^a = C_{\rm R}\,\mathbf 1 \), where \( T^a \) are the generators of the representation, normalized so that \( {\rm tr}\,(T^a T^b) = \frac12\,\delta^{ab} \). For conventional quarks or anti-quarks in the fundamental representation of color SU(3), this Casimir is
\[ C_{\rm F} = \frac{N^2 - 1}{2N} = \frac{4}{3} . \]
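As a trivial sanity check of the quoted value, here is a one-liner of ours (not from the paper):

```python
# Quick sanity check (ours) of the quadratic Casimir: for SU(N) in the
# fundamental representation, with tr(T^a T^b) = delta^{ab}/2, one has
# C_F = (N^2 - 1) / (2 N).
def casimir_fundamental(N: int) -> float:
    return (N * N - 1) / (2 * N)

print(casimir_fundamental(3))  # 1.333... = 4/3 for SU(3) quarks
```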
Our first step will be to resum the self-energy insertions of Fig. 14 by switching from the propagator (14) to the resummed propagator
\[ G(\omega, \mathbf p) = \frac{1}{\omega - E_{\mathbf p} \pm \frac{i}{2}\,\Gamma} , \]
where the + sign gives the retarded propagator and the - sign the advanced one. The width \( \Gamma \) is proportional to the imaginary part of the self-energy of the hard particle. The explicit form for this width is discussed below. The corresponding real part of the self energy is absorbed into \( E_{\mathbf p} \). In fact the real part, unlike the imaginary part, is dominated by hard rather than soft collisions and cannot be directly computed in the soft background formalism we are using. Instead, it can be taken directly from the well-known hard thermal loop result, and just gives the usual effective thermal mass. For hard quarks, the effective energy is then
\[ E_{\mathbf p} \simeq |\mathbf p| + \frac{m_\infty^2}{2\,|\mathbf p|} , \]
up to yet higher-order corrections, where [35, 36]
\[ m_\infty^2 = \frac{g^2\, C_{\rm F}\, T^2}{4} . \]
Figure 15: The leading-order contributions to the current-current correlator expressed in terms of the solution to a linear integral equation.
The sum of interferences represented by the diagrams of Fig. 13 may now be rewritten in terms of a linear integral equation as depicted in Fig. 15. To write this explicitly, it is convenient to work in frequency and momentum space. Consider the contribution to the correlator from one time-ordered portion of the integration in Eq. (6).[10] We could equally well focus on the opposite portion. As noted below, the two contributions are just complex conjugates of each other. It happens to be this portion which leads to the specific form of the integral equation derived in our earlier work [25]. If we rewrite the time evolution matrix elements in Eq. (7) appropriately, then we may use retarded propagators in evaluating these matrix elements. The resulting contribution to the current correlator, illustrated in Fig. 15, is
where the shaded vertex function of Fig. 15 satisfies the integral equation given next.[11] This function is analogous to the corresponding combination of Ref. [25]; we have chosen our conventions for defining it so that later equations will closely resemble those of Refs. [25, 26].
The color matrices and the momentum-conserving delta function are to be understood as already factored out of the correlation in this (and subsequent) formulae. The remaining region of the integration in (6) simply gives the complex conjugate (the replacement of retarded by advanced propagators), so that the total bremsstrahlung contribution, from a single carrier type and spin state, is
Note that the photon four-momentum is fixed, but the integral equation relates values at different values of the four-vector P. In fact, at the order of interest, the dependence on P may be simplified. Let \( p_\parallel \) represent the component of \( \mathbf p \) in the direction of the photon, and let \( \mathbf p_\perp \) be the part of \( \mathbf p \) perpendicular to it. In our process, the momentum transfer is soft, and so the relevant values of \( p_\perp \) are of order gT. Expanding in both \( p_\perp \) and the thermal mass \( m_\infty \), one has
\[ E_{\mathbf p} \simeq p_\parallel + \frac{p_\perp^2 + m_\infty^2}{2\, p_\parallel} , \qquad E_{\mathbf p + \mathbf k} \simeq p_\parallel + k + \frac{p_\perp^2 + m_\infty^2}{2\,(p_\parallel + k)} ,
where we have assumed that the photon is on-shell, and that \( p_\parallel > 0 \) so that the photon is traveling in the same direction as the emitter. The last two terms in both equations are \( O(g^2 T) \), and so each of the propagators will be suppressed unless \( p^0 - p_\parallel \) is also \( O(g^2 T) \).[12] If it is much larger, then the two propagators are peaked near different values of \( p^0 \), and it is impossible for both propagators to simultaneously be (nearly) on-shell. Hence, only this region contributes to the leading-order bremsstrahlung rate. If the vertex function is only ever going to be integrated against functions that are smooth on the scale of these variations, then one may approximate
This is commonly known as the pinching pole approximation, and in our analysis is justified for computing the rate at leading order in g. Making this substitution into the integral equation (III.2), one sees that the resulting solution will have the form
Inserting this form, and rewriting
one finds
In terms of the current matrix elements appearing in the original definition (6), the function is given by that portion of the time-evolved off-diagonal matrix element of the current, evaluated in the fluctuating background gauge field, which is phase-coherent with the emitted photon. That is, to leading order in g,
Time evolution is, of course, implicit in these matrix elements.
In the interaction term of Eq. (35) one may, at leading order, replace by
and rewrite the integral equation in the form
As it stands, this integral equation mixes together different values of \( \mathbf p_\perp \) (as well as different \( p_\parallel \)) in the argument of the vertex function, due to the momentum transfers in the interaction term. The characteristic size of \( p_\parallel \) for a hard quasiparticle is of order T, and no elements in the equation are sensitive (at leading order) to variations in \( p_\parallel \) of order gT. Consequently, we can treat \( p_\parallel \) as fixed inside the interaction term,[13] This may be argued in more detail as follows. Given the linearity of the integral equation, one could multiply the source in (38) by a smooth weighting function of \( p_\parallel \), at the cost of introducing an integral over \( p_\parallel \) in the final expression (34). The resulting solution to the integral equation would then have its dominant support on values of \( p_\parallel \) which differ from the original value only by total momentum transfers of order gT. Because the factor in (34) is smooth in \( p_\parallel \) (for variations of order gT), no error is made at leading order if \( p_\parallel \) is replaced by its central value in these terms. If the modified (38) is then integrated over \( p_\parallel \), and the result is suitably renamed, one arrives at Eqs. (34) and (39). replacing the equation by
This is now a linear integral equation in \( \mathbf p_\perp \) space, for fixed values of \( p_\parallel \) and k.
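The generic structure of such an equation, a linear integral equation in the transverse momentum, can be illustrated by discretizing \( \mathbf p_\perp \) and solving the resulting matrix equation. The sketch below is ours; the source and kernel are smooth made-up placeholders, not the physical ones of Eq. (39).

```python
# A toy sketch (ours) of solving a linear integral equation of the schematic
# form f(p) = S(p) + \int dq K(p, q) f(q) by discretizing momentum on a grid,
# which turns it into the linear system (1 - K) f = S.
import numpy as np

n = 400
p = np.linspace(-20.0, 20.0, n)  # 1d stand-in for the p_perp grid
dq = p[1] - p[0]

S = p * np.exp(-p**2 / 8.0)                          # placeholder source
K = dq * 0.3 * np.exp(-np.subtract.outer(p, p)**2)   # placeholder kernel (with dq weight)

f = np.linalg.solve(np.eye(n) - K, S)
print("residual:", np.max(np.abs(f - (S + K @ f))))  # ~ machine precision
```

A direct solve is adequate here because the kernel is contracting; for stiffer kernels one would typically iterate or precondition, but the discretize-and-solve structure is the same.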
One remaining complication is that both the width \( \Gamma \) and the total scattering cross section have infrared divergences in our approximations, proportional to the divergence of the transverse-momentum integral of the collision kernel. This divergence of \( \Gamma \) in HTL perturbation theory is well known [37], arises from collisions via the exchange of unscreened low-frequency magnetic gluons, and is a symptom of sensitivity to ultra-soft non-perturbative magnetic physics at the momentum scale \( g^2 T \). But this apparent infrared sensitivity is actually illusory in our problem, as may be seen by rewriting the width in a form similar to the interaction term in Eq. (39). The width comes from the imaginary parts of the self-energy insertions in Fig. 14 for nearly on-shell particles and, to leading order, is given by
\[ \Gamma = \int \frac{d^2 q_\perp}{(2\pi)^2}\; \bar C(\mathbf q_\perp) , \qquad \bar C(\mathbf q_\perp) = \frac{g^2\, C_{\rm F}\, T\, m_{\rm D}^2}{q_\perp^2 \left( q_\perp^2 + m_{\rm D}^2 \right)} . \]
Substituting this into (39), one obtains the infrared-safe equation
\[ 2\, \mathbf p_\perp = i\, \delta E(\mathbf p_\perp)\, \mathbf f(\mathbf p_\perp) + \int \frac{d^2 q_\perp}{(2\pi)^2}\; \bar C(\mathbf q_\perp) \left[ \mathbf f(\mathbf p_\perp) - \mathbf f(\mathbf p_\perp - \mathbf q_\perp) \right] . \]
We may also tidy up the expression for \( \delta E \) by noting that \( p^0 \simeq E_{\mathbf p} \) [since \( p^0 - p_\parallel \) is \( O(g^2 T) \)], so that to leading order
\[ \delta E(\mathbf p_\perp) = \frac{k \left( p_\perp^2 + m_\infty^2 \right)}{2\, p_\parallel\, (p_\parallel + k)} . \]
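The infrared cancellation just described can also be seen numerically: the kernel integral defining \( \Gamma \) grows logarithmically as the infrared cutoff is lowered, while the subtracted combination converges. The sketch below is ours; it uses the standard leading-order form of the soft kernel quoted above with all overall coupling factors dropped, and a smooth made-up stand-in for the solution f.

```python
# Numeric illustration (ours) that Gamma alone is IR log-divergent while the
# subtracted combination C(q) [f(p) - f(p - q)] is IR safe. Couplings dropped.
import numpy as np

mD = 1.0
def C(q):                 # soft kernel ~ m_D^2 / (q^2 (q^2 + m_D^2))
    return mD**2 / (q**2 * (q**2 + mD**2))

def f(px, py):            # smooth made-up stand-in for the solution
    return np.exp(-(px**2 + py**2) / 4.0)

p = np.array([1.5, 0.0])
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    q = np.geomspace(eps, 20.0, 2000)                  # radial grid
    th = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    Q, TH = np.meshgrid(q, th, indexing="ij")
    qx, qy = Q * np.cos(TH), Q * np.sin(TH)
    w = Q * np.gradient(q)[:, None] * (2 * np.pi / len(th))  # d^2q = q dq dtheta
    gamma = np.sum(C(Q) * w)                                  # grows ~ log(1/eps)
    safe = np.sum(C(Q) * (f(*p) - f(p[0] + qx, p[1] + qy)) * w)  # converges
    print(f"cutoff {eps:7.0e}:  Gamma = {gamma:8.3f}   subtracted = {safe:8.4f}")
```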
III.3 Current matrix elements
The current matrix element is the only ingredient of our equations that depends on the nature of the emitter (i.e., scalar or fermion). Since \( p_\perp \) is parametrically small compared to \( p_\parallel \), it is sufficient to evaluate these matrix elements in the limit of small transverse momentum. We only need the transverse components of the current because the currents are dotted into photon polarization vectors in Eq. (2). By rotational invariance about the photon axis, these must be proportional to \( \mathbf p_\perp \). Let q denote the electric charge of the emitting particle (so that up quarks, for example, have \( q = \frac{2}{3} e \)). Then in the small \( p_\perp \) limit, the transverse current matrix element takes the form
where we have chosen to factor out the charge of the emitter. The explicit form of the “splitting function”, which depends on \( p_\parallel \) and k as well as the spin of the emitter, will be discussed momentarily. Given the linearity of the integral equation (41) (and the fact that it only couples different values of \( \mathbf p_\perp \), not \( p_\parallel \)), we can factor out all dependence on the emitter type by redefining the transverse part of the vertex function as
to obtain the reduced form below, where \( \mathbf f(\mathbf p_\perp) \) is the solution to the integral equation (46).[14] Rotation invariance about the photon axis implies that \( \mathbf f(\mathbf p_\perp) \) must equal \( \mathbf p_\perp \) times a scalar function of \( p_\perp^2 \), \( p_\parallel \), and k. (See Ref. [26].) But the resulting integral equation (46) is more compact if \( \mathbf f \) is left as a vector function.
This gives the result for bremsstrahlung production from a single type and spin (and initial color) of charged particle in a way that clearly isolates (i) the dependence on the type of the particle in the splitting factor, and (ii) the dependence on the details of the frequency-dependent correlation of the background field in the collision kernel.
If the charge carriers are scalars, one would have
The numerator in the middle expression is the usual photon vertex in relativistic normalization, while the denominator arises from our use of a non-relativistic normalization (16) for our states. Making a similar calculation for fermions, one obtains[15] See, for example, Ref. [25] for more detail, bearing in mind the change to the non-relativistic normalization used here.
This result can also be expressed in terms of the leading-order Altarelli-Parisi (or DGLAP) splitting functions for (hard) photon bremsstrahlung from a charged particle,
where[16] \( P_{q \to gq}(z) \) is the standard result for gluon bremsstrahlung from a quark. The corresponding splitting function for photon production is obtained by replacing the color factor \( C_{\rm F} \) by 1. The treatment of the soft photon singularities at the endpoints is not relevant to us, since our photons are hard; hence we may ignore the delta-function term and the endpoint prescription on the denominator.
with z the momentum fraction carried by the photon.
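For reference, here is a small helper of ours implementing the standard z-dependences of the leading-order splitting functions mentioned above, with the color factor already set to 1 for photon emission. Overall normalization conventions vary between references, so treat these as the z-shapes only.

```python
# Leading-order splitting-function shapes (ours) for photon emission, with z
# the photon momentum fraction and the color factor replaced by 1.
def P_fermion(z: float) -> float:
    """q -> q + gamma: (1 + (1 - z)^2) / z."""
    return (1.0 + (1.0 - z) ** 2) / z

def P_scalar(z: float) -> float:
    """scalar -> scalar + gamma: 2 (1 - z) / z."""
    return 2.0 * (1.0 - z) / z

for z in (0.1, 0.5, 0.9):
    print(f"z = {z}: fermion {P_fermion(z):.3f}, scalar {P_scalar(z):.3f}")
```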
III.4 Relationship to Migdal's equation
Let us note in passing the relationship of our result to similar equations written by Migdal [29, 30] for the case of hard bremsstrahlung during electron scattering from the static Coulomb fields of randomly distributed atoms. If the background were static, we could write
and the collision term in (41) would then become
Migdal has an equivalent expression (Eq. (9) of Ref. [30]), but with this delta function replaced by a slightly different one.
At leading order, however, both of these delta functions are equivalent, and so our result reduces to Migdal's in this special case.
Migdal went on to solve his equation in the approximation that the LPM effect was parametrically large (roughly, that the formation time was large compared to the mean free collision time), in which case it turned out he was able to expand in the logarithm of the ratio of those two time scales.[17] At very high energies, this logarithm is modified in Migdal's analysis by the finite nuclear size of the atomic scatterer, which is also irrelevant to our application. In our application, the formation time and the collision time are of the same parametric order, and such an expansion is not useful.
IV Pair annihilation
The treatment of pair annihilation is very closely related to that of bremsstrahlung — a fact which is slightly obscured in the present time-ordered discussion but is more manifest in our original treatment of Ref. [25]. The contribution to the current-current correlator due to pair annihilation, in a form analogous to Eq. (6) for bremsstrahlung, is
Using the fact that particles propagate independently in our approximation (the only interactions are with the background field), this can be written in terms of one-particle evolution amplitudes as
Translation invariance now gives the momentum conservation constraint
Note that we have ignored processes where the particle and antiparticle interact directly with each other via gluon exchange, rather than with the random background field created by other color charge carriers in the plasma. Such processes are sub-leading in g, as are vertex corrections to bremsstrahlung.[18] A simple qualitative way to understand this is to consider the typical interaction energy times the photon formation time (which, among other things, is the time the annihilating pair spend within a Debye screening length of each other) in the center of mass frame of the almost-collinear annihilating pair. If this product is small compared to 1, then the effect of direct gluon interactions between the pair is suppressed. The formation time, boosted from the plasma frame to the center-of-mass frame, becomes shorter, and the resulting energy-time product for the direct interaction between the pair is parametrically small. For comparison, the energy-time product for the interaction with a single random colored particle is of the same parametric size, but the pair encounters many such random particles during the formation time. These encounters will on average cancel each other in their effects except for statistical fluctuations, which enhance the contribution per random encounter enough to give a net, unsuppressed effect.
Figure 16: Two (time-ordered) diagrams whose interference contributes to the pair annihilation rate.
The pair annihilation contribution to the current correlator represents the interference of the two processes shown in Fig. 16. The interference of these amplitudes can again be depicted by a diagram like Fig. 9, but now the interpretation is slightly different. The left photon vertex represents the conjugate of the first diagram of Fig. 16, and everything else represents the second diagram of Fig. 16. The same time and space ordering considerations discussed for bremsstrahlung again imply that the only relevant correlations at leading order are those depicted by Fig. 13. The resulting integral equation is closely related to the one for bremsstrahlung:
with[19] A note on signs: the factors from the two soft gluon interactions cancel the minus sign associated with the fact that the color generators for the anti-particle are \( -T^{a*} \) instead of \( T^a \).
The pinching pole approximation is now |
039b27c6f1f1b0dc |
Relation between Schrödinger's equation and the path integral formulation of quantum mechanics. Background: Schrödinger's equation. Schrödinger's equation, in bra–ket notation, is \( i\hbar \frac{d}{dt} \left| \psi(t) \right> = \hat H \left| \psi(t) \right> \), where \( \hat H \) is the Hamiltonian operator.
We have assumed for simplicity that there is only one spatial dimension. The Hamiltonian operator can be written \( \hat H = \frac{\hat p^2}{2m} + V(\hat q) \), where \( V(\hat q) \) is the potential energy, m is the mass, and we have assumed for simplicity that there is only one spatial dimension q. The formal solution of the equation is \( \left| \psi(t) \right> = e^{-i \hat H t/\hbar} \left| \psi(0) \right> \), where we have assumed the initial state is a free-particle spatial state. The transition probability amplitude for a transition from an initial state to a final free-particle spatial state at time T is the corresponding matrix element of \( e^{-i \hat H T/\hbar} \). Path integral formulation: The path integral formulation states that the transition amplitude is simply the integral of the quantity \( e^{iS/\hbar} \) over all possible paths, where S is the classical action. The reformulation of this transition amplitude, originally due to Dirac[1] and conceptualized by Feynman,[2] forms the basis of the path integral formulation.[3] From Schrödinger's equation to the path integral formulation: The transition amplitude can then be written.
Quantum logic. Quantum logic can be formulated either as a modified version of propositional logic or as a noncommutative and non-associative many-valued (MV) logic.[1][2][3][4][5] Quantum logic has some properties which clearly distinguish it from classical logic, most notably, the failure of the distributive law of propositional logic: p and (q or r) = (p and q) or (p and r), where the symbols p, q and r are propositional variables.
Quantum triviality. In a quantum field theory, charge screening can restrict the value of the observable "renormalized" charge of a classical theory.
If the only allowed value of the renormalized charge is zero, the theory is said to be "trivial" or noninteracting. Thus, surprisingly, a classical theory that appears to describe interacting particles can, when realized as a quantum field theory, become a "trivial" theory of noninteracting free particles. This phenomenon is referred to as quantum triviality. Strong evidence supports the idea that a field theory involving only a scalar Higgs boson is trivial in four spacetime dimensions,[1] but the situation for realistic models including other particles in addition to the Higgs boson is not known in general. Mathematically equivalent formulations of quantum mechanics. Path integral formulation. The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition.
The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all possible random walks. For this reason path integrals were used in the study of Brownian motion and diffusion a while before they were introduced in quantum mechanics.[3] Einstein–Maxwell–Dirac equations. Einstein–Maxwell–Dirac equations (EMD) are related to quantum field theory.
Einstein–Maxwell–Dirac equations
The current Big Bang Model is a quantum field theory in a curved spacetime. Unfortunately, no such theory is mathematically well-defined; in spite of this, theoreticians claim to extract information from this hypothetical theory. Invariance mechanics. The invariant quantities made from the input and output states of a system are the only quantities needed to give a probability amplitude to a given system.
This is what is meant by the system obeying a symmetry. Since all the quantities involved are relative quantities, invariance mechanics can be thought of as taking relativity theory to its natural limit. Wigner's theorem. Wigner's theorem, proved by Eugene Wigner in 1931,[1] is a cornerstone of the mathematical formulation of quantum mechanics.
The theorem specifies how physical symmetries such as rotations, translations, and CPT act on the Hilbert space of states. According to the theorem, any symmetry acts as a unitary or antiunitary transformation in the Hilbert space. More precisely, it states that a surjective (not necessarily linear) map. Common integrals in quantum field theory. There are common integrals in quantum field theory that appear repeatedly.[1] These integrals are all variations and generalizations of gaussian integrals to the complex plane and to multiple dimensions.
Common integrals in quantum field theory
Other integrals can be approximated by versions of the gaussian integral. Fourier integrals are also considered. Scalar field theory. In theoretical physics, scalar field theory can refer to a classical or quantum theory of scalar fields.
Scalar field theory
A field which is invariant under any Lorentz transformation is called a "scalar", in contrast to a vector or tensor field. The quanta of the quantized scalar field are spin-zero particles, and as such are bosons. The only fundamental scalar field that has been observed in nature is the Higgs field. However, scalar fields appear in the effective field theory descriptions of many physical phenomena. An example is the pion, which is actually a "pseudoscalar", which means it is not invariant under parity transformations which invert the spatial directions, distinguishing it from a true scalar, which is parity-invariant. . , has a particularly simple form: it is diagonal, and here we use the + − − − sign convention. List of quantum field theories. Topological quantum field theory. A topological quantum field theory (or topological field theory or TQFT) is a quantum field theory which computes topological invariants.
Although TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory and the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for work related to topological field theory. In condensed matter physics, topological quantum field theories are the low energy effective theories of topologically ordered states, such as fractional quantum Hall states, string-net condensed states, and other strongly correlated quantum liquid states. Overview[edit] Local quantum field theory. Let Mink be the category of open subsets of Minkowski space M with inclusion maps as morphisms. We are given a covariant functor from Mink to uC*alg, the category of unital C* algebras, such that every morphism in Mink maps to a monomorphism in uC*alg (isotony).
(Poincaré covariance). Local quantum field theory. Axiomatic quantum field theory. Axiomatic quantum field theory is a mathematical discipline which aims to describe quantum field theory in terms of rigorous axioms. It is strongly associated with functional analysis and operator algebras, but has also been studied in recent years from a more geometric and functorial perspective.
There are two main challenges in this discipline. Constructive quantum field theory. Attempts to put quantum field theory on a basis of completely defined concepts have involved most branches of mathematics, including functional analysis, differential equations, probability theory, representation theory, geometry, and topology. It is known that a quantum field is inherently hard to handle using conventional mathematical techniques like explicit estimates. This is because a quantum field has the general nature of an operator-valued distribution, a type of object from mathematical analysis. The existence theorems for quantum fields can be expected to be very difficult to find, if indeed they are possible at all. Form factor (quantum field theory) For example, at low energies the interaction of a photon with a nucleon is a very complicated calculation involving interactions between the photon and a sea of quarks and gluons, and often the calculation cannot be done.
Often in this context, form factors are also called "structure functions", since they can be used to describe the structure of the nucleon. However the general form of the interaction is known, where represents the photon momentum (equal to E/c, where E is the energy of the photon). Wheeler–Feynman absorber theory. The Wheeler–Feynman absorber theory (also called the Wheeler–Feynman time-symmetric theory) is an interpretation of electrodynamics derived from the assumption that the solutions of the electromagnetic field equations must be invariant under time-reversal symmetry, as are the field equations themselves. Indeed, there is no apparent reason for the time-reversal symmetry breaking which singles out a preferential time direction and thus makes a distinction between past and future. A time-reversal invariant theory is more logical and elegant.
Another key principle, resulting from this interpretation and reminiscent of Mach's principle due to Tetrode, is that elementary particles are not self-interacting. This immediately removes the problem of self-energies. This theory is named after its originators, the late physicists Richard Feynman and John Archibald Wheeler. T-symmetry and causality[edit] Supersymmetry. Multivalued gauge transformations. Gauge theory. Haag's theorem. Haag's theorem. Rudolf Haag postulated [1] that the interaction picture does not exist in an interacting, relativistic quantum field theory (QFT), something now commonly known as Haag's Theorem. Haag's original proof was subsequently generalized by a number of authors, notably Hall and Wightman,[2] who reached the conclusion that a single, universal Hilbert space representation does not suffice for describing both free and interacting fields.
In 1975, Reed and Simon proved [3] that a Haag-like theorem also applies to free neutral scalar fields of different masses, which implies that the interaction picture cannot exist even under the absence of interactions. Renormalization. Renormalization. Axiomatic approaches. Particle conservation and non-conservation. Physical meaning of particle indistinguishability. Unification of fields and particles. Dynamics. Second Quantization. Field operators. Boson. In quantum mechanics, a boson (/ˈboʊsɒn/,[1] /ˈboʊzɒn/[2]) is a particle that follows Bose–Einstein statistics. Bosons make up one of the two classes of particles, the other being fermions.[3] The name boson was coined by Paul Dirac[4] to commemorate the contribution of the Indian physicist Satyendra Nath Bose[5][6] in developing, with Einstein, Bose–Einstein statistics—which theorizes the characteristics of elementary particles.[7] Examples of bosons include fundamental particles such as photons, gluons, and W and Z bosons (the four force-carrying gauge bosons of the Standard Model), the recently discovered Higgs boson, and the hypothetical graviton of quantum gravity; composite particles (e.g. mesons and stable nuclei of even mass number such as deuterium (with one proton and one neutron, mass number = 2), helium-4, or lead-208[Note 1]); and some quasiparticles (e.g.
Cooper pairs, plasmons, and phonons).[8]:130 Types[edit] Properties[edit] Pauli exclusion principle. Fermion. Antisymmetric wavefunction for a (fermionic) 2-particle state in an infinite square well potential. Second quantization. Quantum hydrodynamics. Quantum electrodynamics. First quantization. Single and many-particle Quantum mechanics. Lagrangian formalism. Classical field theory. Classical and Quantum fields. Photon polarization. Symmetry in quantum mechanics. Static forces and virtual-particle exchange.
Geometrodynamics. Quantum field theory in curved spacetime. Quantum field theory. Introduction to quantum mechanics. What is Quantum Physics. Quantum mechanics. "SPOOKY" QUANTUM BIOLOGY. EINSTEIN and BLACK HOLES. INSIDE BLACK HOLES. BIRD: QUANTUM ENTENGLEMENT. RADIUS of BLACK HOLES. QUANTUM EFFECTS. Entenglement + Superposition. ISSUE of COHERENCE. Supersolidity loses its luster. Northwestern University & the Psi’ence of Presentiment. Physicists May Have Evidence Universe Is A Computer Simulation.
Einstein to shed light on black holes. Wormhole. Weird Science. Weird Science (Part 2) Quantum "spooky action at a distance" travels at least 10,000 times faster than light. Controversial quantum computer aces entanglement tests - physics-math - 08 March 2013. Breaking the Speed of (Light) Thought. So you think YOU'RE confused about quantum mechanics? Scientists await new worlds as CERN collider is refitted. Back to the Future 2 – Here We Come. Entropy law linked to intelligence, say researchers. Nothing to see: The man who made a Majorana particle - opinion - 13 May 2013. Vortex to another dimension reported in Brighton. Researcher teleports with a kitten. Quantum Puzzles: Funny Things Happen When Space And Time Vanish : 13.7: Cosmos And Culture. The True Science of Parallel Universes. Time Travel: What is the most mind boggling example of a time travel paradox.
Time Travel: Which paradox comes most close to support the possibility of Time Travel. Weird Quantum Tunneling Enables 'Impossible' Space Chemistry. If this theory is correct, we may live in a web of alternate timelines. Is The Universe A Hologram? Physicists Say It's Possible. Simulations back up theory that Universe is a hologram. Quantum black hole study opens bridge to another universe. Is Earth Weighed Down By Dark Matter, Or Internet? #3 Parallel Worlds exist and will soon be testable, expert says. The Quantum Mechanics of Fate - Issue 9: Time. Scientific Research Suggests We Unconsciously React to Events Up to 10 Seconds Before They Happen. Predicting the unpredictable: Critical analysis and practical implications of predictive anticipatory activity.
How To Change The Past. Discovery of Quantum Vibrations in 'Microtubules' Inside Brain Neurons Supports Controversial Theory of Consciousness. The Sacred, Spherical Cows of Physics - Issue 13: Symmetry. Best Explanation of Quantum Field Theory That You Will Ever Hear, Provided by Sean Carroll in Less than 2 Minutes at the 46th Annual Fermilab Users Meeting. Wormhole Photon Time Travel - Casimir Energy, Messages. |
2452689eea7074f1 | From the Schrödinger Equation to the Uncertainty Principle
Last time, we walked through some of the history of quantum mechanics and came out with the Schrödinger Equation, the master equation of nonrelativistic quantum mechanics. Much of what we’ll do in this course will involve solving this equation in a variety of interesting cases; but before we begin, it’s worth plunging a bit more deeply into the equation itself and seeing what we can learn just from its structure. Among other things, we’ll see the relationship of the abstract vectors we get from the linear algebra approach to the functions we use in the differential-equation approach; see the (rather simple) way that real systems evolve over time; and encounter the fundamental limitations on measurement in quantum mechanics.
The relationship between vectors and functions
We wrote down the Schrödinger equation in terms of differential operators acting on functions:
-\frac{\hbar^2}{2m}\nabla^2\Psi + V(x)\Psi = i\hbar\frac{\partial}{\partial t}\Psi
Here Ψ is a complex-valued wave function, and its magnitude squared can be interpreted as a probability — specifically, for any linear operator A built up out of X’s and P’s and so on,
\left<A\right>=\int {\rm d} x \Psi^\star(x) A \Psi(x).
We also identified some important operators:
X\Psi = x\Psi
H\Psi = +i\hbar\frac{\partial}{\partial t}\Psi
We didn’t write down the first equation last time, but it’s somewhat obvious, and I’m writing it for completeness; the X operator simply means “multiply by x.”
Let’s apply some of our linear algebra to these equations. Ψ is acting like a vector in the space of functions, so let’s explicitly denote it as a vector and write it as \left|\Psi\right>. How does this abstract vector relate to the function \Psi(x, t)? It’s the same as the relationship of an abstract vector in an abstract vector space to the explicit column of numbers as which we normally write it. Those numbers are simply the coefficients in \left|v\right>=\sum_i v_i\left|e_i\right>, where the \left|e_i\right> are the basis vectors of the space. Similarly, the \Psi(x, t) are the coefficients of \left|\Psi\right> in a basis expansion — specifically, the basis of eigenvectors of X.
To see the details, let’s look more carefully at the eigenvectors of X. If we think about these as functions, then they must satisfy X\phi = x\phi = \lambda\phi. Now, the only way that xφ(x) can be proportional to φ(x) is if φ vanishes at all but (at most) a single value of x. The function which satisfies this is the Dirac delta function:
\delta(x) = 0\ \ (x\ne 0)
\int_{-\infty}^{+\infty} \delta(x)\, {\rm d} x = 1
This function is an infinitely high spike centered at the origin.1 It satisfies the useful relationship
\int_{-\infty}^{+\infty} \delta(x-x_0) f(x)\, {\rm d} x = f(x_0),
which follows directly from the definition; it’s the continuous analogue of the Kronecker delta \delta_{ij}. The eigenfunctions of X are simply Dirac deltas centered at every possible value of x:
\left|x_0\right> = \delta(x - x_0).
These obviously form a basis for the set of functions on the real line. In fact, it’s not hard to expand any function in terms of them:
f(x_0)=\int {\rm d}x\,f(x)\delta(x-x_0) \quad\Longrightarrow\quad \left|f\right>=\int {\rm d}x\,f(x)\left|x\right>.
i.e., the function values f(x) are exactly the coefficients of f in the X-basis.
Why did I bother with this? Well, because we can easily consider other bases, too. For example, the eigenfunctions of P satisfy
-i\hbar\partial_x \phi(x) = p \phi(x);
\phi_p(x) = e^{i p x / \hbar}.
The subscript “p” simply indicates which eigenfunction we’re looking at.2 The fact that I’ve written these as functions is simply the expansion of the P eigenvectors in terms of the X eigenvectors:
\left|p\right> = \int {\rm d}x\, e^{ipx/\hbar} \left|x\right>.
So if I’m talking about some arbitrary state vector \left|\Psi\right>, (I’ll refer to the wave function as a “state vector” often, especially when emphasizing the fact that it’s a vector in this abstract Hilbert space) I can expand it in the X-representation, i.e. as a function of x, or in the P-representation, i.e. as a function of p, and it’s the same vector. The two are related by a simple change of basis:
\Psi(p) \equiv \left<p\right|\left.\Psi\right> = \sum_x \left<p\right|\left.x\right>\left<x\right|\left.\Psi\right> = \int{\rm d} x\, e^{-ipx/\hbar} \Psi(x).
In the first step, I used the fact that \left<p\right|\left.\Psi\right> extracts the component of \left|\Psi\right> parallel to \left|p\right>, i.e. \Psi(p). In the second step, I used the fact that the x’s form a basis, so \sum_x \left|x\right>\left<x\right| = 1. This operation is an extremely common move in QM, and is generally referred to as “inserting a complete set of states.” In the third step, I used the expansion of the \left|p\right> in terms of the \left|x\right>‘s, above. Thus the function of position and the function of momentum are related by a simple Fourier transform! In general, we will be able to switch between arbitrary pairs of basis functions by the same method. While most of the resulting integrals won’t be quite as simple as Fourier transforms, they will be reasonably manageable. We will also show later on that, whenever q and p are canonically conjugate coordinates, they will have the same relationship as x and p and thus really will have a Fourier transform relationship.
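This Fourier relationship is easy to verify numerically for a Gaussian wave packet. The sketch below is ours; it uses ℏ = 1, a symmetric (2πℏ)^{-1/2} normalization of the transform kernel (a convention choice, per footnote 2 the planewave normalization is arbitrary), and simple quadrature rather than an FFT to avoid convention pitfalls.

```python
# A quick numeric check (ours) of the Fourier relationship between the X- and
# P-representations, for a Gaussian wave packet of width sigma (hbar = 1).
import numpy as np

hbar, sigma = 1.0, 0.7
x = np.linspace(-40, 40, 8001)
dx = x[1] - x[0]
psi_x = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

p = np.linspace(-8, 8, 401)
# psi(p) = (2 pi hbar)^{-1/2} * integral dx e^{-ipx/hbar} psi(x)
psi_p = np.array([np.sum(np.exp(-1j * pp * x / hbar) * psi_x) * dx
                  for pp in p]) / np.sqrt(2 * np.pi * hbar)

# Analytic transform of the Gaussian: another Gaussian of width hbar/sigma.
analytic = (sigma**2 / (np.pi * hbar**2)) ** 0.25 * np.exp(-p**2 * sigma**2 / (2 * hbar**2))
print(np.max(np.abs(psi_p - analytic)))  # small: quadrature error only
```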
Now let’s look at our expression for expectation values. Rewritten in vector language, it says
\left<A\right> = \left<\Psi\right|A\left|\Psi\right>.
(The expression of this as an integral simply follows from inserting two complete sets of x-states, on either side of the A. Exercise: Show this in detail)
Note, also, that using this probability interpretation there is a very clear interpretation of the meaning of \left|\Psi\right> being an eigenvector of A; if A\left|\Psi\right> = a\left|\Psi\right>, then \left<\Psi\right|A\left|\Psi\right>=a\left<\Psi\right|\left.\Psi\right>=a. (Where we used the normalization relationship, \left<\Psi\right|\left.\Psi\right> = 1; note how in vector notation, this is just a statement that \left|\Psi\right> is a unit vector) An eigenstate of an operator is simply a state in which we have a definite value of the operator — i.e., a spike probability distribution.
We’ll routinely move back and forth between the function (differential-equation) description and the algebra (state-vector) description, depending on which is more convenient.
The Uncertainty Principle
Now let’s the fact that operators — or at least, Hermitian operators, which have real eigenvalues — seem to correspond to physically observable quantities with our earlier demonstration that commuting operators share eigenvectors. This means that if two operators A and B commute, then \left|\Psi\right> can simultaneously be an eigenstate of both operators — i.e., it can be described as having definite values of both quantities at once. So if we have a collection of physical observables in a system, it is natural to try to build a maximal set of commuting observables, and pick as our basis their simultaneous eigenstates. For reasons we’ll see later, we’ll almost always want the Hamiltonian to be one of these operators, even if it greatly restricts our choices of other operators to add to the set.
What happens if they don’t commute? Let’s assume that [A, B]\ne0, and that we are in some fixed state \left|\Psi\right>. Let us define the operator
\Delta A \equiv A - \left<\Psi\right|A\left|\Psi\right>.
The second term is simply a number; this operator measures the deviation of a measurement of A from the mean. The expectation value of its square is the variation, a.k.a. the mean-square deviation:
\left<(\Delta A)^2\right> = \left<A^2 - 2A\left<A\right> + \left<A\right>^2\right> = \left<A^2\right> - \left<A\right>^2.
The square root of this term is simply the standard deviation of measurements of A from the mean. (Note that, if A\left|a\right> = a \left|a\right>, then \left<a\right|A^2\left|a\right> = a\left<a\right|A\left|a\right> = a^2 = (\left<a\right|A\left|a\right>)^2, and so \left<\Delta A\right> = 0; an eigenstate of A has a definite value of A, and so its statistical dispersal is zero) It turns out that we can prove a fascinating inequality, for any operators A and B and any state \left|\Psi\right>:
\boxed{\left<(\Delta A)^2\right> \left<(\Delta B)^2\right> \ge \frac{1}{4}|\left<[A, B]\right>|^2}
This is the Heisenberg uncertainty principle.3 Before we analyze it, let’s prove it. First, we prove the Cauchy-Schwarz inequality:
\left<\alpha\right|\left.\alpha\right>\left<\beta\right|\left.\beta\right> \ge |\left<\alpha\right|\left.\beta\right>|^2.
Exercise: Show this. Hint: Start from the fact that the norm of \left|\alpha\right> + \lambda\left|\beta\right> must be ≥0, for any λ.
If we let \left|\alpha\right> = \Delta A \left|\Psi\right>, and \left|\beta\right> = \Delta B \left|\Psi\right>, this then means that
\left<(\Delta A)^2\right>\left<(\Delta B)^2\right> \ge |\left<\Delta A \Delta B\right>|^2.
Now note that
\begin{array}{rcl} \Delta A \Delta B &=& \frac{1}{2}[\Delta A, \Delta B] + \frac{1}{2}(\Delta A \Delta B + \Delta B \Delta A) \\ &\equiv& \frac{1}{2}[\Delta A, \Delta B] + \frac{1}{2}\left\{\Delta A, \Delta B\right\} \end{array}.
(The latter quantity is called the anticommutator; these show up a lot in relativistic QM) Now, the commutator of two Hermitian operators is anti-Hermitian:
\begin{array}{rcl} [A, B]^\dagger &=& (AB-BA)^\dagger\\&=&\left(B^\dagger A^\dagger - A^\dagger B^\dagger\right)\\&=&(BA - AB)\\&=&-[A, B]\end{array}
and similarly, the anticommutator of two Hermitian operators is Hermitian. It's trivial to see that the eigenvalues of any Hermitian operator must be real, and of an anti-Hermitian operator must be imaginary; simply write the operators in the basis where they are diagonal. That in turn implies that the expectation value of an (anti-)Hermitian operator must be real (imaginary), since we can expand \left|\Psi\right> in terms of the basis vectors which diagonalize the operator, and then write out the sum. And since we've now written \Delta A \Delta B as a sum of a purely real and a purely imaginary term, it follows that |\left<\Delta A \Delta B\right>|^2 = \frac{1}{4}|\left<[A, B]\right>|^2 + \frac{1}{4}|\left<\left\{\Delta A, \Delta B\right\}\right>|^2. Since both of the quantities on the right are nonnegative, the theorem immediately follows. ♦
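The inequality can also be spot-checked numerically. Here is a small sketch of ours using truncated harmonic-oscillator matrices for X and P (with ℏ = m = ω = 1); the dimension, seed, and test state are arbitrary choices, and the state is kept on low-lying levels so that truncation artifacts in the commutator are negligible.

```python
# Numeric spot-check (ours) of the uncertainty inequality, using truncated
# harmonic-oscillator matrices for X and P (hbar = m = omega = 1).
import numpy as np

N = 60
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator a|n> = sqrt(n)|n-1>
X = (a + a.T) / np.sqrt(2)
P = 1j * (a.T - a) / np.sqrt(2)

rng = np.random.default_rng(0)
c = rng.normal(size=20) + 1j * rng.normal(size=20)
psi = np.zeros(N, complex)
psi[:20] = c / np.linalg.norm(c)             # random state on low-lying levels

def ev(op):
    """Expectation value <psi| op |psi>."""
    return psi.conj() @ op @ psi

varX = ev(X @ X) - ev(X) ** 2
varP = ev(P @ P) - ev(P) ** 2
comm = ev(X @ P - P @ X)                     # should be ~ i away from the edge
print(varX.real * varP.real, ">=", 0.25 * abs(comm) ** 2)
```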
So now that we’ve proven the uncertainty principle, what does it mean? It means that, if two operators don’t commute, then no state can be in a simultaneous eigenket of both, or have a definite value of both; in fact, the product of the errors in measuring both of the quantities is bounded from below.
Let’s be concrete; take the operators X and P. Their commutator is easy to work out: for any \Psi,
[X, P]\Psi = -i\hbar\left(x \partial \Psi - \partial (x \Psi)\right) = +i\hbar \Psi,
and thus [X, P] = i\hbar. Then for any physical state, no matter what it is, no matter what the Hamiltonian or potential function or quality of the experiment,
\left<(\Delta X)^2\right>^{1/2} \left<(\Delta P)^2\right>^{1/2} \ge \hbar/2.
You can physically visualize why this happens in terms of the explicit eigenfunctions we worked out earlier for X and P. If you are in an X-eigenstate, i.e. \Psi(x) = \delta(x), then you are by no means in a P eigenstate; in fact, you are in a linear combination of infinitely many P eigenstates with different values of momenta, the coefficients coming from a Fourier transform. It should hardly be surprising, then, that measuring P in such a circumstance will lead to an infinite range of possible values. Here \Delta X = 0, and so \Delta P = \infty. Likewise, if we were in a P-eigenstate, we would be in an infinite superposition of X-eigenstates. Other functions sit between these two extremes.
Exercise: Let \Psi(x) = A e^{-x^2/2\sigma^2} be a normalized Gaussian. Find A so that \left<1\right> = 1. Evaluate \left<X\right>, \left<P\right>, \left<X^2\right>, and \left<P^2\right>. (The integrals are all standard; you should be able to do them by hand) Show that \left<(\Delta X)^2\right>^{1/2}\left<(\Delta P)^2\right>^{1/2} = \frac{\hbar}{2} for any σ.
This form of Ψ is often referred to as a wave packet. It saturates the position-momentum uncertainty relationship, and is reasonably localized in space. (With the definition of “reasonably” being “within σ”) As such, it’s a very “particle-like” state for a system to be in.4
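If you want to check the exercise's claim numerically before doing the integrals by hand, here is a small quadrature sketch of ours (ℏ = 1), using the fact that for a normalized real wave function \left<P^2\right> equals the integral of |ψ′|².

```python
# Numeric verification (ours) that the Gaussian wave packet saturates the
# uncertainty bound for every width sigma (hbar = 1).
import numpy as np

x = np.linspace(-40, 40, 60001)
dx = x[1] - x[0]
for sigma in (0.5, 1.0, 3.0):
    psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))
    dpsi = np.gradient(psi, dx)
    X2 = np.sum(x**2 * psi**2) * dx   # <X^2>  (<X> = 0 by symmetry)
    P2 = np.sum(dpsi**2) * dx         # <P^2> = int |psi'|^2  (hbar = 1)
    print(sigma, np.sqrt(X2 * P2))    # -> 0.5 = hbar/2 for every sigma
```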
The uncertainty principle took many years for people to fully digest, and physicists spent a great deal of time5 trying to build thought experiments (and physical experiments) designed to defeat it, simultaneously measuring the position and momentum of a particle. In every case it failed; generally, the failure takes the form of the physical action required to measure one of the quantities disturbing the other quantity by a certain minimum amount. To take a simple example, consider Heisenberg’s original motivating example, using a microscope to measure the position and velocity of a particle. In order to see the particle, we must bounce a photon off of it. But the ability to resolve the particle’s position is bounded below by the wavelength, so we need \Delta x \ge \lambda; but this implies that the photon imparts its own energy to the particle, and its own momentum: \Delta p = \hbar\omega/c = h/\lambda. So \Delta x \Delta p \gtrsim \hbar. There are obviously many possible refinements of this idea; see the Wikipedia article for a good place to start exploring if you’re interested.
The Time-Independent Equation
Very often, the Hamiltonian has no explicit time dependence. In this case, it’s possible to separate the Schrödinger equation into two simpler equations. From a differential equation perspective, we can separate the variables by conjecturing that we can write \Psi(x, t) = \psi(x) \phi(t). Then the Schrödinger equation becomes:
H\psi\phi = i\hbar\psi\partial_t \phi
Dividing both sides (on the left, if you want to be careful) by \psi\phi gives
\frac{1}{\psi}H\psi = i\hbar\frac{1}{\phi}\partial_t \phi.
The left-hand side of this equation is a function only of x; the right-hand side, only of t. The only way these two functions can therefore be equal to one another is if they’re both equal to a constant, which we’ll denote by E. (This will be our one exception to the constants-are-lowercase rule) The right-hand side is now simple to solve:
i\hbar\dot{\phi} = E\phi \Rightarrow \phi(t) = \phi(0) e^{-iEt/\hbar}.
The left-hand side is
H\psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V(x) \psi = E\psi.
This is the time-independent Schrödinger equation, and is generally much easier to solve than the time-dependent version. We can immediately see that it is simply an eigenvalue equation for H; and knowing that H is our Hamiltonian, we can immediately interpret the physical meaning of E as the energy of the state.
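The time-independent equation is also the easiest one to attack numerically: discretize H on a grid and diagonalize. Here is a minimal finite-difference sketch of ours, with ℏ = m = 1 and a harmonic potential chosen purely as a test case (its exact spectrum is known).

```python
# A standard finite-difference sketch (ours) of the time-independent equation:
# discretize H = -1/2 d^2/dx^2 + V(x) (hbar = m = 1) and diagonalize.
import numpy as np

n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2                         # harmonic oscillator, omega = 1

main = 1.0 / dx**2 + V                 # -psi''/2 via the 3-point stencil
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)
print(E[:5])                           # ~ 0.5, 1.5, 2.5, 3.5, 4.5
```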
Time Evolution
If we write down the Schrödinger equation for a time-independent Hamiltonian in vector notation,
H\left|\Psi(t)\right> = i\hbar\partial_t\left|\Psi(t)\right>,
we can solve it in a very formal sense:
\left|\Psi(t)\right> = e^{-iHt/\hbar}\left|\Psi(0)\right>.
The exponential of an operator is simply defined by its Taylor series; if you write out the infinite sum, it’s obvious that this solves the differential equation. The operator on the right-hand side is known as the time-evolution operator, U(t) \equiv e^{-iHt/\hbar}, since it transforms kets at time T to the corresponding kets at time T+t.6 This equation is most useful if we recall that the eigenvectors of H form a basis, and expand our initial condition in those terms;
\left|\Psi(0)\right> = \sum_n c_n \left|n\right>,
where n is some index that runs over the eigenvectors of H. Then
\left|\Psi(t)\right> = \sum_n c_n e^{-iHt/\hbar}\left|n\right> = \sum_n c_n e^{-iE_nt/\hbar}\left|n\right>.
This is how kets evolve over time. Note that if \left|\Psi(0)\right> is an eigenket of H, then there is only one term in this sum, and the “time-evolution” of \left|\Psi\right> is nothing more than a phase changing over time; since all of our physically measurable quantities take the form \left<\Psi\right|A\left|\Psi\right>, this means that the expectation value of any operator that doesn’t have an explicit time-dependence built in is going to be constant over time. Overall phases in the wave function have no physical meaning! (Which if you recall, is exactly why we picked complex numbers for our wave function in the first place)
If on the other hand \left|\Psi(0)\right> is not an eigenket of H, there are multiple terms in the sum, and their relative phases will change over time; this means that expectation values can evolve nontrivially. We’ll see several examples of this shortly.
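Both behaviors are easy to demonstrate with the same truncated-oscillator matrices used earlier. A sketch of ours (ℏ = ω = 1; the two initial states are arbitrary test choices) showing ⟨X⟩ constant for an eigenstate and oscillating for a superposition:

```python
# Demonstration (ours) of the two cases just described, in the harmonic-
# oscillator eigenbasis, where time evolution is just phases e^{-i E_n t}.
import numpy as np

N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (a + a.T) / np.sqrt(2)
E = np.arange(N) + 0.5                  # oscillator energies (hbar = omega = 1)

def x_expect(c, t):
    """<X>(t) for initial eigenbasis coefficients c."""
    ct = c * np.exp(-1j * E * t)        # c_n e^{-i E_n t}
    return (ct.conj() @ X @ ct).real

eig = np.zeros(N); eig[2] = 1.0                        # pure eigenstate |2>
sup = np.zeros(N); sup[0] = sup[1] = 1 / np.sqrt(2)    # (|0> + |1>)/sqrt(2)

for t in (0.0, 1.0, 2.0):
    print(t, x_expect(eig, t), x_expect(sup, t))
# eigenstate: identically 0; superposition: cos(t)/sqrt(2) oscillation
```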
Note one other thing: For any observable A, \left<\Psi(t)\right|A\left|\Psi(t)\right> = \left<\Psi(0)\right|U^\dagger A U \left|\Psi(0)\right>. Apart from the interesting fact that time-evolution just looks like a change of basis, you should note that if [H, A] = 0, then [U(t), A] = 0 (by the Taylor series), and so the U’s cancel out; the expectation value of A is a constant! The converse is true, too; if \frac{\partial}{\partial t}\left<\Psi(t)\right|A\left|\Psi(t)\right> = 0 for any initial condition, then A must commute with H.
Proof: By Taylor expansion,
\begin{array}{rcl} 0 &=&\frac{\partial}{\partial t}\left<\Psi(t)\right|A\left|\Psi(t)\right>\\ &=&{\displaystyle \sum_{mn} \frac{(-i)^{n-m}}{\hbar^{n+m} m! n!} \frac{\partial}{\partial t} \left[t^{m+n} \left<\Psi(0)\right|H^m A H^n\left|\Psi(0)\right>\right]} \\ &=&{\displaystyle \sum_{mn}\frac{(-i)^{n-m}(m+n)}{\hbar^{m+n} m! n!} t^{m+n-1} \left<\Psi(0)\right|H^m A H^n\left|\Psi(0)\right>\ .}\end{array}
For this to vanish for every t, the coefficient of each power of t must vanish independently; but the coefficient of t^0 is simply \frac{i}{\hbar}[A, H]. ♦
Thus an operator corresponds to a conserved quantity if and only if it commutes with the Hamiltonian. This means that sets of commuting observables which include the Hamiltonian are particularly interesting; they represent sets of simultaneously measurable conserved quantities. Maximal sets of commuting observables are even more interesting; if two eigenkets of such a set have the same eigenvalues under each operator, then (by definition) there is no other quantity which we could measure which would distinguish the two; the two kets must correspond to the same physical state. We can therefore label the eigenkets of such a CSCO by their eigenvalues under each of the operators, and those labels form a complete description of the state of the system in each eigenket. This relatively simple statement will turn out to have profound implications later — in quantum mechanics, when two particles are identical, they’re really identical.
Next Time: A concrete example: The two-state system and nuclear magnetic resonance.
1 Dirac proposed this “function” for exactly this purpose, and mathematicians proceeded to spend decades arguing over whether or not it was a bona fide function. This required some careful rethinking of the definition of functions, some work in measure theory, and so on, and the practical upshot was that yes, this whole thing works just fine. Physicists pretty much ignored the entire controversy.
2 Note that these eigenfunctions aren’t normalized; \int {\rm d} x\ \phi^\star(p_1, x) \phi(p_2, x) is in fact infinite when p_1 = p_2. This is actually an annoying corner case in many of our discussions; the proper way to handle this is to assume that space has a finite extent L, normalize the functions there, and take the limit L\rightarrow\infty at the end. It’s not actually especially illuminating to do this, so for the rest of this course, unless explicitly indicating otherwise, I will leave planewaves unnormalized, and simply take it as implicit that whenever computing expectation values etc. with them, one should do this normalization.
3 Heisenberg considered this his most important discovery; the equation — specialized to the case of x and p — is carved on his tombstone.
4 The entire discussion over “particle-wave duality” was an artifact of the confusion in the early 20th century, especially in the aftermath of de Broglie’s paper, when the two concepts were considered to have very distinct physical meanings. From a modern perspective, the distinction is purely semantic. A system is in a “particle-like” state when it has a fairly definite value of position, i. e. \left<\Delta X\right> is small; it is in a “wave-like” state when it has a fairly definite value of momentum (as a free plane wave does), i.e. \left<\Delta P\right> is small. But these two states are simply endpoints of a continuum; there is nothing particularly privileged about one or the other.
5 Bohr and Einstein famously spent extraordinary amounts of time, especially at the 1927 Solvay Conference, debating these; every day, Einstein would come up with a (generally extremely subtle) objection to the quantum results, and Bohr would (after much hand-wringing) come back with an explanation. Reading up on their debates is fascinating.
6 Note that we can still define U(t) in the case where H does have an explicit time-dependence, but the formula for it isn’t as simple; it’s the solution to the differential equation.
|
0d0dd48167e194fe | Angular Momentum
Angular momentum
This gyroscope remains upright while spinning due to the conservation of its angular momentum.
Common symbols: L
In SI base units: kg m² s⁻¹
Derivations from other quantities: L = Iω = r × p
Dimension: M L² T⁻¹
In physics, angular momentum (rarely, moment of momentum or rotational momentum) is the rotational equivalent of linear momentum. It is an important quantity in physics because it is a conserved quantity--the total angular momentum of a closed system remains constant.
In three dimensions, the angular momentum for a point particle is a pseudovector r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector; the latter is p = mv in Newtonian mechanics. Unlike momentum, angular momentum depends on where the origin is chosen, since the particle's position is measured from it.
Just as for angular velocity, there are two special types of angular momentum of an object: the spin angular momentum is the angular momentum about the object's centre of mass, while the orbital angular momentum is the angular momentum about a chosen center of rotation. The total angular momentum is the sum of the spin and orbital angular momenta. The orbital angular momentum vector of a point particle is always parallel and directly proportional to its orbital angular velocity vector ω, where the constant of proportionality depends on both the mass of the particle and its distance from origin. The spin angular momentum vector of a rigid body is proportional but not always parallel to the spin angular velocity vector Ω, making the constant of proportionality a second-rank tensor rather than a scalar.
Angular momentum is an extensive quantity; i.e. the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. For a continuous rigid body or a fluid the total angular momentum is the volume integral of angular momentum density (i.e. angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body.
Torque can be defined as the rate of change of angular momentum, analogous to force. The net external torque on any system is always equal to the total torque on the system; in other words, the sum of all internal torques of any system is always 0 (this is the rotational analogue of Newton's Third Law). Therefore, for a closed system (where there is no net external torque), the total torque on the system must be 0, which means that the total angular momentum of the system is constant. The conservation of angular momentum helps explain many observed phenomena, for example the increase in rotational speed of a spinning figure skater as the skater's arms are contracted, the high rotational rates of neutron stars, the Coriolis effect, and the precession of gyroscopes. In general, conservation limits the possible motion of a system but does not uniquely determine it.
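A minimal numeric illustration of the figure-skater example, with our own arbitrarily chosen numbers: with no external torque L = Iω is conserved, so reducing I raises ω. Note that the rotational kinetic energy rises; the skater supplies that energy by doing work pulling the arms inward.

```python
# Figure-skater illustration (ours): angular momentum L = I * omega is
# conserved when there is no external torque, so smaller I means larger omega.
I1, omega1 = 4.0, 3.0            # arms out: moment of inertia (kg m^2), rad/s
L = I1 * omega1                  # conserved angular momentum

I2 = 1.5                         # arms pulled in
omega2 = L / I2
print(f"L = {L} kg m^2/s, new spin rate = {omega2:.2f} rad/s")
print(f"kinetic energy: {0.5 * I1 * omega1**2:.1f} J -> {0.5 * I2 * omega2**2:.1f} J")
```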
In quantum mechanics, angular momentum (like other quantities) is expressed as an operator, and its one-dimensional projections have quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called "component") can be measured with definite precision; the other two then remain uncertain. Because of this, the axis of rotation of a quantum particle is undefined. Quantum particles do possess a type of non-orbital angular momentum called "spin", but this angular momentum does not correspond to a spinning motion.[1]
Definition in classical mechanics
Orbital angular momentum in two dimensions
Velocity of the particle m with respect to the origin O can be resolved into components parallel to (v‖) and perpendicular to (v⊥) the radius vector r. The angular momentum of m is proportional to the perpendicular component v⊥ of the velocity, or equivalently, to the perpendicular distance r⊥ from the origin.
Angular momentum is a vector quantity (more precisely, a pseudovector) that represents the product of a body's rotational inertia and rotational velocity (in radians/sec) about a particular axis. However, if the particle's trajectory lies in a single plane, it is sufficient to discard the vector nature of angular momentum, and treat it as a scalar (more precisely, a pseudoscalar).[2] Angular momentum can be considered a rotational analog of linear momentum. Thus, where linear momentum p is proportional to mass m and linear speed v,
p = mv,
angular momentum L is proportional to moment of inertia I and angular speed ω measured in radians per second:[3]
L = Iω.
Unlike mass, which depends only on amount of matter, moment of inertia is also dependent on the position of the axis of rotation and the shape of the matter. Unlike linear velocity, which does not depend upon the choice of origin, orbital angular velocity is always measured with respect to a fixed origin. Therefore, strictly speaking, L should be referred to as the angular momentum relative to that center.[4]
Because I = mr² for a single particle and ω = v/r for circular motion, angular momentum can be expanded, L = mr² × v/r, and reduced to,
L = mvr,
the product of the radius of rotation r and the linear momentum of the particle p = mv, where v = rω in this case is the equivalent linear (tangential) speed at the radius r.
This simple analysis can also apply to non-circular motion if only the component of the motion which is perpendicular to the radius vector is considered. In that case,
L = rmv⊥,
where v⊥ = v sin θ is the perpendicular component of the motion. Expanding, L = rmv sin θ, rearranging, L = r sin θ mv, and reducing, angular momentum can also be expressed,
L = r⊥mv,
where r⊥ = r sin θ is the length of the moment arm, a line dropped perpendicularly from the origin onto the path of the particle. It is this definition, (length of moment arm)×(linear momentum), to which the term moment of momentum refers.[5]
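A quick numerical check (a sketch with made-up values, not from the article) confirms that the three scalar forms rmv⊥, r⊥mv and |r × p| agree:

```python
import numpy as np

m = 1.5                                  # kg (hypothetical values)
r_vec = np.array([2.0, 1.0, 0.0])        # m
v_vec = np.array([-1.0, 3.0, 0.0])       # m/s

r = np.linalg.norm(r_vec)
v = np.linalg.norm(v_vec)

# Perpendicular component of velocity (part of v normal to r):
v_perp = np.linalg.norm(v_vec - (v_vec @ r_vec) / r**2 * r_vec)
# Moment arm: perpendicular distance from the origin to the line of motion.
r_perp = np.linalg.norm(np.cross(r_vec, v_vec)) / v

print(r * m * v_perp)                               # L = r m v⊥  -> 10.5
print(r_perp * m * v)                               # L = r⊥ m v  -> 10.5
print(np.linalg.norm(np.cross(r_vec, m * v_vec)))   # |r × p|     -> 10.5
```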
Scalar angular momentum from Lagrangian mechanics
Another approach is to define angular momentum as the conjugate momentum (also called canonical momentum) of the angular coordinate expressed in the Lagrangian of the mechanical system. Consider a mechanical system with a mass m constrained to move in a circle of radius a in the absence of any external force field. The kinetic energy of the system is
T = ½ma²ω² = ½ma²(dθ/dt)²,
and the potential energy is
U = 0.
Then the Lagrangian is
ℒ(θ, dθ/dt) = T − U = ½ma²(dθ/dt)².
The generalized momentum "canonically conjugate to" the coordinate θ is defined by
pθ = ∂ℒ/∂(dθ/dt) = ma²(dθ/dt) = Iω = L.
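For readers who want to verify the conjugate-momentum computation symbolically, here is a minimal sympy sketch (an illustration under the same assumptions, not the article's own derivation); it reproduces pθ = ma²(dθ/dt):

```python
import sympy as sp

t = sp.symbols('t')
m, a = sp.symbols('m a', positive=True)
theta = sp.Function('theta')(t)

# Lagrangian of a mass m on a circle of radius a, no external forces:
# only kinetic energy, with speed a * dtheta/dt.
L = sp.Rational(1, 2) * m * a**2 * sp.diff(theta, t)**2

# Conjugate (canonical) momentum of the angle theta:
p_theta = sp.diff(L, sp.diff(theta, t))
print(p_theta)   # a**2*m*Derivative(theta(t), t), i.e. I*omega
```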
Orbital angular momentum in three dimensions
Relationship between force (F), torque (τ), momentum (p), and angular momentum (L) vectors in a rotating system. r is the position vector.
To completely define orbital angular momentum in three dimensions, it is required to know the rate at which the position vector sweeps out angle, the direction perpendicular to the instantaneous plane of angular displacement, and the mass involved, as well as how this mass is distributed in space.[6] By retaining this vector nature of angular momentum, the general nature of the equations is also retained, and can describe any sort of three-dimensional motion about the center of rotation: circular, linear, or otherwise. In vector notation, the orbital angular momentum of a point particle in motion about the origin can be expressed as L = Iω, where
I = mr² is the moment of inertia for a point mass,
ω = (r × v)/r² is the orbital angular velocity in radians/sec (units 1/sec) of the particle about the origin,
r is the position vector of the particle relative to the origin, with magnitude r = |r|,
v is the linear velocity of the particle relative to the origin, and
m is the mass of the particle.
This can be expanded, L = mr² × (r × v)/r², reduced, L = m(r × v), and by the rules of vector algebra, rearranged:
L = r × mv = r × p,
which is the cross product of the position vector r and the linear momentum p = mv of the particle. By the definition of the cross product, the vector L is perpendicular to both r and p. It is directed perpendicular to the plane of angular displacement, as indicated by the right-hand rule, so that the angular velocity is seen as counter-clockwise from the head of the vector. Conversely, the vector L defines the plane in which r and p lie.
By defining a unit vector û perpendicular to the plane of angular displacement, a scalar angular speed ω results, where ωû = ω and ω = v⊥/r,
where v⊥ is the perpendicular component of the motion, as above.
The two-dimensional scalar equations of the previous section can thus be given direction:
L = Iω = Iωû = mr²ωû = rmv⊥û = r⊥mvû,
and, for circular motion, where all of the motion is perpendicular to the radius r, L = rmvû.
In the spherical coordinate system the angular momentum vector expresses as
L = m r × v = mr²(θ̇ φ̂ − φ̇ sin θ θ̂).
Orbital angular momentum in four or more dimensions
Angular momentum in higher dimensions can be defined by application of Noether's theorem to rotation groups of higher order. Generalization beyond three dimensions is best treated using differential forms.
Analogy to linear momentum
Angular momentum can be described as the rotational analog of linear momentum. Like linear momentum it involves elements of mass and displacement. Unlike linear momentum it also involves elements of position and shape.
Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it, where it is desired to know what effect the moving matter has on the point: can it exert energy upon it or perform work about it? Energy, the ability to do work, can be stored in matter by setting it in motion, a combination of its inertia and its displacement. Inertia is measured by its mass, and displacement by its velocity. Their product,
p = mv,
is the matter's momentum.[7] Referring this momentum to a central point introduces a complication: the momentum is not applied to the point directly. For instance, a particle of matter at the outer edge of a wheel is, in effect, at the end of a lever of the same length as the wheel's radius, its momentum turning the lever about the center point. This imaginary lever is known as the moment arm. It has the effect of multiplying the momentum's effort in proportion to its length, an effect known as a moment. Hence, the particle's momentum referred to a particular point,
L = rmv,
is the angular momentum, sometimes called, as here, the moment of momentum of the particle versus that particular center point. The equation L = rmv combines a moment (a mass m turning a moment arm r) with a linear (straight-line equivalent) speed v. Linear speed referred to the central point is simply the product of the distance and the angular speed versus the point, v = rω, another moment. Hence, angular momentum contains a double moment: L = rmrω = r²mω. Simplifying slightly, the quantity r²m is the particle's moment of inertia, sometimes called the second moment of mass. It is a measure of rotational inertia.[8]
Moment of inertia (shown here), and therefore angular momentum, is different for every possible configuration of mass and axis of rotation.
Because moment of inertia is a crucial part of the spin angular momentum, the latter necessarily includes all of the complications of the former, which is calculated by multiplying elementary bits of the mass by the squares of their distances from the center of rotation.[9] Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits.
For a rigid body, for instance a wheel or an asteroid, the orientation of rotation is simply the position of the rotation axis versus the matter of the body. It may or may not pass through the center of mass, or it may lie completely outside of the body. For the same body, angular momentum may take a different value for every possible axis about which rotation may take place.[10] It reaches a minimum when the axis passes through the center of mass.[11]
For a collection of objects revolving about a center, for instance all of the bodies of the Solar System, the orientations may be somewhat organized, as is the Solar System, with most of the bodies' axes lying close to the system's axis. Their orientations may also be completely random.
In brief, the more mass and the farther it is from the center of rotation (the longer the moment arm), the greater the moment of inertia, and therefore the greater the angular momentum for a given angular velocity. In many cases the moment of inertia, and hence the angular momentum, can be simplified by,[12]
I = k²m,
where k is the radius of gyration, the distance from the axis at which the entire mass m may be considered as concentrated.
Similarly, for a point mass m the moment of inertia is defined as,
I = r²m,
where r is the radius of the point mass from the center of rotation,
and for any collection of particles mi as the sum,
I = Σi ri²mi.
Angular momentum's dependence on position and shape is reflected in its units versus linear momentum: kg·m²/s, N·m·s, or J·s for angular momentum versus kg·m/s or N·s for linear momentum. When calculating angular momentum as the product of the moment of inertia times the angular velocity, the angular velocity must be expressed in radians per second, where the radian assumes the dimensionless value of unity. (When performing dimensional analysis, it may be productive to use orientational analysis which treats radians as a base unit, but this is outside the scope of the International System of Units.) Angular momentum's units can be interpreted as torque·time or as energy·time per angle. An object with angular momentum of L N·m·s can be reduced to zero rotation (all of the rotational energy can be transferred out of it) by an angular impulse of L N·m·s,[13] or equivalently, by torque or work of L N·m for one second, or energy of L J for one second.[14]
The plane perpendicular to the axis of angular momentum and passing through the center of mass[15] is sometimes called the invariable plane, because the direction of the axis remains fixed if only the interactions of the bodies within the system, free from outside influences, are considered.[16] One such plane is the invariable plane of the Solar System.
Angular momentum and torque
Newton's second law of motion can be expressed mathematically,
F = ma,
or force = mass × acceleration. The rotational equivalent for point particles may be derived as follows: starting from L = Iω, the torque (i.e. the time derivative of the angular momentum) is
τ = (dI/dt)ω + I(dω/dt).
Because the moment of inertia is mr², it follows that dI/dt = 2mr(dr/dt) = 2rp‖, where p‖ is the component of the momentum parallel to the radius vector, and the expression reduces to
τ = Iα + 2rp‖ω.
This is the rotational analog of Newton's Second Law. Note that the torque is not necessarily proportional or parallel to the angular acceleration (as one might expect). The reason for this is that the moment of inertia of a particle can change with time, something that cannot occur for ordinary mass.
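A finite-difference sketch (illustrative only; the force and numbers are invented) checks that the time derivative of L equals the torque r × F for a single particle:

```python
import numpy as np

m = 1.0                                  # kg
F = np.array([0.5, -9.81, 0.0])          # N, constant force
r = np.array([1.0, 2.0, 0.0])            # m
v = np.array([3.0, 0.0, 0.0])            # m/s
dt = 1e-6                                # s, small Euler step

L0 = np.cross(r, m * v)
r1 = r + v * dt                          # advance position
v1 = v + (F / m) * dt                    # advance velocity
L1 = np.cross(r1, m * v1)

dL_dt = (L1 - L0) / dt
torque = np.cross(r, F)
print(dL_dt)    # ≈ [0, 0, -10.81], matches the torque up to O(dt) error
print(torque)   #   [0, 0, -10.81]
```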
Conservation of angular momentum
A figure skater in a spin uses conservation of angular momentum - decreasing her moment of inertia by drawing in her arms and legs increases her rotational speed.
General considerations
A rotational analog of Newton's third law of motion might be written, "In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque."[17] Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant (is conserved).[18]
Seen another way, a rotational analogue of Newton's first law of motion might be written, "A rigid body continues in a state of uniform rotation unless acted by an external influence."[17] Thus with no external influence to act upon it, the original angular momentum of the system remains constant.[19]
The conservation of angular momentum is used in analyzing central force motion. If the net force on some body is directed always toward some point, the center, then there is no torque on the body with respect to the center, as all of the force is directed along the radius vector, and none is perpendicular to the radius. Mathematically, torque τ = r × F = 0, because in this case r and F are parallel vectors. Therefore, the angular momentum of the body about the center is constant. This is the case with gravitational attraction in the orbits of planets and satellites, where the gravitational force is always directed toward the primary body and orbiting bodies conserve angular momentum by exchanging distance and velocity as they move about the primary. Central force motion is also used in the analysis of the Bohr model of the atom.
For a planet, angular momentum is distributed between the spin of the planet and its revolution in its orbit, and these are often exchanged by various mechanisms. The conservation of angular momentum in the Earth-Moon system results in the transfer of angular momentum from Earth to Moon, due to tidal torque the Moon exerts on the Earth. This in turn results in the slowing down of the rotation rate of Earth, at about 65.7 nanoseconds per day,[20] and in gradual increase of the radius of Moon's orbit, at about 3.82 centimeters per year.[21]
The torque caused by the two opposing forces Fg and -Fg causes a change in the angular momentum L in the direction of that torque (since torque is the time derivative of angular momentum). This causes the top to precess.
The conservation of angular momentum explains the angular acceleration of an ice skater as she brings her arms and legs close to the vertical axis of rotation. By bringing part of the mass of her body closer to the axis, she decreases her body's moment of inertia. Because angular momentum is the product of moment of inertia and angular velocity, if the angular momentum remains constant (is conserved), then the angular velocity (rotational speed) of the skater must increase.
The same phenomenon results in extremely fast spin of compact stars (like white dwarfs, neutron stars and black holes) when they are formed out of much larger and slower-rotating stars. Decreasing the size of an object by a factor of n results in an increase of its angular velocity by a factor of n².
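As a rough order-of-magnitude sketch (idealized: mass loss and realistic moment-of-inertia profiles are ignored, so the real numbers differ), the n² scaling gives millisecond-scale periods for a Sun-like star collapsed to neutron-star size:

```python
import math

R1 = 7.0e8            # m, roughly a Sun-like radius
T1 = 25 * 24 * 3600   # s, ~25-day rotation period
R2 = 1.0e4            # m, roughly a neutron-star radius

# With L conserved and I proportional to M*R**2, omega scales as (R1/R2)**2:
omega1 = 2 * math.pi / T1
omega2 = omega1 * (R1 / R2) ** 2
print(2 * math.pi / omega2)   # new period: ~4e-4 s, i.e. sub-millisecond scale
```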
Conservation is not always a full explanation for the dynamics of a system but is a key constraint. For example, a spinning top is subject to gravitational torque making it lean over and change the angular momentum about the nutation axis, but neglecting friction at the point of spinning contact, it has a conserved angular momentum about its spinning axis, and another about its precession axis. Also, in any planetary system, the planets, star(s), comets, and asteroids can all move in numerous complicated ways, but only so that the angular momentum of the system is conserved.
Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved.[22]
Relation to Newton's second law of motion
While the total conservation of angular momentum can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and the Lorentz force). Indeed, given initial conditions of position and velocity for every point, and the forces at such a condition, one may use Newton's second law to calculate the second derivative of position, and solving for this gives full information on the development of the physical system with time.[23] Note, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space.
As an example, consider decreasing of the moment of inertia, e.g. when a figure skater is pulling in her/his hands, speeding up the circular motion. In terms of angular momentum conservation, we have, for angular momentum L, moment of inertia I and angular velocity ω:
0 = dL = d(Iω) = dI ω + I dω.
Using this, we see that the change requires an energy of:
dE = d(½Iω²) = ½dI ω² + Iω dω = −½dI ω²,
so that a decrease in the moment of inertia (dI < 0) requires investing energy.
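A small worked example (hypothetical skater numbers) makes the energy bookkeeping explicit: with L fixed, the rotational energy is E = L²/2I, so halving I doubles E:

```python
L = 60.0      # kg·m²/s, hypothetical skater's angular momentum
I1 = 3.0      # kg·m², arms out
I2 = 1.5      # kg·m², arms pulled in

E1 = L**2 / (2 * I1)
E2 = L**2 / (2 * I2)
print(E1, E2, E2 - E1)   # 600.0 1200.0 600.0: the skater supplies 600 J
```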
This can be compared to the work done as calculated using Newton's laws. Each point in the rotating body is accelerating, at each point of time, with radial acceleration of
−ω²r.
Let us observe a point of mass m, whose position vector relative to the center of motion is parallel to the z-axis at a given point of time, and is at a distance z. The centripetal force on this point, keeping the circular motion, is
−mω²z.
Thus the work required for moving this point to a distance dz farther from the center of motion is
dW = −mω²z dz = −½mω² d(z²).
For a non-pointlike body one must integrate over this, with m replaced by the mass density per unit z. This gives
dW = −½ω² dI,
which is exactly the energy required for keeping the angular momentum conserved.
Note that the above calculation can also be performed per unit mass, using kinematics only. Thus the phenomenon of a figure skater gaining tangential velocity while pulling his or her hands in can be understood as follows in layman's language: the skater's palms are not moving in a straight line, so they are constantly accelerating inwards, but they do not gain additional speed because the acceleration occurs while their motion inwards is zero. However, this is different when pulling the palms closer to the body: the acceleration due to rotation now increases the speed; and because of the rotation, the increase in speed does not translate to a significant speed inwards, but to an increase of the rotation speed.
In Lagrangian formalism
In Lagrangian mechanics, angular momentum for rotation around a given axis is the conjugate momentum of the generalized coordinate of the angle around the same axis. For example, Lz, the angular momentum around the z axis, is:
Lz = ∂ℒ/∂(dθz/dt),
where ℒ is the Lagrangian and θz is the angle around the z axis.
Note that dθz/dt, the time derivative of the angle, is the angular velocity ωz. Ordinarily, the Lagrangian depends on the angular velocity through the kinetic energy: the latter can be written by separating the velocity to its radial and tangential part, with the tangential part at the x-y plane, around the z-axis, being equal to:
Σi ½mi vT,i² = Σi ½mi ri² ωz,i²,
where the subscript i stands for the i-th body, ri for its distance from the z-axis, and m, vT and ωz stand for mass, tangential velocity around the z-axis and angular velocity around that axis, respectively.
For a body that is not point-like, with density ρ, we have instead:
½Iz ωz²,
where Iz is the moment of inertia around the z-axis.
Thus, assuming the potential energy does not depend on ωz (this assumption may fail for electromagnetic systems), we have the angular momentum of the i-th object:
Lz,i = ∂ℒ/∂ωz,i = mi ri² ωz,i = mi vT,i ri.
We have thus far rotated each object by a separate angle; we may also define an overall angle θz by which we rotate the whole system, thus rotating also each object around the z-axis, and have the overall angular momentum:
Lz = Σi Lz,i = Σi mi vT,i ri.
From the Euler-Lagrange equations it then follows that:
d(Lz,i)/dt = ∂ℒ/∂θz,i.
Since the Lagrangian is dependent upon the angles of the objects only through the potential, we have:
d(Lz,i)/dt = −∂V/∂θz,i,
which is the torque on the i-th object.
Suppose the system is invariant to rotations, so that the potential is independent of an overall rotation by the angle θz (thus it may depend on the angles of objects only through their differences, in the form V(θz,i, θz,j) = V(θz,i − θz,j)). We therefore get for the total angular momentum:
d(Lz)/dt = Σi d(Lz,i)/dt = −Σi ∂V/∂θz,i = 0.
And thus the angular momentum around the z-axis is conserved.
This analysis can be repeated separately for each axis, giving conservation of the angular momentum vector. However, the angles around the three axes cannot be treated simultaneously as generalized coordinates, since they are not independent; in particular, two angles per point suffice to determine its position. While it is true that in the case of a rigid body, fully describing it requires, in addition to three translational degrees of freedom, also the specification of three rotational degrees of freedom, these cannot be defined as rotations around the Cartesian axes (see Euler angles). This caveat is reflected in quantum mechanics in the non-trivial commutation relations of the different components of the angular momentum operator.
In Hamiltonian formalism
Equivalently, in Hamiltonian mechanics the Hamiltonian can be described as a function of the angular momentum. As before, the part of the kinetic energy related to rotation around the z-axis for the i-th object is:
Lz,i² / (2mi ri²),
which is analogous to the energy dependence upon momentum along the z-axis, pz,i² / (2mi).
Hamilton's equations relate the angle around the z-axis to its conjugate momentum, the angular momentum around the same axis:
dθz/dt = ∂H/∂Lz and dLz/dt = −∂H/∂θz.
The first equation gives
ωz = Lz / (mi ri²), i.e. Lz = mi ri² ωz,
and so we get the same results as in the Lagrangian formalism.
Note that for combining all axes together, we write the kinetic energy as:
Ek = pr²/(2m) + ½ L·I⁻¹·L,
where pr is the momentum in the radial direction, and the moment of inertia I is a 3-dimensional matrix; bold letters stand for 3-dimensional vectors.
For point-like bodies we have:
Ek = pr²/(2m) + L²/(2mr²).
This form of the kinetic energy part of the Hamiltonian is useful in analyzing central potential problems, and is easily transformed to a quantum mechanical framework (e.g. in the hydrogen atom problem).
Angular momentum in orbital mechanics
While in classical mechanics the language of angular momentum can be replaced by Newton's laws of motion, it is particularly useful for motion in a central potential such as planetary motion in the solar system. Thus, the orbit of a planet in the solar system is defined by its energy, angular momentum and angles of the orbit major axis relative to a coordinate frame.
In astrodynamics and celestial mechanics, a massless (or per unit mass) angular momentum is defined[24]
h = r × v,
called specific angular momentum. Note that mass is often unimportant in orbital mechanics calculations, because motion is defined by gravity. The primary body of the system is often so much larger than any bodies in motion about it that the smaller bodies have a negligible gravitational effect on it; it is, in effect, stationary. All bodies are apparently attracted by its gravity in the same way, regardless of mass, and therefore all move approximately the same way under the same conditions.
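A minimal sketch (rounded, illustrative values for an Earth-like circular orbit) computes the specific angular momentum h = r × v:

```python
import numpy as np

# Rough values for Earth about the Sun (circular approximation):
r = np.array([1.496e11, 0.0, 0.0])   # m
v = np.array([0.0, 2.978e4, 0.0])    # m/s

h = np.cross(r, v)                   # specific angular momentum, m²/s
print(h)                             # ≈ [0, 0, 4.455e15], constant along a Kepler orbit
```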
Solid bodies
Angular momentum is also an extremely useful concept for describing rotating rigid bodies such as a gyroscope or a rocky planet. For a continuous mass distribution with density function ρ(r), a differential volume element dV with position vector r within the mass has a mass element dm = ρ(r)dV. Therefore, the infinitesimal angular momentum of this element is:
dL = r × dm v = r × ρ(r)v dV,
and integrating this differential over the volume of the entire mass gives its total angular momentum:
L = ∫V r × ρ(r)v dV.
In the derivation which follows, integrals similar to this can replace the sums for the case of continuous mass.
Collection of particles
The angular momentum of the particles i is the sum of the cross products R × MV + Σi ri × mivi.
For a collection of particles in motion about an arbitrary origin, it is informative to develop the equation of angular momentum by resolving their motion into components about their own center of mass and about the origin. Given,
mi is the mass of particle i,
Ri is the position vector of particle i vs the origin,
Vi is the velocity of particle i vs the origin,
R is the position vector of the center of mass vs the origin,
V is the velocity of the center of mass vs the origin,
ri is the position vector of particle i vs the center of mass,
vi is the velocity of particle i vs the center of mass.
The total mass of the particles is simply their sum,
M = Σi mi.
The position vector of the center of mass is defined by,[25]
MR = Σi miRi.
By inspection,
Ri = R + ri and Vi = V + vi.
The total angular momentum of the collection of particles is the sum of the angular momentum of each particle,
L = Σi (Ri × miVi).    (1)
Expanding Ri,
L = Σi [(R + ri) × miVi] = Σi (R × miVi) + Σi (ri × miVi).
Expanding Vi,
L = Σi [R × mi(V + vi)] + Σi [ri × mi(V + vi)] = Σi (R × miV) + Σi (R × mivi) + Σi (ri × miV) + Σi (ri × mivi).
It can be shown that (see sidebar),
Σi miri = 0 and Σi mivi = 0,
therefore the second and third terms vanish,
L = Σi (R × miV) + Σi (ri × mivi).
The first term can be rearranged,
Σi (R × miV) = R × MV,
and total angular momentum for the collection of particles is finally,[26]
L = R × MV + Σi (ri × mivi).    (2)
The first term is the angular momentum of the center of mass relative to the origin. Similar to the single particle case below, it is the angular momentum of one particle of mass M at the center of mass moving with velocity V. The second term is the angular momentum of the particles moving relative to the center of mass, similar to the fixed center of mass case below. The result is general: the motion of the particles is not restricted to rotation or revolution about the origin or center of mass. The particles need not be individual masses, but can be elements of a continuous distribution, such as a solid body.
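The decomposition can be verified numerically; the following numpy sketch (illustrative, with random particle data) checks that equation (2) reproduces the direct sum Σi Ri × miVi:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
m = rng.uniform(1.0, 3.0, n)            # particle masses
R_i = rng.normal(size=(n, 3))           # positions vs the origin
V_i = rng.normal(size=(n, 3))           # velocities vs the origin

M = m.sum()
R = (m[:, None] * R_i).sum(axis=0) / M  # center of mass
V = (m[:, None] * V_i).sum(axis=0) / M  # center-of-mass velocity
r_i = R_i - R                           # positions vs the center of mass
v_i = V_i - V                           # velocities vs the center of mass

L_total = sum(np.cross(R_i[k], m[k] * V_i[k]) for k in range(n))
L_split = np.cross(R, M * V) + sum(np.cross(r_i[k], m[k] * v_i[k]) for k in range(n))
print(np.allclose(L_total, L_split))    # True
```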
Rearranging equation (2) by vector identities, multiplying both terms by "one", and grouping appropriately,
L = M(R × V) + Σi mi(ri × vi) = R²M (R × V)/R² + Σi ri²mi (ri × vi)/ri²,
gives the total angular momentum of the system of particles in terms of moment of inertia I and angular velocity ω,
L = IR ωR + Σi Ii ωi.    (3)
Single particle case
In the case of a single particle moving about the arbitrary origin,
ri = 0, vi = 0, R = r, V = v, M = m,
and equations (2) and (3) for total angular momentum reduce to,
L = R × mV = IR ωR.
Case of a fixed center of mass
For the case of the center of mass fixed in space with respect to the origin,
V = 0,
and equations (2) and (3) for total angular momentum reduce to,
L = Σi (ri × mivi) = Σi Ii ωi.
Angular momentum in general relativity
The 3-angular momentum as a bivector (plane element) and axial vector, of a particle of mass m with instantaneous 3-position x and 3-momentum p.
In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum; see below) is described using a different formalism, instead of a classical pseudovector. In this formalism, angular momentum is the 2-form Noether charge associated with rotational invariance. As a result, angular momentum is not conserved for general curved spacetimes, unless it happens to be asymptotically rotationally invariant.
In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element:
L = x ∧ p,
in which the exterior product ∧ replaces the cross product × (these products have similar characteristics but are nonequivalent). This has the advantage of a clearer geometric interpretation as a plane element, defined from the x and p vectors, and the expression is true in any number of dimensions (two or higher). In Cartesian coordinates:
L = (xpy − ypx) ex∧ey + (ypz − zpy) ey∧ez + (zpx − xpz) ez∧ex,
or more compactly in index notation:
Lij = xipj − xjpi.
The angular velocity can also be defined as an antisymmetric second order tensor, with components ωij. The relation between the two antisymmetric tensors is given by the moment of inertia which must now be a fourth order tensor:[27]
Lij = Iijkl ωkl.
Again, this equation in L and ? as tensors is true in any number of dimensions. This equation also appears in the geometric algebra formalism, in which L and ? are bivectors, and the moment of inertia is a mapping between them.
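A short numpy sketch (illustrative values) shows that in three dimensions the antisymmetric tensor Lij = xipj − xjpi carries exactly the same information as the pseudovector r × p:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # position
p = np.array([0.5, -1.0, 2.0])  # momentum

# Plane-element (2-form) representation: L_ij = x_i p_j - x_j p_i.
L_tensor = np.outer(x, p) - np.outer(p, x)

# The three independent components reproduce the cross product x × p:
L_vector = np.array([L_tensor[1, 2], L_tensor[2, 0], L_tensor[0, 1]])
print(L_vector)            # [ 7.  -0.5 -2. ]
print(np.cross(x, p))      # [ 7.  -0.5 -2. ], the same pseudovector
```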
In relativistic mechanics, the relativistic angular momentum of a particle is expressed as an antisymmetric tensor of second order:
Mαβ = XαPβ − XβPα,
in the language of four-vectors, namely the four-position X and the four-momentum P, and absorbs the above L together with the motion of the centre of mass of the particle.
In each of the above cases, for a system of particles, the total angular momentum is just the sum of the individual particle angular momenta, and the centre of mass is for the system.
Angular momentum in quantum mechanics
Angular momentum in quantum mechanics differs in many profound respects from angular momentum in classical mechanics. In relativistic quantum mechanics, it differs even more, in which the above relativistic definition becomes a tensorial operator.
Spin, orbital, and total angular momentum
Angular momenta of a classical object.
• Left: "spin" angular momentum S is really orbital angular momentum of the object at every point.
• Right: extrinsic orbital angular momentum L about an axis.
• Top: the moment of inertia tensor I and angular velocity ω (L is not always parallel to ω).[28]
• Bottom: momentum p and its radial position r from the axis. The total angular momentum (spin plus orbital) is J. For a quantum particle the interpretations are different; particle spin does not have the above interpretation.
The classical definition of angular momentum as L = r × p can be carried over to quantum mechanics, by reinterpreting r as the quantum position operator and p as the quantum momentum operator. L is then an operator, specifically called the orbital angular momentum operator. The components of the angular momentum operator satisfy the commutation relations of the Lie algebra so(3). Indeed, these operators are precisely the infinitesimal action of the rotation group on the quantum Hilbert space.[29] (See also the discussion below of the angular momentum operators as the generators of rotations.)
However, in quantum physics, there is another type of angular momentum, called spin angular momentum, represented by the spin operator S. Almost all elementary particles have nonzero spin.[30] Spin is often depicted as a particle literally spinning around an axis, but this is a misleading and inaccurate picture: spin is an intrinsic property of a particle, unrelated to any sort of motion in space and fundamentally different from orbital angular momentum. All elementary particles have a characteristic spin (possibly zero),[31] for example electrons have "spin 1/2" (this actually means "spin ħ/2"), photons have "spin 1" (this actually means "spin ħ"), and pi-mesons have spin 0.[32]
Finally, there is total angular momentum J, which combines both the spin and orbital angular momentum of all particles and fields. (For one particle, J = L + S.) Conservation of angular momentum applies to J, but not to L or S; for example, the spin-orbit interaction allows angular momentum to transfer back and forth between L and S, with the total remaining constant. Electrons and photons need not have integer-based values for total angular momentum, but can also have fractional values.[33]
In molecules the total angular momentum F is the sum of the rovibronic (orbital) angular momentum N, the electron spin angular momentum S, and the nuclear spin angular momentum I. For electronic singlet states the rovibronic angular momentum is denoted J rather than N. As explained by Van Vleck,[34] the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those for the components about space-fixed axes.
In quantum mechanics, angular momentum is quantized - that is, it cannot vary continuously, but only in "quantum leaps" between certain allowed values. For any system, the following restrictions on measurement results apply, where ħ is the reduced Planck constant and n̂ is any direction vector such as x, y, or z:
If you measure the component Ln̂, the result can be ..., −2ħ, −ħ, 0, ħ, 2ħ, ...;
if you measure Sn̂ or Jn̂, the result can be ..., −3ħ/2, −ħ, −ħ/2, 0, ħ/2, ħ, 3ħ/2, ...;
if you measure L², the result can be ħ²ℓ(ℓ+1), where ℓ = 0, 1, 2, ...;
if you measure S² or J², the result can be ħ²j(j+1), where j = 0, 1/2, 1, 3/2, ....
In this standing wave on a circular string, the circle is broken into exactly 8 wavelengths. A standing wave like this can have 0,1,2, or any integer number of wavelengths around the circle, but it cannot have a non-integer number of wavelengths like 8.3. In quantum mechanics, angular momentum is quantized for a similar reason.
(There are additional restrictions as well, see angular momentum operator for details.)
The reduced Planck constant is tiny by everyday standards, about 10⁻³⁴ J·s, and therefore this quantization does not noticeably affect the angular momentum of macroscopic objects. However, it is very important in the microscopic world. For example, the structure of electron shells and subshells in chemistry is significantly affected by the quantization of angular momentum.
Quantization of angular momentum was first postulated by Niels Bohr in his Bohr model of the atom and was later predicted by Erwin Schrödinger in his Schrödinger equation.
In the definition L = r × p, six operators are involved: the position operators x, y, z, and the momentum operators px, py, pz. However, the Heisenberg uncertainty principle tells us that it is not possible for all six of these quantities to be known simultaneously with arbitrary precision. Therefore, there are limits to what can be known or measured about a particle's angular momentum. It turns out that the best that one can do is to simultaneously measure both the angular momentum vector's magnitude and its component along one axis.
The uncertainty is closely related to the fact that different components of an angular momentum operator do not commute, for example [Lx, Ly] = iħLz. (For the precise commutation relations, see angular momentum operator.)
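These commutation relations can be checked directly with explicit matrices; the following sketch (in units where ħ = 1) uses the standard spin-1 angular momentum matrices and verifies [Lx, Ly] = iħLz:

```python
import numpy as np

hbar = 1.0
s = 1 / np.sqrt(2)

# Spin-1 (l = 1) angular momentum matrices in the |m = 1, 0, -1> basis:
Lx = hbar * s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = hbar * s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Lz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

commutator = Lx @ Ly - Ly @ Lx
print(np.allclose(commutator, 1j * hbar * Lz))   # True: [Lx, Ly] = iħ Lz
```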
Total angular momentum as generator of rotations
As mentioned above, orbital angular momentum L is defined as in classical mechanics: L = r × p, but total angular momentum J is defined in a different, more basic way: J is defined as the "generator of rotations".[35] More specifically, J is defined so that the operator
R(n̂, φ) ≡ exp(−(i/ħ)φ J·n̂)
is the rotation operator that takes any system and rotates it by angle φ about the axis n̂. (The "exp" in the formula refers to the operator exponential.) To put this the other way around, whatever our quantum Hilbert space is, we expect that the rotation group SO(3) will act on it. There is then an associated action of the Lie algebra so(3) of SO(3); the operators describing the action of so(3) on our Hilbert space are the (total) angular momentum operators.
The relationship between the angular momentum operator and the rotation operators is the same as the relationship between Lie algebras and Lie groups in mathematics. The close relationship between angular momentum and rotations is reflected in Noether's theorem that proves that angular momentum is conserved whenever the laws of physics are rotationally invariant.
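The generator picture can be made concrete in the vector (spin-1) representation, where exponentiating the generator reproduces the familiar 3×3 rotation matrix; a sketch (with ħ = 1, using scipy's matrix exponential) follows:

```python
import numpy as np
from scipy.linalg import expm

# Generator of rotations about z in the vector representation:
# (Jz)_jk = -i * eps_{3jk}, with hbar = 1.
Jz = np.array([[0, -1j, 0],
               [1j,  0, 0],
               [0,   0, 0]])

phi = 0.3
R = expm(-1j * phi * Jz)             # exp(-i φ Jz / ħ)

# This equals the ordinary rotation matrix about the z-axis:
R_classical = np.array([[np.cos(phi), -np.sin(phi), 0],
                        [np.sin(phi),  np.cos(phi), 0],
                        [0,            0,           1]])
print(np.allclose(R, R_classical))   # True
```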
Angular momentum in electrodynamics
When describing the motion of a charged particle in an electromagnetic field, the canonical momentum P (derived from the Lagrangian for this system) is not gauge invariant. As a consequence, the canonical angular momentum L = r × P is not gauge invariant either. Instead, the momentum that is physical, the so-called kinetic momentum (used throughout this article), is (in SI units)
p = mv = P − eA,
where e is the electric charge of the particle and A the magnetic vector potential of the electromagnetic field. The gauge-invariant angular momentum, that is kinetic angular momentum, is given by
K = r × (P − eA).
The interplay with quantum mechanics is discussed further in the article on canonical commutation relations.
Angular momentum in optics
In classical Maxwell electrodynamics the Poynting vector S = ε₀c² E × B determines a linear momentum density S/c² = ε₀ E × B of the electromagnetic field.[36]
The angular momentum density vector is given by a vector product as in classical mechanics:[37]
L = r × S/c² = ε₀ r × (E × B).
The above identities are valid locally, i.e. at each point r in space at a given moment t.
History
Newton, in the Principia, hinted at angular momentum in his examples of the First Law of Motion,
A top, whose parts by their cohesion are perpetually drawn aside from rectilinear motions, does not cease its rotation, otherwise than as it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in more free spaces, preserve their motions both progressive and circular for a much longer time.[38]
He did not further investigate angular momentum directly in the Principia,
From such kind of reflexions also sometimes arise the circular motions of bodies about their own centres. But these are cases which I do not consider in what follows; and it would be too tedious to demonstrate every particular that relates to this subject.[39]
However, his geometric proof of the law of areas is an outstanding example of Newton's genius, and indirectly proves angular momentum conservation in the case of a central force.
The Law of Areas
Newton's derivation
Newton's derivation of the area law using geometric means.
As a planet orbits the Sun, the line between the Sun and the planet sweeps out equal areas in equal intervals of time. This had been known since Kepler expounded his second law of planetary motion. Newton derived a unique geometric proof, and went on to show that the attractive force of the Sun's gravity was the cause of all of Kepler's laws.
During the first interval of time, an object is in motion from point A to point B. Undisturbed, it would continue to point c during the second interval. When the object arrives at B, it receives an impulse directed toward point S. The impulse gives it a small added velocity toward S, such that if this were its only velocity, it would move from B to V during the second interval. By the rules of velocity composition, these two velocities add, and point C is found by construction of parallelogram BcCV. Thus the object's path is deflected by the impulse so that it arrives at point C at the end of the second interval. Because the triangles SBc and SBC have the same base SB and the same height Bc or VC, they have the same area. By symmetry, triangle SBc also has the same area as triangle SAB, therefore the object has swept out equal areas SAB and SBC in equal times.
At point C, the object receives another impulse toward S, again deflecting its path during the third interval from d to D. Thus it continues to E and beyond, the triangles SAB, SBc, SBC, SCd, SCD, SDe, SDE all having the same area. Allowing the time intervals to become ever smaller, the path ABCDE approaches indefinitely close to a continuous curve.
Note that because this derivation is geometric, and no specific force is applied, it proves a more general law than Kepler's second law of planetary motion. It shows that the Law of Areas applies to any central force, attractive or repulsive, continuous or non-continuous, or zero.
Conservation of angular momentum in the Law of Areas
The proportionality of angular momentum to the area swept out by a moving object can be understood by realizing that the bases of the triangles, that is, the lines from S to the object, are equivalent to the radius r, and that the heights of the triangles are proportional to the perpendicular component of velocity v?. Hence, if the area swept per unit time is constant, then by the triangular area formula 1/2(base)(height), the product (base)(height) and therefore the product rv? are constant: if r and the base length are decreased, v? and height must increase proportionally. Mass is constant, therefore angular momentum rmv? is conserved by this exchange of distance and velocity.
In the case of triangle SBC, area is equal to 1/2(SB)(VC). Wherever C is eventually located due to the impulse applied at B, the product (SB)(VC), and therefore rmv? remain constant. Similarly so for each of the triangles.
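The same conclusion can be reached numerically: integrating motion under an inverse-square central force (an illustrative simulation, not Newton's construction) shows the areal velocity |r × v|/2 staying constant:

```python
import numpy as np

def accel(r, k=1.0):
    # Attractive inverse-square force toward the center S at the origin.
    return -k * r / np.linalg.norm(r) ** 3

r = np.array([1.0, 0.0])
v = np.array([0.0, 0.8])
dt = 1e-3

rates = []
for _ in range(20000):
    # Leapfrog (velocity Verlet) step: kick, drift, kick.
    v += 0.5 * dt * accel(r)
    r += dt * v
    v += 0.5 * dt * accel(r)
    rates.append(0.5 * abs(r[0] * v[1] - r[1] * v[0]))   # areal velocity

print(min(rates), max(rates))   # both ≈ 0.4: equal areas in equal times
```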
After Newton
Leonhard Euler, Daniel Bernoulli, and Patrick d'Arcy all understood angular momentum in terms of conservation of areal velocity, a result of their analysis of Kepler's second law of planetary motion. It is unlikely that they realized the implications for ordinary rotating matter.[40]
In 1736 Euler, like Newton, touched on some of the equations of angular momentum in his Mechanica without further developing them.[41]
Bernoulli wrote in a 1744 letter of a "moment of rotational motion", possibly the first conception of angular momentum as we now understand it.[42]
In 1799, Pierre-Simon Laplace first realized that a fixed plane was associated with rotation: his invariable plane.
Louis Poinsot in 1803 began representing rotations as a line segment perpendicular to the rotation, and elaborated on the "conservation of moments".
In 1852 Léon Foucault used a gyroscope in an experiment to display the Earth's rotation.
William J. M. Rankine's 1858 Manual of Applied Mechanics defined angular momentum in the modern sense for the first time:
...a line whose length is proportional to the magnitude of the angular momentum, and whose direction is perpendicular to the plane of motion of the body and of the fixed point, and such, that when the motion of the body is viewed from the extremity of the line, the radius-vector of the body seems to have right-handed rotation.
In an 1872 edition of the same book, Rankine stated that "The term angular momentum was introduced by Mr. Hayward,"[43] probably referring to R.B. Hayward's article On a Direct Method of estimating Velocities, Accelerations, and all similar Quantities with respect to Axes moveable in any manner in Space with Applications,[44] which was introduced in 1856, and published in 1864. Rankine was mistaken, as numerous publications feature the term starting in the late 18th to early 19th centuries.[45] However, Hayward's article apparently was the first use of the term and the concept seen by much of the English-speaking world. Before this, angular momentum was typically referred to as "momentum of rotation" in English.[46]
See also
1. ^ de Podesta, Michael (2002). Understanding the Properties of Matter (2nd, illustrated, revised ed.). CRC Press. p. 29. ISBN 978-0-415-25788-6.
2. ^ Wilson, E. B. (1915). Linear Momentum, Kinetic Energy and Angular Momentum. The American Mathematical Monthly. XXII. Ginn and Co., Boston, in cooperation with University of Chicago, et al. p. 190 – via Google books.
3. ^ Worthington, Arthur M. (1906). Dynamics of Rotation. Longmans, Green and Co., London. p. 21 – via Google books.
4. ^ Taylor, John R. (2005). Classical Mechanics. University Science Books, Mill Valley, CA. p. 90. ISBN 978-1-891389-22-1.
5. ^ Dadourian, H. M. (1913). Analytical Mechanics for Students of Physics and Engineering. D. Van Nostrand Company, New York. p. 266 – via Google books.
6. ^ Watson, W. (1912). General Physics. Longmans, Green and Co., New York. p. 33 – via Google books.
7. ^ Barker, George F. (1893). Physics: Advanced Course (4th ed.). Henry Holt and Company, New York. p. 66 – via Google Books.
8. ^ Barker, George F. (1893). Physics: Advanced Course (4th ed.). Henry Holt and Company, New York. pp. 67-68 – via Google Books.
9. ^ Oberg, Erik; et al. (2000). Machinery's Handbook (26th ed.). Industrial Press, Inc., New York. p. 143. ISBN 978-0-8311-2625-4.
10. ^ Watson, W. (1912). General Physics. Longmans, Green and Co., New York. p. 34 – via Google books.
11. ^ Kent, William (1916). The Mechanical Engineers' Pocket Book (9th ed.). John Wiley and Sons, Inc., New York. p. 517 – via Google books.
12. ^ Oberg, Erik; et al. (2000). Machinery's Handbook (26th ed.). Industrial Press, Inc., New York. p. 146. ISBN 978-0-8311-2625-4.
13. ^ Oberg, Erik; et al. (2000). Machinery's Handbook (26th ed.). Industrial Press, Inc., New York. pp. 161-162. ISBN 978-0-8311-2625-4.
14. ^ Kent, William (1916). The Mechanical Engineers' Pocket Book (9th ed.). John Wiley and Sons, Inc., New York. p. 527 – via Google books.
15. ^ Battin, Richard H. (1999). An Introduction to the Mathematics and Methods of Astrodynamics, Revised Edition. American Institute of Aeronautics and Astronautics, Inc. ISBN 978-1-56347-342-5., p. 97
16. ^ Rankine, W. J. M. (1872). A Manual of Applied Mechanics (6th ed.). Charles Griffin and Company, London. p. 507 – via Google books.
17. ^ a b Crew, Henry (1908). The Principles of Mechanics: For Students of Physics and Engineering. Longmans, Green, and Company, New York. p. 88 – via Google books.
18. ^ Worthington, Arthur M. (1906). Dynamics of Rotation. Longmans, Green and Co., London. p. 82 – via Google books.
19. ^ Worthington, Arthur M. (1906). Dynamics of Rotation. Longmans, Green and Co., London. p. 11 – via Google books.
20. ^ Stephenson, F. R.; Morrison, L. V.; Whitrow, G. J. (1984). "Long-term changes in the rotation of the earth - 700 B.C. to A.D. 1980". Philosophical Transactions of the Royal Society. 313 (1524): 47-70. Bibcode:1984RSPTA.313...47S. doi:10.1098/rsta.1984.0082. S2CID 120566848. +2.40 ms/century divided by 36525 days.
21. ^ Dickey, J. O.; et al. (1994). "Lunar Laser Ranging: A Continuing Legacy of the Apollo Program" (PDF). Science. 265 (5171): 482-90, see 486. Bibcode:1994Sci...265..482D. doi:10.1126/science.265.5171.482. PMID 17781305. S2CID 10157934.
22. ^ Landau, L. D.; Lifshitz, E. M. (1995). The classical theory of fields. Course of Theoretical Physics. Oxford, Butterworth-Heinemann. ISBN 978-0-7506-2768-9.
23. ^ Tenenbaum, M., & Pollard, H. (1985). Ordinary Differential Equations: An Elementary Textbook for Students of Mathematics, Engineering and the Sciences.
24. ^ Battin, Richard H. (1999). An Introduction to the Mathematics and Methods of Astrodynamics, Revised Edition. American Institute of Aeronautics and Astronautics, Inc. p. 115. ISBN 978-1-56347-342-5.
25. ^ Wilson, E. B. (1915). Linear Momentum, Kinetic Energy and Angular Momentum. The American Mathematical Monthly. XXII. Ginn and Co., Boston, in cooperation with University of Chicago, et al. p. 188, equation (3) – via Google books.
26. ^ Wilson, E. B. (1915). Linear Momentum, Kinetic Energy and Angular Momentum. The American Mathematical Monthly. XXII. Ginn and Co., Boston, in cooperation with University of Chicago, et al. p. 191, Theorem 8 – via Google books.
27. ^ Synge and Schild, Tensor calculus, Dover publications, 1978 edition, p. 161. ISBN 978-0-486-63612-2.
28. ^ R.P. Feynman; R.B. Leighton; M. Sands (1964). The Feynman Lectures on Physics (Volume 2). Addison-Wesley. pp. 31-7. ISBN 978-0-201-02117-2.
29. ^ Hall 2013 Section 17.3
30. ^ Thaller, Bernd (2005). Advanced Visual Quantum Mechanics (illustrated ed.). Springer Science & Business Media. p. 114. ISBN 978-0-387-27127-9.
31. ^ Veltman, Martinus J G (2018). Facts And Mysteries In Elementary Particle Physics (revised ed.). World Scientific. ISBN 978-981-323-707-0.
32. ^ Strange, Paul (1998). Relativistic Quantum Mechanics: With Applications in Condensed Matter and Atomic Physics (illustrated ed.). Cambridge University Press. p. 64. ISBN 978-0-521-56583-7.
33. ^ Ballantine, K. E.; Donegan, J. F.; Eastham, P. R. (2016). "There are many ways to spin a photon: Half-quantization of a total optical angular momentum". Science Advances. 2 (4): e1501748. Bibcode:2016SciA....2E1748B. doi:10.1126/sciadv.1501748. PMC 5565928. PMID 28861467.
34. ^ J. H. Van Vleck (1951). "The Coupling of Angular Momentum Vectors in Molecules". Rev. Mod. Phys. 23 (3): 213. Bibcode:1951RvMP...23..213V. doi:10.1103/RevModPhys.23.213.
35. ^ Littlejohn, Robert (2011). "Lecture notes on rotations in quantum mechanics" (PDF). Physics 221B Spring 2011. Retrieved 2012.
36. ^ Okulov, A Yu (2008). "Angular momentum of photons and phase conjugation". Journal of Physics B: Atomic, Molecular and Optical Physics. 41 (10): 101001. arXiv:0801.2675. Bibcode:2008JPhB...41j1001O. doi:10.1088/0953-4075/41/10/101001.
37. ^ Okulov, A.Y. (2008). "Optical and Sound Helical structures in a Mandelstam - Brillouin mirror". JETP Letters (in Russian). 88 (8): 561-566. Bibcode:2008JETPL..88..487O. doi:10.1134/s0021364008200046. S2CID 120371573. Archived from the original on 2015-12-22.
38. ^ Newton, Isaac (1803). "Axioms; or Laws of Motion, Law I". The Mathematical Principles of Natural Philosophy. Andrew Motte, translator. H. D. Symonds, London. p. 322 – via Google books.
39. ^ Newton, Axioms; or Laws of Motion, Corollary III
40. ^ see Borrelli, Arianna (2011). "Angular momentum between physics and mathematics" (PDF). for an excellent and detailed summary of the concept of angular momentum through history.
41. ^ Bruce, Ian (2008). "Euler : Mechanica Vol. 1".
42. ^ "Euler's Correspondence with Daniel Bernoulli, Bernoulli to Euler, 04 February, 1744" (PDF). The Euler Archive.
43. ^ Rankine, W. J. M. (1872). A Manual of Applied Mechanics (6th ed.). Charles Griffin and Company, London. p. 506 – via Google books.
44. ^ Hayward, Robert B. (1864). "On a Direct Method of estimating Velocities, Accelerations, and all similar Quantities with respect to Axes moveable in any manner in Space with Applications". Transactions of the Cambridge Philosophical Society. 10: 1. Bibcode:1864TCaPS..10....1H.
45. ^ see, for instance, Gompertz, Benjamin (1818). "On Pendulums vibrating between Cheeks". The Journal of Science and the Arts. III (V): 17 – via Google books.; Herapath, John (1847). Mathematical Physics. Whittaker and Co., London. p. 56 – via Google books.
46. ^ see, for instance, Landen, John (1785). "Of the Rotatory Motion of a Body of any Form whatever". Philosophical Transactions. LXXV (I): 311-332. doi:10.1098/rstl.1785.0016. S2CID 186212814.
External links
|
0567c916e3e38177 | 2019-05-01 - Research Scientist
Personal website of Keita Mikami
I am Keita Mikami, a research scientist at iTHEMS. My research field is partial differential equations, and I work on the linear Schrödinger equation. The main subject in the study of the linear Schrödinger equation is its spectrum.
I have studied the localization-in-direction phenomenon for Schrödinger operators with homogeneous potentials of order zero. Roughly speaking, this is a phenomenon in which a solution to the Schrödinger equation with this class of potentials localizes in direction as time goes to infinity. I have used spectral theory and semiclassical (microlocal) analysis to understand this phenomenon and its applications.
Though my interest comes from mathematics, I would like to understand the physical aspects of Schrödinger equations and find applications of my results in physics, since the Schrödinger equation is the governing equation of quantum mechanics.
From Eigenvalues to Resonances
May 1, 2020, 16:00 - 18:10
Introduction to Schroedinger Operators
July 12, 2019, 16:00 - 18:10 |
b1d0126e7e12d54a | @article{1244, abstract = {Cell polarity refers to a functional spatial organization of proteins that is crucial for the control of essential cellular processes such as growth and division. To establish polarity, cells rely on elaborate regulation networks that control the distribution of proteins at the cell membrane. In fission yeast cells, a microtubule-dependent network has been identified that polarizes the distribution of signaling proteins that restricts growth to cell ends and targets the cytokinetic machinery to the middle of the cell. Although many molecular components have been shown to play a role in this network, it remains unknown which molecular functionalities are minimally required to establish a polarized protein distribution in this system. Here we show that a membrane-binding protein fragment, which distributes homogeneously in wild-type fission yeast cells, can be made to concentrate at cell ends by attaching it to a cytoplasmic microtubule end-binding protein. This concentration results in a polarized pattern of chimera proteins with a spatial extension that is very reminiscent of natural polarity patterns in fission yeast. However, chimera levels fluctuate in response to microtubule dynamics, and disruption of microtubules leads to disappearance of the pattern. Numerical simulations confirm that the combined functionality of membrane anchoring and microtubule tip affinity is in principle sufficient to create polarized patterns. Our chimera protein may thus represent a simple molecular functionality that is able to polarize the membrane, onto which additional layers of molecular complexity may be built to provide the temporal robustness that is typical of natural polarity patterns.}, author = {Recouvreux, Pierre and Sokolowski, Thomas R and Grammoustianou, Aristea and Tenwolde, Pieter and Dogterom, Marileen}, journal = {PNAS}, number = {7}, pages = {1811 -- 1816}, publisher = {National Academy of Sciences}, title = {{Chimera proteins with affinity for membranes and microtubule tips polarize in the membrane of fission yeast cells}}, doi = {10.1073/pnas.1419248113}, volume = {113}, year = {2016}, } @inproceedings{1245, abstract = {To facilitate collaboration in massive online classrooms, instructors must make many decisions. For instance, the following parameters need to be decided when designing a peer-feedback system where students review each others' essays: the number of students each student must provide feedback to, an algorithm to map feedback providers to receivers, constraints that ensure students do not become free-riders (receiving feedback but not providing it), the best times to receive feedback to improve learning etc. While instructors can answer these questions by running experiments or invoking past experience, game-theoretic models with data from online learning platforms can identify better initial designs for further improvements. As an example, we explore the design space of a peer feedback system by modeling it using game theory. Our simulations show that incentivizing students to provide feedback requires the value obtained from receiving a feedback to exceed the cost of providing it by a large factor (greater than 7). 
Furthermore, hiding feedback from low-effort students incentivizes them to provide more feedback.}, author = {Pandey, Vineet and Chatterjee, Krishnendu}, booktitle = {Proceedings of the ACM Conference on Computer Supported Cooperative Work}, location = {San Francisco, CA, USA}, number = {Februar-2016}, pages = {365 -- 368}, publisher = {ACM}, title = {{Game-theoretic models identify useful principles for peer collaboration in online learning platforms}}, doi = {10.1145/2818052.2869122}, volume = {26}, year = {2016}, } @article{1246, abstract = {Near-field imaging is a powerful tool to investigate the complex structure of light at the nanoscale. Recent advances in near-field imaging have indicated the possibility for the complete reconstruction of both electric and magnetic components of the evanescent field. Here we study the electro-magnetic field structure of surface plasmon polariton waves propagating along subwavelength gold nanowires by performing phase- and polarization-resolved near-field microscopy in collection mode. By applying the optical reciprocity theorem, we describe the signal collected by the probe as an overlap integral of the nanowire's evanescent field and the probe's response function. As a result, we find that the probe's sensitivity to the magnetic field is approximately equal to its sensitivity to the electric field. Through rigorous modeling of the nanowire mode as well as the aperture probe response function, we obtain a good agreement between experimentally measured signals and a numerical model. Our findings provide a better understanding of aperture-based near-field imaging of the nanoscopic plasmonic and photonic structures and are helpful for the interpretation of future near-field experiments.}, author = {Kabakova, Irina and De Hoogh, Anouk and Van Der Wel, Ruben and Wulf, Matthias and Le Feber, Boris and Kuipers, Laurens}, journal = {Scientific Reports}, publisher = {Nature Publishing Group}, title = {{Imaging of electric and magnetic fields near plasmonic nanowires}}, doi = {10.1038/srep22665}, volume = {6}, year = {2016}, } @article{1247, abstract = {The shaping of organs in plants depends on the intercellular flow of the phytohormone auxin, of which the directional signaling is determined by the polar subcellular localization of PIN-FORMED (PIN) auxin transport proteins. Phosphorylation dynamics of PIN proteins are affected by the protein phosphatase 2A (PP2A) and the PINOID kinase, which act antagonistically to mediate their apical-basal polar delivery. Here, we identified the ROTUNDA3 (RON3) protein as a regulator of the PP2A phosphatase activity in Arabidopsis thaliana. The RON3 gene was map-based cloned starting from the ron3-1 leaf mutant and found to be a unique, plant-specific gene coding for a protein with high and dispersed proline content. The ron3-1 and ron3-2 mutant phenotypes [i.e., reduced apical dominance, primary root length, lateral root emergence, and growth; increased ectopic stages II, IV, and V lateral root primordia; decreased auxin maxima in indole-3-acetic acid (IAA)-treated root apical meristems; hypergravitropic root growth and response; increased IAA levels in shoot apices; and reduced auxin accumulation in root meristems] support a role for RON3 in auxin biology. The affinity-purified PP2A complex with RON3 as bait suggested that RON3 might act in PIN transporter trafficking. 
Indeed, pharmacological interference with vesicle trafficking processes revealed that single ron3-2 and double ron3-2 rcn1 mutants have altered PIN polarity and endocytosis in specific cells. Our data indicate that RON3 contributes to auxin-mediated development by playing a role in PIN recycling and polarity establishment through regulation of the PP2A complex activity.}, author = {Karampelias, Michael and Neyt, Pia and De Groeve, Steven and Aesaert, Stijn and Coussens, Griet and Rolčík, Jakub and Bruno, Leonardo and De Winne, Nancy and Van Minnebruggen, Annemie and Van Montagu, Marc and Ponce, Maria and Micol, José and Friml, Jirí and De Jaeger, Geert and Van Lijsebettens, Mieke}, journal = {PNAS}, number = {10}, pages = {2768 -- 2773}, publisher = {National Academy of Sciences}, title = {{ROTUNDA3 function in plant development by phosphatase 2A-mediated regulation of auxin transporter recycling}}, doi = {10.1073/pnas.1501343112}, volume = {113}, year = {2016}, } @article{1248, abstract = {Life depends as much on the flow of information as on the flow of energy. Here we review the many efforts to make this intuition precise. Starting with the building blocks of information theory, we explore examples where it has been possible to measure, directly, the flow of information in biological networks, or more generally where information-theoretic ideas have been used to guide the analysis of experiments. Systems of interest range from single molecules (the sequence diversity in families of proteins) to groups of organisms (the distribution of velocities in flocks of birds), and all scales in between. Many of these analyses are motivated by the idea that biological systems may have evolved to optimize the gathering and representation of information, and we review the experimental evidence for this optimization, again across a wide range of scales.}, author = {Tkacik, Gasper and Bialek, William}, journal = {Annual Review of Condensed Matter Physics}, pages = {89 -- 117}, publisher = {Annual Reviews}, title = {{Information processing in living systems}}, doi = {10.1146/annurev-conmatphys-031214-014803}, volume = {7}, year = {2016}, } @article{1249, abstract = {Actin and myosin assemble into a thin layer of a highly dynamic network underneath the membrane of eukaryotic cells. This network generates the forces that drive cell- and tissue-scale morphogenetic processes. The effective material properties of this active network determine large-scale deformations and other morphogenetic events. For example, the characteristic time of stress relaxation (the Maxwell time τM) in the actomyosin sets the timescale of large-scale deformation of the cortex. Similarly, the characteristic length of stress propagation (the hydrodynamic length λ) sets the length scale of slow deformations, and a large hydrodynamic length is a prerequisite for long-ranged cortical flows. Here we introduce a method to determine physical parameters of the actomyosin cortical layer in vivo directly from laser ablation experiments. For this we investigate the cortical response to laser ablation in the one-cell-stage Caenorhabditis elegans embryo and in the gastrulating zebrafish embryo. These responses can be interpreted using a coarse-grained physical description of the cortex in terms of a two-dimensional thin film of an active viscoelastic gel. 
To determine the Maxwell time τM, the hydrodynamic length λ, and the ratio of active stress ζΔμ and per-area friction γ, we evaluated the response to laser ablation in two different ways: by quantifying flow and density fields as a function of space and time, and by determining the time evolution of the shape of the ablated region. Importantly, both methods provide best-fit physical parameters that are in close agreement with each other and that are similar to previous estimates in the two systems. Our method provides an accurate and robust means for measuring physical parameters of the actomyosin cortical layer. It can be useful for investigations of actomyosin mechanics at the cellular scale, but also for providing insights into the active mechanical processes that govern tissue-scale morphogenesis.}, author = {Saha, Arnab and Nishikawa, Masatoshi and Behrndt, Martin and Heisenberg, Carl-Philipp J and Julicher, Frank and Grill, Stephan}, journal = {Biophysical Journal}, number = {6}, pages = {1421 -- 1429}, publisher = {Biophysical Society}, title = {{Determining physical properties of the cell cortex}}, doi = {10.1016/j.bpj.2016.02.013}, volume = {110}, year = {2016}, } @article{1250, abstract = {In bacteria, replicative aging manifests as a difference in growth or survival between the two cells emerging from division. One cell can be regarded as an aging mother with a decreased potential for future survival and division, the other as a rejuvenated daughter. Here, we aimed at investigating some of the processes involved in aging in the bacterium Escherichia coli, where the two types of cells can be distinguished by the age of their cell poles. We found that certain changes in the regulation of the carbohydrate metabolism can affect aging. A mutation in the carbon storage regulator gene, csrA, leads to a dramatically shorter replicative lifespan; csrA mutants stop dividing once their pole exceeds an age of about five divisions. These old-pole cells accumulate glycogen at their old cell poles; after their last division, they do not contain a chromosome, presumably because of spatial exclusion by the glycogen aggregates. The new-pole daughters produced by these aging mothers are born young; they only express the deleterious phenotype once their pole is old. These results demonstrate how manipulations of nutrient allocation can lead to the exclusion of the chromosome and limit replicative lifespan in E. coli, and illustrate how mutations can have phenotypic effects that are specific for cells with old poles. This raises the question of how bacteria can avoid the accumulation of such mutations in their genomes over evolutionary times, and how they can achieve the long replicative lifespans that have recently been reported.}, author = {Boehm, Alex and Arnoldini, Markus and Bergmiller, Tobias and Röösli, Thomas and Bigosch, Colette and Ackermann, Martin}, journal = {PLoS Genetics}, number = {4}, publisher = {Public Library of Science}, title = {{Genetic manipulation of glycogen allocation affects replicative lifespan in E. coli}}, doi = {10.1371/journal.pgen.1005974}, volume = {12}, year = {2016}, } @article{1251, abstract = {Plant growth and architecture are regulated by the polar distribution of the hormone auxin. Polarity and flexibility of this process are provided by constant cycling of auxin transporter vesicles along actin filaments, coordinated by a positive auxin-actin feedback loop. 
Both polar auxin transport and vesicle cycling are inhibited by synthetic auxin transport inhibitors, such as 1-N-naphthylphthalamic acid (NPA), counteracting the effect of auxin; however, underlying targets and mechanisms are unclear. Using NMR, we map the NPA binding surface on the Arabidopsis thaliana ABCB chaperone TWISTED DWARF1 (TWD1). We identify ACTIN7 as a relevant, although likely indirect, TWD1 interactor, and show TWD1-dependent regulation of actin filament organization and dynamics and that TWD1 is required for NPA-mediated actin cytoskeleton remodeling. The TWD1-ACTIN7 axis controls plasma membrane presence of efflux transporters, and as a consequence act7 and twd1 share developmental and physiological phenotypes indicative of defects in auxin transport. These can be phenocopied by NPA treatment or by chemical actin (de)stabilization. We provide evidence that TWD1 determines downstream locations of auxin efflux transporters by adjusting actin filament debundling and dynamizing processes and mediating NPA action on the latter. This function appears to be evolutionarily conserved, since TWD1 expression in budding yeast alters actin polarization and cell polarity and provides NPA sensitivity.}, author = {Zhu, Jinsheng and Bailly, Aurélien and Zwiewka, Marta and Sovero, Valpuri and Di Donato, Martin and Ge, Pei and Oehri, Jacqueline and Aryal, Bibek and Hao, Pengchao and Linnert, Miriam and Burgardt, Noelia and Lücke, Christian and Weiwad, Matthias and Michel, Max and Weiergräber, Oliver and Pollmann, Stephan and Azzarello, Elisa and Mancuso, Stefano and Ferro, Noel and Fukao, Yoichiro and Hoffmann, Céline and Wedlich Söldner, Roland and Friml, Jirí and Thomas, Clément and Geisler, Markus}, journal = {Plant Cell}, number = {4}, pages = {930 -- 948}, publisher = {American Society of Plant Biologists}, title = {{TWISTED DWARF1 mediates the action of auxin transport inhibitors on actin cytoskeleton dynamics}}, doi = {10.1105/tpc.15.00726}, volume = {28}, year = {2016}, } @article{1252, abstract = {We study the homomorphism induced in homology by a closed correspondence between topological spaces, using projections from the graph of the correspondence to its domain and codomain. We provide assumptions under which the homomorphism induced by an outer approximation of a continuous map coincides with the homomorphism induced in homology by the map. In contrast to more classical results we do not require that the projection to the domain have acyclic preimages. Moreover, we show that it is possible to retrieve correct homological information from a correspondence even if some data is missing or perturbed. 
Finally, we describe an application to combinatorial maps that are either outer approximations of continuous maps or reconstructions of such maps from a finite set of data points.}, author = {Harker, Shaun and Kokubu, Hiroshi and Mischaikow, Konstantin and Pilarczyk, Pawel}, journal = {Proceedings of the American Mathematical Society}, number = {4}, pages = {1787 -- 1801}, publisher = {American Mathematical Society}, title = {{Inducing a map on homology from a correspondence}}, doi = {10.1090/proc/12812}, volume = {144}, year = {2016}, } @article{1253, abstract = {This article provides an introduction to the role of microRNAs in the nervous system and outlines their potential involvement in the pathophysiology of schizophrenia, which is hypothesized to arise owing to environmental factors and genetic predisposition.}, author = {Tsai, Lihuei and Siegert, Sandra}, journal = {JAMA Psychiatry}, number = {4}, pages = {409 -- 410}, publisher = {American Medical Association}, title = {{How microRNAs are involved in splitting the mind}}, doi = {10.1001/jamapsychiatry.2015.3144}, volume = {73}, year = {2016}, } @article{1254, abstract = {We use rigorous numerical techniques to compute a lower bound for the exponent of expansivity outside a neighborhood of the critical point for thousands of intervals of parameter values in the quadratic family. We first compute a radius of the critical neighborhood outside which the map is uniformly expanding. This radius is taken as small as possible, yet large enough for our numerical procedure to succeed in proving that the expansivity exponent outside this neighborhood is positive. Then, for each of the intervals, we compute a lower bound for this expansivity exponent, valid for all the parameters in that interval. We illustrate and study the distribution of the radii and the expansivity exponents. The results of our computations are mathematically rigorous. The source code of the software and the results of the computations are made publicly available at http://www.pawelpilarczyk.com/quadratic/.}, author = {Golmakani, Ali and Luzzatto, Stefano and Pilarczyk, Pawel}, journal = {Experimental Mathematics}, number = {2}, pages = {116 -- 124}, publisher = {Taylor and Francis}, title = {{Uniform expansivity outside a critical neighborhood in the quadratic family}}, doi = {10.1080/10586458.2015.1048011}, volume = {25}, year = {2016}, } @article{1255, abstract = {Down syndrome cell adhesion molecule 1 (Dscam1) has wide-reaching and vital neuronal functions, although the role it plays in insect and crustacean immunity is less well understood. In this study, we combine different approaches to understand the roles that Dscam1 plays in fitness-related contexts in two model insect species. Contrary to our expectations, we found no short-term modulation of Dscam1 gene expression after haemocoelic or oral bacterial exposure in Tribolium castaneum, or after haemocoelic bacterial exposure in Drosophila melanogaster. Furthermore, RNAi-mediated Dscam1 knockdown and subsequent bacterial exposure did not reduce T. castaneum survival. However, Dscam1 knockdown in larvae resulted in adult locomotion defects, as well as dramatically reduced fecundity in males and females. We suggest that Dscam1 does not always play a straightforward role in immunity, but strongly influences behaviour and fecundity. 
This study takes a step towards understanding more about the role of this intriguing gene from different phenotypic perspectives.}, author = {Peuß, Robert and Wensing, Kristina and Woestmann, Luisa and Eggert, Hendrik and Milutinovic, Barbara and Sroka, Marlene and Scharsack, Jörn and Kurtz, Joachim and Armitage, Sophie}, journal = {Royal Society Open Science}, number = {4}, publisher = {Royal Society, The}, title = {{Down syndrome cell adhesion molecule 1: Testing for a role in insect immunity, behaviour and reproduction}}, doi = {10.1098/rsos.160138}, volume = {3}, year = {2016}, } @inproceedings{1256, abstract = {Simulink is widely used for model driven development (MDD) of industrial software systems. Typically, Simulink-based development is initiated from Stateflow modeling, followed by simulation, validation and code generation mapped to physical execution platforms. However, recent industrial trends have raised the demand for rigorous verification of safety-critical applications, which is unfortunately challenging for Simulink. In this paper, we present an approach to bridge Stateflow-based model driven development and a well-defined rigorous verification. First, we develop a self-contained toolkit to translate Stateflow models into timed automata, where major advanced modeling features in Stateflow are supported. Taking advantage of the strong verification capability of Uppaal, we can not only find bugs in Stateflow models which are missed by Simulink Design Verifier, but also check more important temporal properties. Next, we customize a runtime verifier for the generated nonintrusive VHDL and C code of the Stateflow model for monitoring. The major strength of the customization is the flexibility to collect and analyze runtime properties with a pure software monitor, which opens more opportunities for engineers to achieve high reliability of the target system compared with the traditional approach that relies only on Simulink Polyspace. We incorporate these two parts into the original Stateflow-based MDD seamlessly. In this way, safety-critical properties are verified both at the model level and at the consistent system implementation level, with the physical execution environment taken into consideration. We apply our approach to a train controller design, and the verified implementation is tested and deployed on a real hardware platform.}, author = {Jiang, Yu and Yang, Yixiao and Liu, Han and Kong, Hui and Gu, Ming and Sun, Jiaguang and Sha, Lui}, location = {Vienna, Austria}, publisher = {IEEE}, title = {{From stateflow simulation to verified implementation: A verification approach and a real-time train controller design}}, doi = {10.1109/RTAS.2016.7461337}, year = {2016}, } @article{1257, abstract = {We consider products of random matrices that are small, independent identically distributed perturbations of a fixed matrix T. Focusing on the eigenvalues of T of a particular size, we obtain a limit to an SDE in a critical scaling. Previous results required T to be a (conjugated) unitary matrix, so it could not have eigenvalues of different modulus. From the result we can also obtain a limit SDE for the Markov process given by the action of the random products on the flag manifold. Applying the result to random Schrödinger operators, we can improve some results by Valko and Virag showing GOE statistics for the rescaled eigenvalue process of a sequence of Anderson models on long boxes. 
In particular, we solve a problem posed in their work.}, author = {Sadel, Christian and Virág, Bálint}, journal = {Communications in Mathematical Physics}, number = {3}, pages = {881 -- 919}, publisher = {Springer}, title = {{A central limit theorem for products of random matrices and GOE statistics for the Anderson model on long boxes}}, doi = {10.1007/s00220-016-2600-4}, volume = {343}, year = {2016}, } @article{1258, abstract = {When plants grow in close proximity, basic resources such as light can become limiting. Under such conditions plants respond to anticipate and/or adapt to the light shortage, a process known as the shade avoidance syndrome (SAS). Following genetic screening using a shade-responsive luciferase reporter line (PHYB:LUC), we identified DRACULA2 (DRA2), which encodes an Arabidopsis homolog of mammalian nucleoporin 98, a component of the nuclear pore complex (NPC). DRA2, together with other nucleoporins, participates positively in the control of the hypocotyl elongation response to plant proximity, a role that can be considered dependent on the nucleocytoplasmic transport of macromolecules (i.e. is transport dependent). In addition, our results reveal a specific role for DRA2 in controlling shade-induced gene expression. We suggest that this novel regulatory role of DRA2 is transport independent and that it might rely on its dynamic localization within and outside of the NPC. These results provide mechanistic insights into how SAS responses are rapidly established by light conditions. They also indicate that nucleoporins have an active role in plant signaling.}, author = {Gallemi Rovira, Marcal and Galstyan, Anahit and Paulišić, Sandi and Then, Christiane and Ferrández Ayela, Almudena and Lorenzo Orts, Laura and Roig Villanova, Irma and Wang, Xuewen and Micol, José and Ponce, Maria and Devlin, Paul and Martínez García, Jaime}, journal = {Development}, number = {9}, pages = {1623 -- 1631}, publisher = {Company of Biologists}, title = {{DRACULA2 is a dynamic nucleoporin with a role in regulating the shade avoidance syndrome in Arabidopsis}}, doi = {10.1242/dev.130211}, volume = {143}, year = {2016}, } @article{1259, abstract = {We consider the Bogolubov–Hartree–Fock functional for a fermionic many-body system with two-body interactions. For suitable interaction potentials that have a strong enough attractive tail in order to allow for two-body bound states, but are otherwise sufficiently repulsive to guarantee stability of the system, we show that in the low-density limit the ground state of this model consists of a Bose–Einstein condensate of fermion pairs. The latter can be described by means of the Gross–Pitaevskii energy functional.}, author = {Bräunlich, Gerhard and Hainzl, Christian and Seiringer, Robert}, journal = {Mathematical Physics, Analysis and Geometry}, number = {2}, publisher = {Springer}, title = {{Bogolubov–Hartree–Fock theory for strongly interacting fermions in the low density limit}}, doi = {10.1007/s11040-016-9209-x}, volume = {19}, year = {2016}, } @article{1260, abstract = {In this work, the Gardner problem of inferring interactions and fields for an Ising neural network from given patterns under a local stability hypothesis is addressed from a dual perspective. By means of duality arguments, an integer linear system is defined whose solution space is the dual of the Gardner space and whose solutions represent mutually unstable patterns. 
We propose and discuss Monte Carlo methods in order to find and remove unstable patterns and uniformly sample the space of interactions thereafter. We illustrate the problem on a set of real data and perform an ensemble calculation that shows how the emergence of a phase dominated by unstable patterns can be triggered in a nonlinear, discontinuous way.}, author = {De Martino, Daniele}, journal = {International Journal of Modern Physics C}, number = {6}, publisher = {World Scientific Publishing}, title = {{The dual of the space of interactions in neural network models}}, doi = {10.1142/S0129183116500674}, volume = {27}, year = {2016}, } @article{1261, abstract = {We consider a non-standard finite-volume discretization of a strongly non-linear fourth order diffusion equation on the d-dimensional cube, for arbitrary d ≥ 1. The scheme preserves two important structural properties of the equation: the first is the interpretation as a gradient flow in a mass transportation metric, and the second is an intimate relation to a linear Fokker-Planck equation. Thanks to these structural properties, the scheme possesses two discrete Lyapunov functionals. These functionals approximate the entropy and the Fisher information, respectively, and their dissipation rates converge to the optimal ones in the discrete-to-continuous limit. Using the dissipation, we derive estimates on the long-time asymptotics of the discrete solutions. Finally, we present results from numerical experiments which indicate that our discretization is able to capture significant features of the complex original dynamics, even with a rather coarse spatial resolution.}, author = {Maas, Jan and Matthes, Daniel}, journal = {Nonlinearity}, number = {7}, pages = {1992 -- 2023}, publisher = {IOP Publishing Ltd.}, title = {{Long-time behavior of a finite volume discretization for a fourth order diffusion equation}}, doi = {10.1088/0951-7715/29/7/1992}, volume = {29}, year = {2016}, } @article{1262, abstract = {Emerging infectious diseases (EIDs) have contributed significantly to the current biodiversity crisis, leading to widespread epidemics and population loss. Owing to genetic variation in pathogen virulence, a complete understanding of species decline requires the accurate identification and characterization of EIDs. We explore this issue in the Western honeybee, where increasing mortality of populations in the Northern Hemisphere has caused major concern. Specifically, we investigate the importance of genetic identity of the main suspect in mortality, deformed wing virus (DWV), in driving honeybee loss. Using laboratory experiments and a systematic field survey, we demonstrate that an emerging DWV genotype (DWV-B) is more virulent than the established DWV genotype (DWV-A) and is widespread in the landscape. Furthermore, we show in a simple model that colonies infected with DWV-B collapse sooner than colonies infected with DWV-A. We also identify potential for rapid DWV evolution by revealing extensive genome-wide recombination in vivo. The emergence of DWV-B in naive honeybee populations, including via recombination with DWV-A, could be of significant ecological and economic importance. 
Our findings emphasize that knowledge of pathogen genetic identity and diversity is critical to understanding drivers of species decline.}, author = {Mcmahon, Dino and Natsopoulou, Myrsini and Doublet, Vincent and Fürst, Matthias and Weging, Silvio and Brown, Mark and Gogol Döring, Andreas and Paxton, Robert}, journal = {Proceedings of the Royal Society of London Series B Biological Sciences}, number = {1833}, publisher = {Royal Society, The}, title = {{Elevated virulence of an emerging viral genotype as a driver of honeybee loss}}, doi = {10.1098/rspb.2016.0811}, volume = {283}, year = {2016}, } @article{1263, abstract = {Linking classical microwave electrical circuits to the optical telecommunication band is at the core of modern communication. Future quantum information networks will require coherent microwave-to-optical conversion to link electronic quantum processors and memories via low-loss optical telecommunication networks. Efficient conversion can be achieved with electro-optical modulators operating at the single microwave photon level. In the standard electro-optic modulation scheme, this is impossible because both up- and down-converted sidebands are necessarily present. Here, we demonstrate true single-sideband up- or down-conversion in a triply resonant whispering gallery mode resonator by explicitly addressing modes with asymmetric free spectral range. Compared to previous experiments, we show a 3 orders of magnitude improvement of the electro-optical conversion efficiency, reaching 0.1% photon number conversion for a 10 GHz microwave tone at 0.42 mW of optical pump power. The presented scheme is fully compatible with existing superconducting 3D circuit quantum electrodynamics technology and can be used for nonclassical state conversion and communication. Our conversion bandwidth is larger than 1 MHz and is not fundamentally limited.}, author = {Rueda, Alfredo and Sedlmeir, Florian and Collodo, Michele and Vogl, Ulrich and Stiller, Birgit and Schunk, Gerhard and Strekalov, Dmitry and Marquardt, Christoph and Fink, Johannes M and Painter, Oskar and Leuchs, Gerd and Schwefel, Harald}, journal = {Optica}, number = {6}, pages = {597 -- 604}, publisher = {OSA Publishing}, title = {{Efficient microwave to optical photon conversion: An electro-optical realization}}, doi = {10.1364/OPTICA.3.000597}, volume = {3}, year = {2016}, } @article{1264, abstract = {In contrast with the wealth of recent reports about the function of μ-adaptins and clathrin adaptor protein (AP) complexes, there is very little information about the motifs that determine the sorting of membrane proteins within clathrin-coated vesicles in plants. Here, we investigated putative sorting signals in the large cytosolic loop of the Arabidopsis (Arabidopsis thaliana) PIN-FORMED1 (PIN1) auxin transporter, which are involved in binding μ-adaptins and thus in PIN1 trafficking and localization. We found that Phe-165 and Tyr-280, Tyr-328, and Tyr-394 are involved in the binding of different μ-adaptins in vitro. However, only Phe-165, which binds μA(μ2)- and μD(μ3)-adaptin, was found to be essential for PIN1 trafficking and localization in vivo. The PIN1:GFP-F165A mutant showed reduced endocytosis but also localized to intracellular structures containing several layers of membranes and endoplasmic reticulum (ER) markers, suggesting that they correspond to ER or ER-derived membranes. 
While PIN1:GFP localized normally in a μA (μ2)-adaptin mutant, it accumulated in big intracellular structures containing LysoTracker in a μD (μ3)-adaptin mutant, consistent with previous results obtained with mutants of other subunits of the AP-3 complex. Our data suggest that Phe-165, through the binding of μA (μ2)- and μD (μ3)-adaptin, is important for PIN1 endocytosis and for PIN1 trafficking along the secretory pathway, respectively.}, author = {Sancho Andrés, Gloria and Soriano Ortega, Esther and Gao, Caiji and Bernabé Orts, Joan and Narasimhan, Madhumitha and Müller, Anna and Tejos, Ricardo and Jiang, Liwen and Friml, Jirí and Aniento, Fernando and Marcote, Maria}, journal = {Plant Physiology}, number = {3}, pages = {1965 -- 1982}, publisher = {American Society of Plant Biologists}, title = {{Sorting motifs involved in the trafficking and localization of the PIN1 auxin efflux carrier}}, doi = {10.1104/pp.16.00373}, volume = {171}, year = {2016}, } @article{1265, abstract = {Extracellular matrices (ECMs) are central to the advent of multicellular life, and their mechanical properties are modulated by and impinge on intracellular signaling pathways that regulate vital cellular functions. High spatial-resolution mapping of mechanical properties in live cells is, however, extremely challenging. Thus, our understanding of how signaling pathways process physiological signals to generate appropriate mechanical responses is limited. We introduce fluorescence emission-Brillouin scattering imaging (FBi), a method for the parallel and all-optical measurements of mechanical properties and fluorescence at the submicrometer scale in living organisms. Using FBi, we showed that changes in cellular hydrostatic pressure and cytoplasm viscoelasticity modulate the mechanical signatures of plant ECMs. We further established that the measured "stiffness" of plant ECMs is symmetrically patterned in hypocotyl cells undergoing directional growth. Finally, application of this method to Arabidopsis thaliana photoreceptor mutants revealed that red and far-red light signals are essential modulators of ECM viscoelasticity. By mapping the viscoelastic signatures of a complex ECM, we provide proof of principle for the organism-wide applicability of FBi for measuring the mechanical outputs of intracellular signaling pathways. As such, our work has implications for investigations of mechanosignaling pathways and developmental biology.}, author = {Elsayad, Kareem and Werner, Stephanie and Gallemi Rovira, Marcal and Kong, Jixiang and Guajardo, Edmundo and Zhang, Lijuan and Jaillais, Yvon and Greb, Thomas and Belkhadir, Youssef}, journal = {Science Signaling}, number = {435}, publisher = {American Association for the Advancement of Science}, title = {{Mapping the subcellular mechanical properties of live cells in tissues with fluorescence emission-Brillouin imaging}}, doi = {10.1126/scisignal.aaf6326}, volume = {9}, year = {2016}, } @article{1266, abstract = {Cortical networks exhibit ‘global oscillations’, in which neural spike times are entrained to an underlying oscillatory rhythm, but where individual neurons fire irregularly, on only a fraction of cycles. While the network dynamics underlying global oscillations have been well characterised, their function is debated. Here, we show that such global oscillations are a direct consequence of optimal efficient coding in spiking networks with synaptic delays and noise. To avoid firing unnecessary spikes, neurons need to share information about the network state. 
Ideally, membrane potentials should be strongly correlated and reflect a ‘prediction error’ while the spikes themselves are uncorrelated and occur rarely. We show that the most efficient representation is when: (i) spike times are entrained to a global Gamma rhythm (implying a consistent representation of the error); but (ii) few neurons fire on each cycle (implying high efficiency), while (iii) excitation and inhibition are tightly balanced. This suggests that cortical networks exhibiting such dynamics are tuned to achieve a maximally efficient population code.}, author = {Chalk, Matthew J and Gutkin, Boris and Denève, Sophie}, journal = {eLife}, number = {2016JULY}, publisher = {eLife Sciences Publications}, title = {{Neural oscillations as a signature of efficient coding in the presence of synaptic delays}}, doi = {10.7554/eLife.13824}, volume = {5}, year = {2016}, } @article{1267, abstract = {We give a simplified proof of the nonexistence of large nuclei in the liquid drop model and provide an explicit bound. Our bound is within a factor of 2.3 of the conjectured value and seems to be the first quantitative result.}, author = {Frank, Rupert and Killip, Rowan and Nam, Phan}, journal = {Letters in Mathematical Physics}, number = {8}, pages = {1033 -- 1036}, publisher = {Springer}, title = {{Nonexistence of large nuclei in the liquid drop model}}, doi = {10.1007/s11005-016-0860-8}, volume = {106}, year = {2016}, } @article{1268, author = {Milutinovic, Barbara and Kurtz, Joachim}, journal = {Seminars in Immunology}, number = {4}, pages = {328 -- 342}, publisher = {Academic Press}, title = {{Immune memory in invertebrates}}, doi = {10.1016/j.smim.2016.05.004}, volume = {28}, year = {2016}, } @article{1269, abstract = {Plants are continuously exposed to a myriad of external signals such as fluctuating nutrient availability, drought, heat, cold, high salinity, or pathogen/pest attacks that can severely affect their development, growth, and fertility. As sessile organisms, plants must therefore be able to sense and rapidly react to these external inputs, activate efficient responses, and adjust development to changing conditions. In recent years, significant progress has been made towards understanding the molecular mechanisms underlying the intricate and complex communication between plants and the environment. It is now becoming increasingly evident that hormones have an important regulatory role in plant adaptation and defense mechanisms.}, author = {Benková, Eva}, journal = {Plant Molecular Biology}, number = {6}, pages = {597}, publisher = {Springer}, title = {{Plant hormones in interactions with the environment}}, doi = {10.1007/s11103-016-0501-8}, volume = {91}, year = {2016}, } @article{896, abstract = {Multicellular eukaryotes have evolved a range of mechanisms for immune recognition. A widespread family involved in innate immunity are the NACHT-domain and leucine-rich-repeat-containing (NLR) proteins. Mammals have small numbers of NLR proteins, whereas in some species, mostly those without adaptive immune systems, NLRs have expanded into very large families. We describe a family of nearly 400 NLR proteins encoded in the zebrafish genome. The proteins share a defining overall structure, which arose in fishes after a fusion of the core NLR domains with a B30.2 domain, but can be subdivided into four groups based on their NACHT domains. Gene conversion acting differentially on the NACHT and B30.2 domains has shaped the family and created the groups. 
Evidence of positive selection in the B30.2 domain indicates that this domain, rather than the leucine-rich repeats, acts as the pathogen recognition module. In an unusual chromosomal organization, the majority of the genes are located on one chromosome arm, interspersed with other large multigene families, including a new family encoding zinc-finger proteins. The NLR-B30.2 proteins represent a new family with diversity in the specific recognition module that is present in fishes in spite of the parallel existence of an adaptive immune system.}, author = {Howe, Kerstin L and Schiffer, Philipp H and Zielinski, Julia G and Wiehe, Thomas H and Laird, Gavin K and Marioni, John C and Soylemez, Onuralp and Kondrashov, Fyodor and Leptin, Maria}, journal = {Open Biology}, number = {4}, publisher = {Royal Society, The}, title = {{Structure and evolutionary history of a large family of NLR proteins in the zebrafish}}, doi = {10.1098/rsob.160009}, volume = {6}, year = {2016}, } @article{9019, abstract = {Targeting protein–protein interactions has long been considered a very difficult, if not impossible, task, but over the past decade, front lines have moved. The number of successful examples is exponentially growing. This review presents a rapid overview of recent advances in this field considering the strengths and weaknesses of the small molecule approaches and alternative strategies such as the selection or design of artificial antibodies, peptides or peptidomimetics.}, author = {Bakail, May M and Ochsenbein, Francoise}, issn = {1631-0748}, journal = {Comptes Rendus Chimie}, keywords = {General Chemistry, General Chemical Engineering}, number = {1-2}, pages = {19--27}, publisher = {Elsevier}, title = {{Targeting protein–protein interactions, a wide open field for drug design}}, doi = {10.1016/j.crci.2015.12.004}, volume = {19}, year = {2016}, } @article{9051, abstract = {Biological systems often involve the self-assembly of basic components into complex and functioning structures. Artificial systems that mimic such processes can provide a well-controlled setting to explore the principles involved and also synthesize useful micromachines. Our experiments show that immotile, but active, components self-assemble into two types of structure that exhibit the fundamental forms of motility: translation and rotation. Specifically, micron-scale metallic rods are designed to induce extensile surface flows in the presence of a chemical fuel; these rods interact with each other and pair up to form either a swimmer or a rotor. Such pairs can transition reversibly between these two configurations, leading to kinetics reminiscent of bacterial run-and-tumble motion.}, author = {Davies Wykes, Megan S. and Palacci, Jérémie A and Adachi, Takuji and Ristroph, Leif and Zhong, Xiao and Ward, Michael D. and Zhang, Jun and Shelley, Michael J.}, issn = {1744-6848}, journal = {Soft Matter}, number = {20}, pages = {4584--4589}, publisher = {Royal Society of Chemistry}, title = {{Dynamic self-assembly of microscale rotors and swimmers}}, doi = {10.1039/c5sm03127c}, volume = {12}, year = {2016}, } @article{9052, abstract = {We describe colloidal Janus particles with metallic and dielectric faces that swim vigorously when illuminated by defocused optical tweezers without consuming any chemical fuel. Rather than wandering randomly, these optically-activated colloidal swimmers circulate back and forth through the beam of light, tracing out sinuous rosette patterns. 
We propose a model for this mode of light-activated transport that accounts for the observed behavior through a combination of self-thermophoresis and optically-induced torque. In the deterministic limit, this model yields trajectories that resemble rosette curves known as hypotrochoids.}, author = {Moyses, Henrique and Palacci, Jérémie A and Sacanna, Stefano and Grier, David G.}, issn = {1744-6848}, journal = {Soft Matter}, keywords = {General Chemistry, Condensed Matter Physics}, number = {30}, pages = {6357--6364}, publisher = {Royal Society of Chemistry}, title = {{Trochoidal trajectories of self-propelled Janus particles in a diverging laser beam}}, doi = {10.1039/c6sm01163b}, volume = {12}, year = {2016}, } @article{9140, abstract = {Expected changes to future extreme precipitation remain a key uncertainty associated with anthropogenic climate change. Extreme precipitation has been proposed to scale with the precipitable water content in the atmosphere. Assuming constant relative humidity, this implies an increase of precipitation extremes at a rate of about 7% °C−1 globally as indicated by the Clausius-Clapeyron relationship. Increases faster and slower than Clausius-Clapeyron have also been reported. In this work, we examine the scaling between precipitation extremes and temperature in the present climate using simulations and measurements from surface weather stations collected in the frame of the HyMeX and MED-CORDEX programs in Southern France. Of particular interest are departures from the Clausius-Clapeyron thermodynamic expectation, their spatial and temporal distribution, and their origin. Looking at the scaling of precipitation extremes with temperature, two regimes emerge which form a hook shape: one at low temperatures (cooler than around 15°C) with rates of increase close to the Clausius-Clapeyron rate and one at high temperatures (warmer than about 15°C) with sub-Clausius-Clapeyron rates and most often negative rates. On average, the region of focus does not seem to exhibit super Clausius-Clapeyron behavior except at some stations, in contrast to earlier studies. Many factors can contribute to departure from Clausius-Clapeyron scaling: time and spatial averaging, choice of scaling temperature (surface versus condensation level), and precipitation efficiency and vertical velocity in updrafts that are not necessarily constant with temperature. But most importantly, the dynamical contribution of orography to precipitation in the fall over this area during the so-called “Cevenoles” events explains the hook shape of the scaling of precipitation extremes.}, author = {Drobinski, P. and Alonzo, B. and Bastin, S. and Silva, N. Da and Muller, Caroline J}, issn = {2169-897X}, journal = {Journal of Geophysical Research: Atmospheres}, number = {7}, pages = {3100--3119}, publisher = {American Geophysical Union}, title = {{Scaling of precipitation extremes with temperature in the French Mediterranean region: What explains the hook shape?}}, doi = {10.1002/2015jd023497}, volume = {121}, year = {2016}, } @article{92, abstract = {Advanced organic nonlinear optical (NLO) materials have attracted increasing attention due to their multitude of applications in modern telecommunication devices. Arguably the most important advantage of organic NLO materials, relative to traditionally used inorganic NLO materials, is their short optical response time. 
Geminal amido esters with their donor-π-acceptor (D-π-A) architecture exhibit high levels of electron delocalization and substantial intramolecular charge transfer, which should endow these materials with short optical response times and large molecular (hyper)polarizabilities. In order to test this hypothesis, the linear and second-order nonlinear optical properties of five geminal amido esters, (E)-ethyl 3-(X-phenylamino)-2-(Y-phenylcarbamoyl)acrylate (1, X = 4-H, Y = 4-H; 2, X = 4-CH3, Y = 4-CH3; 3, X = 4-NO2, Y = 2,5-OCH3; 4, X = 2-Cl, Y = 2-Cl; 5, X = 4-Cl, Y = 4-Cl) were synthesized and characterized, whereby NLO structure-function relationships were established including intramolecular charge transfer characteristics, crystal field effects, and molecular first hyperpolarizabilities (β). Given the typically large errors (10-30%) associated with the determination of β coefficients, three independent methods were used: (i) density functional theory, (ii) hyper-Rayleigh scattering, and (iii) high-resolution X-ray diffraction data analysis based on multipolar modeling of electron densities at each atom. These three methods delivered consistent values of β, and based on these results, 3 should hold the most promise for NLO applications. The correlation between the molecular structure of these geminal amido esters and their linear and nonlinear optical properties thus provide molecular design guidelines for organic NLO materials; this leads to the ultimate goal of generating bespoke organic molecules to suit a given NLO device application.}, author = {Cole, Jaqueline and Lin, Tzechia and Ashcroft, Christopher and Pérez Moreno, Javier and Tan, Yizhou and Venkatesan, Perumal and Higginbotham, Andrew P and Pattison, Philip and Edwards, Alison and Piltz, Ross and Clays, Koen and Ilangovan, Andivelu}, journal = {Journal of Physical Chemistry C}, number = {51}, pages = {29439 -- 29448}, publisher = {American Chemical Society}, title = {{Relating the structure of geminal Amido Esters to their molecular hyperpolarizability}}, doi = {10.1021/acs.jpcc.6b10724}, volume = {120}, year = {2016}, } @article{930, abstract = {The changes in cell dynamics after oncogenic mutation that lead to the development of tumours are currently unknown. Here, using skin epidermis as a model, we assessed the effect of oncogenic hedgehog signalling in distinct cell populations and their capacity to induce basal cell carcinoma, the most frequent cancer in humans. We found that only stem cells, and not progenitors, initiated tumour formation upon oncogenic hedgehog signalling. This difference was due to the hierarchical organization of tumour growth in oncogene-targeted stem cells, characterized by an increase in symmetric self-renewing divisions and a higher p53-dependent resistance to apoptosis, leading to rapid clonal expansion and progression into invasive tumours. 
Our work reveals that the capacity of oncogene-targeted cells to induce tumour formation is dependent not only on their long-term survival and expansion, but also on the specific clonal dynamics of the cancer cell of origin.}, author = {Sánchez Danés, Adriana and Hannezo, Edouard B and Larsimont, Jean and Liagre, Mélanie and Youssef, Khalil and Simons, Benjamin and Blanpain, Cédric}, journal = {Nature}, number = {7616}, pages = {298 -- 303}, publisher = {Nature Publishing Group}, title = {{Defining the clonal dynamics leading to mouse skin tumour initiation}}, doi = {10.1038/nature19069}, volume = {536}, year = {2016}, } @article{931, abstract = {In many adult tissues, stem cells and differentiated cells are not homogeneously distributed: stem cells are arranged in periodic "niches," and differentiated cells are constantly produced and migrate out of these niches. In this article, we provide a general theoretical framework to study mixtures of dividing and actively migrating particles, which we apply to biological tissues. We show in particular that the interplay between the stresses arising from active cell migration and stem cell division give rise to robust stem cell patterns. The instability of the tissue leads to spatial patterns which are either steady or oscillating in time. The wavelength of the instability has an order of magnitude consistent with the biological observations. We also discuss the implications of these results for future in vitro and in vivo experiments.}, author = {Hannezo, Edouard B and Coucke, Alice and Joanny, Jean}, journal = {Physical Review E Statistical Nonlinear and Soft Matter Physics}, number = {2}, publisher = {American Institute of Physics}, title = {{Interplay of migratory and division forces as a generic mechanism for stem cell patterns}}, doi = {10.1103/PhysRevE.93.022405}, volume = {93}, year = {2016}, } @article{932, abstract = {Epithelial sheets are crucial components of all metazoan animals, enclosing organs and protecting the animal from its environment. Epithelial homeostasis poses unique challenges, as addition of new cells and loss of old cells must be achieved without disrupting the fluid-tight barrier and apicobasal polarity of the epithelium. Several studies have identified cell biological mechanisms underlying extrusion of cells from epithelia, but far less is known of the converse mechanism by which new cells are added. Here, we combine molecular, pharmacological, and laser-dissection experiments with theoretical modeling to characterize forces driving emergence of an apical surface as single nascent cells are added to a vertebrate epithelium in vivo. We find that this process involves the interplay between cell-autonomous actin-generated pushing forces in the emerging cell and mechanical properties of neighboring cells. Our findings define the forces driving this cell behavior, contributing to a more comprehensive understanding of epithelial homeostasis.}, author = {Sedzinski, Jakub and Hannezo, Edouard B and Tu, Fan and Biro, Maté and Wallingford, John}, journal = {Developmental Cell}, number = {1}, pages = {24 -- 35}, publisher = {Cell Press}, title = {{Emergence of an Apical Epithelial Cell Surface In Vivo}}, doi = {10.1016/j.devcel.2015.12.013}, volume = {36}, year = {2016}, } @inproceedings{948, abstract = {Experience constantly shapes neural circuits through a variety of plasticity mechanisms. 
While the functional roles of some plasticity mechanisms are well-understood, it remains unclear how changes in neural excitability contribute to learning. Here, we develop a normative interpretation of intrinsic plasticity (IP) as a key component of unsupervised learning. We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities. We analytically show that inference and learning for our generative model can be achieved by a neural circuit with intensity-sensitive neurons equipped with a specific form of IP. Numerical experiments verify our analytical derivations and show robust behavior for artificial and natural stimuli. Our results link IP to non-trivial input statistics, in particular the statistics of stimulus intensities for classes to which a neuron is sensitive. More generally, our work paves the way toward new classification algorithms that are robust to intensity variations.}, author = {Monk, Travis and Savin, Cristina and Lücke, Jörg}, location = {Barcelona, Spain}, pages = {4285 -- 4293}, publisher = {Neural Information Processing Systems}, title = {{Neurons equipped with intrinsic plasticity learn stimulus intensity statistics}}, volume = {29}, year = {2016}, } @article{983, abstract = {The half-filled Landau level is expected to be approximately particle-hole symmetric, which requires an extension of the Halperin-Lee-Read (HLR) theory of the compressible state observed at this filling. Recent work indicates that, when particle-hole symmetry is preserved, the composite fermions experience a quantized π-Berry phase upon winding around the composite Fermi surface, analogous to Dirac fermions at the surface of a 3D topological insulator. In contrast, the effective low-energy theory of the composite fermion liquid originally proposed by HLR lacks particle-hole symmetry and has vanishing Berry phase. In this paper, we explain how thermoelectric transport measurements can be used to test the Dirac nature of the composite fermions by quantitatively extracting this Berry phase. First, we point out that longitudinal thermopower (Seebeck effect) is nonvanishing because of the unusual nature of particle-hole symmetry in this context and is not sensitive to the Berry phase. In contrast, we find that off-diagonal thermopower (Nernst effect) is directly related to the topological structure of the composite Fermi surface, vanishing for zero Berry phase and taking its maximal value for π Berry phase. In contrast, in purely electrical transport signatures, the Berry phase contributions appear as small corrections to a large background signal, making the Nernst effect a promising diagnostic of the Dirac nature of composite fermions.}, author = {Potter, Andrew C and Serbyn, Maksym and Vishwanath, Ashvin K}, journal = {Physical Review X}, number = {3}, publisher = {American Physical Society}, title = {{Thermoelectric transport signatures of Dirac composite fermions in the half-filled Landau level}}, doi = {10.1103/PhysRevX.6.031026}, volume = {6}, year = {2016}, } @article{984, abstract = {The entanglement spectrum of the reduced density matrix contains information beyond the von Neumann entropy and provides unique insights into exotic orders or critical behavior of quantum systems. 
Here, we show that strongly disordered systems in the many-body localized phase have power-law entanglement spectra, arising from the presence of extensively many local integrals of motion. The power-law entanglement spectrum distinguishes many-body localized systems from ergodic systems, as well as from ground states of gapped integrable models or free systems in the vicinity of scale-invariant critical points. We confirm our results using large-scale exact diagonalization. In addition, we develop a matrix-product state algorithm which allows us to access the eigenstates of large systems close to the localization transition, and discuss general implications of our results for variational studies of highly excited eigenstates in many-body localized systems.}, author = {Serbyn, Maksym and Michailidis, Alexios and Abanin, Dmitry A and Papić, Zlatko}, journal = {Physical Review Letters}, number = {16}, publisher = {American Physical Society}, title = {{Power-law entanglement spectrum in many-body localized phases}}, doi = {10.1103/PhysRevLett.117.160601}, volume = {117}, year = {2016}, } @article{985, abstract = {We report on magnetotransport studies of dual-gated, Bernal-stacked trilayer graphene (TLG) encapsulated in boron nitride crystals. We observe a quantum Hall effect staircase which indicates a complete lifting of the 12-fold degeneracy of the zeroth Landau level. As a function of perpendicular electric field, our data exhibit a sequence of phase transitions between all integer quantum Hall states in the filling factor interval −8 < ν < 0. We develop a theoretical model and argue that, in contrast to monolayer and bilayer graphene, the observed Landau level splittings and quantum Hall phase transitions can be understood within a single-particle picture, but imply the presence of a charge density imbalance between the inner and outer layers of TLG, even at charge neutrality and zero transverse electric field. Our results indicate the importance of a previously unaccounted-for band structure parameter which, together with a more accurate estimate of the other tight-binding parameters, results in a significantly improved determination of the electronic and Landau level structure of TLG.}, author = {Campos, Leonardo C and Taychatanapat, Thiti and Serbyn, Maksym and Surakitbovorn, Kawin N and Watanabe, Kenji and Taniguchi, Takashi and Abanin, Dmitry A and Jarillo-Herrero, Pablo}, journal = {Physical Review Letters}, number = {6}, publisher = {American Physical Society}, title = {{Landau Level Splittings, Phase Transitions, and Nonuniform Charge Distribution in Trilayer Graphene}}, doi = {10.1103/PhysRevLett.117.066601}, volume = {117}, year = {2016}, } @article{986, abstract = {The many-body localization transition (MBLT) between ergodic and many-body localized phases in disordered interacting systems is a subject of much recent interest. The statistics of eigenenergies is known to be a powerful probe of crossovers between ergodic and integrable systems in simpler examples of quantum chaos. We consider the evolution of the spectral statistics across the MBLT, starting with a mapping to a Brownian motion process that analytically relates the spectral properties to the statistics of matrix elements. We demonstrate that the flow from Wigner-Dyson to Poisson statistics is a two-stage process. First, a fractal enhancement of matrix elements upon approaching the MBLT from the delocalized side produces an effective power-law interaction between energy levels, and leads to a plasma model for level statistics. 
At the second stage, the gas of eigenvalues has local interactions and the level statistics belongs to a semi-Poisson universality class. We verify our findings numerically on the XXZ spin chain. We provide a microscopic understanding of the level statistics across the MBLT and discuss implications for the transition, which place strong constraints on possible theories.}, author = {Serbyn, Maksym and Moore, Joel E}, journal = {Physical Review B - Condensed Matter and Materials Physics}, number = {4}, publisher = {American Physical Society}, title = {{Spectral statistics across the many-body localization transition}}, doi = {10.1103/PhysRevB.93.041424}, volume = {93}, year = {2016}, } @article{987, abstract = {In contrast to bulk FeSe, which exhibits nematic order and low temperature superconductivity, highly doped FeSe reverses the situation, having high temperature superconductivity appearing alongside a suppression of nematic order. To investigate this phenomenon, we study a minimal electronic model of FeSe, with interactions that enhance nematic fluctuations. This model is sign-problem free, and is simulated using determinant quantum Monte Carlo (DQMC). We developed a DQMC algorithm with parallel tempering, which proves to be an efficient source of global updates and allows us to access the region of strong interactions. Over a wide range of intermediate couplings, we observe superconductivity with an extended s-wave order parameter, along with enhanced, but short-ranged, q=(0,0) ferro-orbital (nematic) order. These results are consistent with approximate weak-coupling treatments that predict that nematic fluctuations lead to superconducting pairing. Surprisingly, in the parameter range under study, we do not observe nematic long-range order. Instead, at stronger coupling an unusual insulating phase with q=(π,π) antiferro-orbital order appears, which is missed by weak-coupling approximations.}, author = {Dumitrescu, Philipp T and Serbyn, Maksym and Scalettar, Richard T and Vishwanath, Ashvin K}, journal = {Physical Review B - Condensed Matter and Materials Physics}, number = {15}, publisher = {American Physical Society}, title = {{Superconductivity and nematic fluctuations in a model of doped FeSe monolayers: Determinant quantum Monte Carlo study}}, doi = {10.1103/PhysRevB.94.155127}, volume = {94}, year = {2016}, } @article{372, abstract = {The optimization of a material functionality requires both the rational design and precise engineering of its structural and chemical parameters. In this work, we show how colloidal chemistry is an excellent synthetic choice for the synthesis of novel ternary nanostructured chalcogenides, containing exclusively noble metals, with tailored morphology and composition and with potential application in the energy conversion field. Specifically, the Ag-Au-Se system has been explored from a synthetic point of view, which leads to a set of Ag2Se-based hybrid and ternary nanoparticles including the room temperature synthesis of the rare ternary Ag3AuSe2 fischesserite phase. An in-depth structural and chemical characterization of all nanomaterials has been performed, which proved especially useful for unravelling the reaction mechanism behind the formation of the ternary phase in solution. 
The work is complemented with the thermal and electrical characterization of a ternary Ag-Au-Se nanocomposite with promising results: we found that the use of the ternary nanocomposite represents a clear improvement in terms of thermoelectric energy conversion as compared to a binary Ag-Se nanocomposite analogue.}, author = {Dalmases, Mariona and Ibanez Sabate, Maria and Torruella, Paul and Fernàndez Altable, Victor and López Conesa, Luis and Cadavid, Doris and Piveteau, Laura and Nachtegaal, Maarten and Llorca, Jordi and Ruiz González, Maria and Estradé, Sònia and Peiró, Francesca and Kovalenko, Maksym and Cabot, Andreu and Figuerola, Albert}, journal = {Chemistry of Materials}, number = {19}, pages = {7017 -- 7028}, publisher = {American Chemical Society}, title = {{Synthesis and thermoelectric properties of noble metal ternary chalcogenide systems of Ag Au Se in the forms of alloyed nanoparticles and colloidal nanoheterostructures}}, doi = {10.1021/acs.chemmater.6b02845}, volume = {28}, year = {2016}, } @article{379, abstract = {Monodisperse Cu2ZnSnS4 (CZTS) nanocrystals (NCs), with quasi-spherical shape, were prepared by a facile, high-yield, scalable, and high-concentration heat-up procedure. The key parameters to minimize the NC size distribution were efficient mixing and heat transfer in the reaction mixture through intensive argon bubbling and improved control of the heating ramp stability. Optimized synthetic conditions allowed the production of several grams of highly monodisperse CZTS NCs per batch, with up to 5 wt % concentration in a crude solution and a yield above 90%.}, author = {Shavel, Alexey and Ibáñez, Maria and Luo, Zhishan and De Roo, Jonathan and Carrete, Alex and Dimitrievska, Mirjana and Genç, Aziz and Meyns, Michaela and Pérez Rodríguez, Alejandro and Kovalenko, Maksym and Arbol, Jordi and Cabot, Andreu}, journal = {Chemistry of Materials}, number = {3}, pages = {720 -- 726}, publisher = {American Chemical Society}, title = {{Scalable heating-up synthesis of monodisperse Cu2ZnSnS4 nanocrystals}}, doi = {10.1021/acs.chemmater.5b03417}, volume = {28}, year = {2016}, } @article{380, abstract = {Size and shape tunability and low-cost solution processability make colloidal lead chalcogenide quantum dots (QDs) an emerging class of building blocks for innovative photovoltaic, thermoelectric and optoelectronic devices. Lead chalcogenide QDs are known to crystallize in the rock-salt structure, although with very different atomic order and stoichiometry in the core and surface regions; however, there exists no convincing prior identification of how extreme downsizing and surface-induced ligand effects influence structural distortion. Using forefront X-ray scattering techniques and density functional theory calculations, here we have identified that, at sizes below 8 nm, PbS and PbSe QDs undergo a lattice distortion with displacement of the Pb sublattice, driven by ligand-induced tensile strain. The resulting permanent electric dipoles may have implications for the oriented attachment of these QDs. Evidence is found for a Pb-deficient core and, in the as-synthesized QDs, for a rhombic dodecahedral shape with nonpolar {110} facets. 
On varying the nature of the surface ligands, differences in lattice strains are found.}, author = {Bertolotti, Federica and Dirin, Dmitry and Ibanez Sabate, Maria and Krumreich, Frank and Cervellino, Antonio and Frison, Ruggero and Voznyy, Oleksandr and Sargent, Edward and Kovalenko, Maksym and Guagliardi, Antonietta and Masciocchi, Norberto}, journal = {Nature Materials}, pages = {987 -- 994}, publisher = {Nature Publishing Group}, title = {{Crystal symmetry breaking and role of vacancies in colloidal lead chalcogenide quantum dots}}, doi = {10.1038/NMAT4661}, volume = {15}, year = {2016}, } @article{381, abstract = {We present a high-yield and scalable colloidal synthesis to produce monodisperse AgSbSe2 nanocrystals (NCs). Using nuclear magnetic resonance (NMR) spectroscopy, we characterized the NC surface chemistry and demonstrate the presence of surfactants in dynamic exchange, which controls the NC growth mechanism. In addition, these NCs were electronically doped by introducing small amounts of bismuth. To demonstrate the technological potential of such processed material, after ligand removal by means of NaNH2, AgSbSe2 NCs were used as building blocks to produce thermoelectric (TE) nanomaterials. A preliminary optimization of the doping concentration resulted in a thermoelectric figure of merit (ZT) of 1.1 at 640 K, which is comparable to the best ZT values obtained with a Pb- and Te-free material in this middle temperature range, with the additional advantage of the high versatility and low cost associated with solution processing technologies.}, author = {Liu, Yu and Cadavid, Doris and Ibanez Sabate, Maria and De Roo, Jonathan and Ortega, Silvia and Dobrozhan, Oleksandr and Kovalenko, Maksym and Cabot, Andreu}, journal = {Journal of Materials Chemistry C}, pages = {4756 -- 4762}, publisher = {Royal Society of Chemistry}, title = {{Colloidal AgSbSe2 nanocrystals: surface analysis, electronic doping and processing into thermoelectric nanomaterials}}, doi = {10.1039/c6tc00893c}, volume = {4}, year = {2016}, } @article{382, abstract = {Mn3O4@CoMn2O4 nanoparticles (NPs) were produced at low temperature and ambient atmosphere using a one-pot two-step synthesis protocol involving the cation exchange of Mn by Co in preformed Mn3O4 NPs. By selecting the proper cobalt precursor, the nucleation of CoxOy crystallites at the Mn3O4@CoMn2O4 surface could be simultaneously promoted to form Mn3O4@CoMn2O4–CoxOy NPs. Such heterostructured NPs were investigated for oxygen reduction and evolution reactions (ORR, OER) in alkaline solution. Mn3O4@CoMn2O4–CoxOy NPs with [Co]/[Mn] = 1 showed low overpotentials of 0.31 V at −3 mA·cm−2 and a small Tafel slope of 52 mV·dec−1 for ORR, and overpotentials of 0.31 V at 10 mA·cm−2 and a Tafel slope of 81 mV·dec−1 for OER, thus outperforming commercial Pt- and IrO2-based catalysts as well as previously reported transition metal oxides.
This cation-exchange-based synthesis protocol opens up a new approach to design novel heterostructured NPs as efficient nonprecious metal bifunctional oxygen catalysts.}, author = {Luo, Zhishan and Irtem, Erdem and Ibanez, Maria and Nafria, Raquel and Martí Sánchez, Sara and Genç, Aziz and De La Mata, Maria and Liu, Yu and Cadavid, Doris and Llorca, Jordi and Arbiol, Jordi and Andreu, Teresa and Morante, Joan and Cabot, Andreu}, journal = {ACS Applied Materials and Interfaces}, pages = {17435 -- 17444}, publisher = {American Chemical Society}, title = {{Mn3O4@CoMn2O4–CoxOy nanoparticles: Partial cation exchange synthesis and electrocatalytic properties toward the oxygen reduction and evolution reactions}}, doi = {10.1021/acsami.6b02786}, volume = {8}, year = {2016}, } @article{383, abstract = {In the quest for more efficient thermoelectric materials able to convert thermal to electrical energy and vice versa, composites that combine a semiconductor host having a large Seebeck coefficient with metal nanodomains that provide phonon scattering and free charge carriers are particularly appealing. Here, we present our experimental results on the thermal and electrical transport properties of PbS-metal composites produced by a versatile particle blending procedure, where the metal work function allows injecting electrons into the intrinsic PbS host. We compare the thermoelectric performance of composites with microcrystalline or nanocrystalline structures. The electrical conductivity of the microcrystalline host can be increased several orders of magnitude with the metal inclusion, while a relatively high Seebeck coefficient can be simultaneously conserved. On the other hand, in nanostructured materials, the host crystallites are not able to sustain a band bending at their interface with the metal, becoming flooded with electrons. This translates into even higher electrical conductivities than in the microcrystalline material, but at the expense of lower Seebeck coefficient values.}, author = {Liu, Yu and Cadavid, Doris and Ibanez Sabate, Maria and Ortega, Silvia and Martí Sánchez, Sara and Dobrozhan, Oleksandr and Kovalenko, Maksym and Arbiol, Jordi and Cabot, Andreu}, journal = {Applied Physics Letters}, publisher = {American Institute of Physics}, title = {{Thermoelectric properties of semiconductor-metal composites produced by particle blending}}, doi = {10.1063/1.4961679}, volume = {4}, year = {2016}, } @article{389, abstract = {The coherent optical manipulation of solids is emerging as a promising way to engineer novel quantum states of matter. The strong time-periodic potential of intense laser light can be used to generate hybrid photon-electron states. Interaction of light with Bloch states leads to Floquet-Bloch states, which are essential in realizing new photo-induced quantum phases. Similarly, dressing of free-electron states near the surface of a solid generates Volkov states, which are used to study nonlinear optics in atoms and semiconductors. The interaction of these two dynamic states with each other remains an open experimental problem. Here we use time- and angle-resolved photoemission spectroscopy (Tr-ARPES) to selectively study the transition between these two states on the surface of the topological insulator Bi2Se3. We find that the coupling between the two strongly depends on the electron momentum, providing a route to enhance or inhibit it. Moreover, by controlling the light polarization we can negate Volkov states to generate pure Floquet-Bloch states.
This work establishes a systematic path for the coherent manipulation of solids via light-matter interaction.}, author = {Mahmood, Fahad and Chan, Ching and Alpichshev, Zhanybek and Gardner, Dillon and Lee, Young and Lee, Patrick and Gedik, Nuh}, journal = {Nature Physics}, number = {4}, pages = {306 -- 310}, publisher = {Nature Publishing Group}, title = {{Selective scattering between Floquet-Bloch and Volkov states in a topological insulator}}, doi = {10.1038/nphys3609}, volume = {12}, year = {2016}, } @article{390, abstract = {In the underdoped copper oxides, high-temperature superconductivity condenses from a nonconventional metallic "pseudogap" phase that exhibits a variety of non-Fermi liquid properties. Recently, it has become clear that a charge density wave (CDW) phase exists within the pseudogap regime. This CDW coexists and competes with superconductivity (SC) below the transition temperature Tc, suggesting that these two orders are intimately related. Here we show that the condensation of the superfluid from this unconventional precursor is reflected in deviations from the predictions of BCS theory regarding the recombination rate of quasiparticles. We report a detailed investigation of the quasiparticle (QP) recombination lifetime, τqp, as a function of temperature and magnetic field in underdoped HgBa2CuO4+δ (Hg-1201) and YBa2Cu3O6+x (YBCO) single crystals by ultrafast time-resolved reflectivity. We find that τqp(T) exhibits a local maximum in a small temperature window near Tc that is prominent in underdoped samples with coexisting charge order and vanishes with application of a small magnetic field. We explain this unusual, non-BCS behavior by positing that Tc marks a transition from phase-fluctuating SC/CDW composite order above to a SC/CDW condensate below. Our results suggest that the superfluid in underdoped cuprates is a condensate of coherently-mixed particle-particle and particle-hole pairs.}, author = {Hinton, James and Thewalt, E and Alpichshev, Zhanybek and Mahmood, Fahad and Koralek, Jake and Chan, Mun and Veit, Michael and Dorow, Chelsey and Barišić, Neven and Kemper, Alexander and Bonn, Doug and Hardy, Walter and Liang, Ruixing and Gedik, Nuh and Greven, Martin and Lanzara, Alessandra and Orenstein, Joseph}, journal = {Scientific Reports}, publisher = {Nature Publishing Group}, title = {{The rate of quasiparticle recombination probes the onset of coherence in cuprate superconductors}}, doi = {10.1038/srep23610}, volume = {6}, year = {2016}, } @article{363, abstract = {Lead halide perovskite materials have attracted significant attention in the context of photovoltaics and other optoelectronic applications, and recently, research efforts have been directed to nanostructured lead halide perovskites. Colloidal nanocrystals (NCs) of cesium lead halides (CsPbX3, X = Cl, Br, I) exhibit bright photoluminescence, with emission tunable over the entire visible spectral region. However, previous studies on CsPbX3 NCs did not address key aspects of their chemistry and photophysics such as surface chemistry and quantitative light absorption. Here, we elaborate on the synthesis of CsPbBr3 NCs and their surface chemistry. In addition, the intrinsic absorption coefficient was determined experimentally by combining elemental analysis with accurate optical absorption measurements.
1H solution nuclear magnetic resonance spectroscopy was used to characterize sample purity, elucidate the surface chemistry, and evaluate the influence of purification methods on the surface composition. We find that ligand binding to the NC surface is highly dynamic, and therefore, ligands are easily lost during the isolation and purification procedures. However, when a small amount of both oleic acid and oleylamine is added, the NCs can be purified, maintaining optical, colloidal, and material integrity. In addition, we find that a high amine content in the ligand shell increases the quantum yield due to the improved binding of the carboxylic acid.}, author = {De Roo, Jonathan and Ibáñez, Maria and Geiregat, Pieter and Nedelcu, Georgian and Walravens, Willem and Maes, Jorick and Martins, Jose and Van Driessche, Isabel and Kovalenko, Maksym and Hens, Zeger}, journal = {ACS Nano}, number = {2}, pages = {2071 -- 2081}, publisher = {American Chemical Society}, title = {{Highly dynamic ligand binding and light absorption coefficient of cesium lead bromide perovskite nanocrystals}}, doi = {10.1021/acsnano.5b06295}, volume = {10}, year = {2016}, } @article{364, abstract = {The development of highly active, low-cost and stable electrocatalysts for direct alcohol fuel cells remains a critical challenge. While Pd2Sn has been reported as an excellent catalyst for the ethanol oxidation reaction (EOR), here we present DFT analysis results showing the (100) and (001) facets of orthorhombic Pd2Sn to be more favourable for the EOR than (010). Accordingly, using tri-n-octylphosphine, oleylamine (OLA) and methylamine hydrochloride as size and shape directing agents, we produced colloidal Pd2Sn nanorods (NRs) grown in the [010] direction. Such Pd2Sn NRs, supported on graphitic carbon, showed excellent performance and stability as an anode electrocatalyst for the EOR in alkaline media, exhibiting 3 times and 10 times higher EOR current densities than those of Pd2Sn and Pd nanospheres, respectively. We associate this improved performance with the favourable faceting of the NRs.}, author = {Luo, Zhishan and Lu, Jianmin and Flox, Cristina and Nafria, Raquel and Genç, Aziz and Arbiol, Jordi and Llorca, Jordi and Ibanez Sabate, Maria and Morante, Joan and Cabot, Andreu}, journal = {Journal of Materials Chemistry A}, number = {42}, pages = {16706 -- 16713}, publisher = {Royal Society of Chemistry}, title = {{Pd2Sn [010] nanorods as a highly active and stable ethanol oxidation catalyst}}, doi = {10.1039/c6ta06430b}, volume = {4}, year = {2016}, } @article{366, abstract = {Cesium lead halide (CsPbX3, X = Cl, Br, I) nanocrystals (NCs) offer exceptional optical properties for several potential applications but their implementation is hindered by a low chemical and structural stability and limited processability. In the present work, we developed a new method to efficiently coat CsPbX3 NCs, which resulted in their increased chemical and optical stability as well as processability. The method is based on the incorporation of poly(maleic anhydride-alt-1-octadecene) (PMA) into the synthesis of the perovskite NCs. The presence of PMA in the ligand shell stabilizes the NCs by tightening the ligand binding, thereby limiting the NC surface interaction with the surrounding media.
We further show that these NCs can be embedded in self-standing silicone/glass plates as down-conversion filters for the fabrication of monochromatic green and white light emitting diodes (LEDs) with narrow bandwidths and appealing color characteristics.}, author = {Meyns, Michaela and Perálvarez, Mariano and Heuer Jungemann, Amelie and Hertog, Wim and Ibanez Sabate, Maria and Nafria, Raquel and Genç, Aziz and Arbiol, Jordi and Kovalenko, Maksym and Carreras, Josep and Cabot, Andreu and Kanaras, Antonios}, journal = {ACS Applied Materials and Interfaces}, number = {30}, pages = {19579 -- 19586}, publisher = {American Chemical Society}, title = {{Polymer enhanced stability of inorganic perovskite nanocrystals and their application in color conversion LEDs}}, doi = {10.1021/acsami.6b02529}, volume = {8}, year = {2016}, } @article{367, abstract = {The functional properties of quaternary I2–II–IV–VI4 nanomaterials, with potential interest in various technological fields, are highly sensitive to composition, which is a challenging parameter to adjust. Here we demonstrate that the presence of phosphonic acids aids in controlling the reactivity of the II element monomer to be incorporated into quaternary Cu2ZnSnSe4 nanoparticles, thus providing a more reliable way to adjust the final nanoparticle metal ratios. Furthermore, we demonstrate that composition control in such multivalent nanoparticles allows modifying the charge carrier concentration in nanomaterials produced from the assembly of these building blocks.}, author = {Ibáñez, Maria and Berestok, Taisiia and Dobrozhan, Oleksandr and Lalonde, Aaron and Izquierdo Roca, Victor and Shavel, Alexey and Pérez Rodríguez, Alejandro and Snyder, G Jeffrey and Cabot, Andreu}, journal = {Journal of Nanoparticle Research}, number = {8}, publisher = {Springer}, title = {{Phosphonic acids aid composition adjustment in the synthesis of Cu2+xZn1−xSnSe4−y nanoparticles}}, doi = {10.1007/s11051-016-3545-4}, volume = {18}, year = {2016}, } @article{368, abstract = {The control of the phase distribution in multicomponent nanomaterials is critical to optimize their catalytic performance. In this direction, while impressive advances have been achieved in the past decade in the synthesis of multicomponent nanoparticles and nanocomposites, element rearrangement during catalyst activation has frequently been overlooked. Here, we present a facile galvanic replacement-based procedure to synthesize Co@Cu nanoparticles with narrow size and composition distributions. We further characterize their phase arrangement before and after catalytic activation. When oxidized at 350 °C in air to remove organics, Co@Cu core-shell nanostructures oxidize to polycrystalline CuO-Co3O4 nanoparticles with randomly distributed CuO and Co3O4 crystallites. During a subsequent reduction treatment in H2 atmosphere, Cu precipitates into a metallic core and Co migrates to the nanoparticle surface to form Cu@Co core-shell nanostructures.
The catalytic behavior of such Cu@Co nanoparticles supported on mesoporous silica was further analyzed for CO2 hydrogenation under real working conditions.}, author = {Nafria, Raquel and Genç, Aziz and Ibáñez, Maria and Arbiol, Jordi and Ramírez De La Piscina, Pilar and Homs, Narcís and Cabot, Andreu}, journal = {Langmuir}, number = {9}, pages = {2267 -- 2276}, publisher = {American Chemical Society}, title = {{Co-Cu nanoparticles: synthesis by galvanic replacement and phase rearrangement during catalytic activation}}, doi = {10.1021/acs.langmuir.5b04622}, volume = {32}, year = {2016}, } @article{369, abstract = {The efficient conversion between thermal and electrical energy by means of durable, silent and scalable solid-state thermoelectric devices has been a long-standing goal. While nanocrystalline materials have already led to substantially higher thermoelectric efficiencies, further improvements are expected to arise from precise chemical engineering of nanoscale building blocks and interfaces. Here we present a simple and versatile bottom-up strategy based on the assembly of colloidal nanocrystals to produce consolidated yet nanostructured thermoelectric materials. In the case study on the PbS-Ag system, Ag nanodomains not only contribute to block phonon propagation, but also provide electrons to the PbS host semiconductor and reduce the PbS intergrain energy barriers for charge transport. Thus, PbS-Ag nanocomposites exhibit reduced thermal conductivities and higher charge carrier concentrations and mobilities than the PbS nanomaterial. Such improvements of the material transport properties provide thermoelectric figures of merit up to 1.7 at 850 K.}, author = {Ibanez Sabate, Maria and Luo, Zhishan and Genç, Aziz and Piveteau, Laura and Ortega, Silvia and Cadavid, Doris and Dobrozhan, Oleksandr and Liu, Yu and Nachtegaal, Maarten and Zebarjadi, Mona and Arbiol, Jordi and Kovalenko, Maksym and Cabot, Andreu}, journal = {Nature Communications}, publisher = {Nature Publishing Group}, title = {{High performance thermoelectric nanocomposites from nanocrystal building blocks}}, doi = {10.1038/ncomms10766}, volume = {7}, year = {2016}, } @article{370, abstract = {Copper-based chalcogenides that comprise abundant, low-cost, and environmentally friendly elements are excellent materials for a number of energy conversion applications, including photovoltaics, photocatalysis, and thermoelectrics (TE). In such applications, the use of solution-processed nanocrystals (NCs) to produce thin films or bulk nanomaterials offers several potential advantages, such as high material yield and throughput, and composition control with unmatched spatial resolution and cost. Here we report on the production of Cu3SbSe4 (CASe) NCs with tuned amounts of Sn and Bi dopants. After proper ligand removal, as monitored by nuclear magnetic resonance and infrared spectroscopy, these NCs were used to produce dense CASe bulk nanomaterials for solid-state TE energy conversion. By adjusting the amount of extrinsic dopants, dimensionless TE figures of merit (ZT) up to 1.26 at 673 K were reached. Such high ZT values are related to an optimized carrier concentration by Sn doping, a minimized lattice thermal conductivity due to efficient phonon scattering at point defects and grain boundaries, and to an increase of the Seebeck coefficient obtained by a modification of the electronic band structure with Bi doping.
Nanomaterials were further employed to fabricate ring-shaped TE generators to be coupled to hot pipes, which provided 20 mV and 1 mW per TE element when exposed to a 160 °C temperature gradient. The simple design and good thermal contact associated with the ring geometry and the potential low cost of the material solution processing may allow the fabrication of TE generators with short payback times.}, author = {Liu, Yu and García, Gregorio and Ortega, Silvia and Cadavid, Doris and Palacios, Pablo and Lu, Jinyu and Ibanez, Maria and Xi, Lili and De Roo, Jonathan and López, Antonio and Martí Sánchez, Sara and Cabezas, Ignasi and De La Mata, Maria and Luo, Zhishan and Dun, Chaochao and Dobrozhan, Oleksandr and Carroll, David and Zhang, Wenqing and Martins, José and Kovalenko, Maksym and Arbiol, Jordi and Noriega, German and Song, Jiming and Wahnón, Perla and Cabot, Andreu}, journal = {Journal of Materials Chemistry A}, number = {6}, pages = {2592 -- 2602}, publisher = {Royal Society of Chemistry}, title = {{Solution-based synthesis and processing of Sn- and Bi-doped Cu3SbSe4 nanocrystals, nanomaterials and ring-shaped thermoelectric generators}}, doi = {10.1039/C6TA08467B}, volume = {5}, year = {2016}, } @article{371, abstract = {The design and engineering of earth-abundant catalysts that are both cost-effective and highly active for water splitting are crucial challenges in a number of energy conversion and storage technologies. In this direction, herein we report the synthesis of Fe3O4@NiFexOy core-shell nanoheterostructures and the characterization of their electrocatalytic performance toward the oxygen evolution reaction (OER). Such nanoparticles (NPs) were produced by a two-step synthesis procedure involving the colloidal synthesis of Fe3O4 nanocubes with a defective shell and the subsequent diffusion of nickel cations within this defective shell. Fe3O4@NiFexOy NPs were subsequently spin-coated over ITO-covered glass and their electrocatalytic activity toward water oxidation in carbonate electrolyte was characterized. Fe3O4@NiFexOy catalysts reached current densities above 1 mA/cm^2 with a 410 mV overpotential and Tafel slopes of 48 mV/dec, which is among the best electrocatalytic performances reported in carbonate electrolyte.}, author = {Luo, Zhishan and Martí Sánchez, Sara and Nafria, Raquel and Joshua, Gihan and De La Mata, Maria and Guardia, Pablo and Flox, Cristina and Martínez Boubeta, Carlos and Simeonidis, Konstantinos and Llorca, Jordi and Morante, Joan and Arbiol, Jordi and Ibanez Sabate, Maria and Cabot, Andreu}, journal = {ACS Applied Materials and Interfaces}, number = {43}, pages = {29461 -- 29469}, publisher = {American Chemical Society}, title = {{Fe3O4@NiFexOy nanoparticles with enhanced electrocatalytic properties for oxygen evolution in carbonate electrolyte}}, doi = {10.1021/acsami.6b09888}, volume = {8}, year = {2016}, } @phdthesis{1130, abstract = {In this thesis we present a computer-aided programming approach to concurrency. Our approach helps the programmer by automatically fixing concurrency-related bugs, i.e. bugs that occur when the program is executed using an aggressive preemptive scheduler, but not when using a non-preemptive (cooperative) scheduler. Bugs are program behaviours that are incorrect w.r.t. a specification. We consider both user-provided explicit specifications in the form of assertion statements in the code as well as an implicit specification. The implicit specification is inferred from the non-preemptive behaviour.
Let us consider sequences of calls that the program makes to an external interface. The implicit specification requires that any such sequence produced under a preemptive scheduler should be included in the set of sequences produced under a non-preemptive scheduler. We consider several semantics-preserving fixes that go beyond the atomic sections typically explored in the synchronisation synthesis literature. Our synthesis is able to place locks, barriers and wait-signal statements and, last but not least, reorder independent statements. The latter may be useful if a thread is released too early, e.g., before some initialisation is completed. We guarantee that our synthesis does not introduce deadlocks and that the synchronisation inserted is optimal w.r.t. a given objective function. We dub our solution trace-based synchronisation synthesis; it is loosely based on counterexample-guided inductive synthesis (CEGIS). The synthesis works by discovering a trace that is incorrect w.r.t. the specification and identifying ordering constraints crucial to trigger the specification violation. Synchronisation may be placed immediately (greedy approach) or delayed until all incorrect traces are found (non-greedy approach). For the non-greedy approach we construct a set of global constraints over synchronisation placements. Each model of the global constraint set corresponds to a correctness-ensuring synchronisation placement. The placement that is optimal w.r.t. the given objective function is chosen as the synchronisation solution. We evaluate our approach on a number of realistic (albeit simplified) Linux device-driver benchmarks. The benchmarks are versions of the drivers with known concurrency-related bugs. For the experiments with an explicit specification we added assertions that detect the bugs. Device drivers lend themselves to implicit specification, where the device and the operating system are the external interfaces. Our experiments demonstrate that our synthesis method is precise and efficient. We implemented objective functions for coarse-grained and fine-grained locking and observed that different synchronisation placements are produced for our experiments, favouring e.g. a minimal number of synchronisation operations or maximum concurrency.}, author = {Tarrach, Thorsten}, pages = {151}, publisher = {IST Austria}, title = {{Automatic synthesis of synchronisation primitives for concurrent programs}}, year = {2016}, } @article{1432, abstract = {CA3–CA3 recurrent excitatory synapses are thought to play a key role in memory storage and pattern completion. Whether the plasticity properties of these synapses are consistent with their proposed network functions remains unclear. Here, we examine the properties of spike timing-dependent plasticity (STDP) at CA3–CA3 synapses. Low-frequency pairing of excitatory postsynaptic potentials (EPSPs) and action potentials (APs) induces long-term potentiation (LTP), independent of temporal order. The STDP curve is symmetric and broad (half-width ~150 ms). Consistent with these STDP induction properties, AP–EPSP sequences lead to supralinear summation of spine [Ca2+] transients. Furthermore, afterdepolarizations (ADPs) following APs efficiently propagate into dendrites of CA3 pyramidal neurons, and EPSPs summate with dendritic ADPs. In autoassociative network models, storage and recall are more robust with symmetric than with asymmetric STDP rules.
Thus, a specialized STDP induction rule allows reliable storage and recall of information in the hippocampal CA3 network.}, author = {Mishra, Rajiv Kumar and Kim, Sooyun and Guzmán, José and Jonas, Peter M}, journal = {Nature Communications}, publisher = {Nature Publishing Group}, title = {{Symmetric spike timing-dependent plasticity at CA3–CA3 synapses optimizes storage and recall in autoassociative networks}}, doi = {10.1038/ncomms11552}, volume = {7}, year = {2016}, } @phdthesis{1396, abstract = {CA3 pyramidal neurons are thought to play a key role in memory storage and pattern completion by activity-dependent synaptic plasticity between CA3-CA3 recurrent excitatory synapses. To examine the induction rules of synaptic plasticity at CA3-CA3 synapses, we performed whole-cell patch-clamp recordings in acute hippocampal slices from rats (postnatal 21-24 days) at room temperature. Compound excitatory postsynaptic potentials (EPSPs) were recorded by tract stimulation in stratum oriens in the presence of 10 µM gabazine. High-frequency stimulation (HFS) induced N-methyl-D-aspartate (NMDA) receptor-dependent long-term potentiation (LTP). Although LTP by HFS did not require postsynaptic spikes, it was blocked by Na+-channel blockers, suggesting that local active processes (e.g., dendritic spikes) may contribute to LTP induction without requirement of a somatic action potential (AP). We next examined the properties of spike timing-dependent plasticity (STDP) at CA3-CA3 synapses. Unexpectedly, low-frequency pairing of EPSPs and backpropagated action potentials (bAPs) induced LTP, independent of temporal order. The STDP curve was symmetric and broad, with a half-width of ~150 ms. Consistent with these specific STDP induction properties, post-presynaptic sequences led to a supralinear summation of spine [Ca2+] transients. Furthermore, in autoassociative network models, storage and recall were substantially more robust with symmetric than with asymmetric STDP rules. In conclusion, we found associative forms of LTP at CA3-CA3 recurrent collateral synapses with distinct induction rules. LTP induced by HFS may be associated with dendritic spikes. In contrast, low-frequency pairing of pre- and postsynaptic activity induced LTP only if EPSP-AP pairings were temporally very close. Together, these induction mechanisms of synaptic plasticity may contribute to memory storage in the CA3-CA3 microcircuit at different ranges of activity.}, author = {Mishra, Rajiv Kumar}, pages = {83}, publisher = {IST Austria}, title = {{Synaptic plasticity rules at CA3-CA3 recurrent synapses in hippocampus}}, year = {2016}, } @phdthesis{1129, abstract = {Directed cell migration is a hallmark feature, present in almost all multi-cellular organisms. Despite its importance, basic questions regarding force transduction or directional sensing are still heavily investigated. Directed migration of cells guided by immobilized guidance cues - haptotaxis - occurs in key processes, such as embryonic development and immunity (Middleton et al., 1997; Nguyen et al., 2000; Thiery, 1984; Weber et al., 2013). Immobilized guidance cues comprise adhesive ligands, such as collagen and fibronectin (Barczyk et al., 2009), or chemokines - the main guidance cues for migratory leukocytes (Middleton et al., 1997; Weber et al., 2013).
While adhesive ligands serve as attachment sites guiding cell migration (Carter, 1965), chemokines instruct haptotactic migration by inducing adhesion to adhesive ligands and directional guidance (Rot and Andrian, 2004; Schumann et al., 2010). Quantitative analysis of the cellular response to immobilized guidance cues requires in vitro assays that foster cell migration, offer accurate control of the immobilized cues on a subcellular scale and, in the ideal case, closely reproduce in vivo conditions. The exploration of haptotactic cell migration through the design and employment of such assays represents the main focus of this work. Dendritic cells (DCs) are leukocytes which, after encountering danger signals such as pathogens in peripheral organs, instruct naïve T-cells and consequently the adaptive immune response in the lymph node (Mellman and Steinman, 2001). To reach the lymph node from the periphery, DCs follow haptotactic gradients of the chemokine CCL21 towards lymphatic vessels (Weber et al., 2013). Questions about how DCs interpret haptotactic CCL21 gradients have not yet been addressed. The main reason for this is the lack of an assay that offers diverse haptotactic environments and hence allows the study of DC migration in response to different signals of immobilized guidance cues. In this work, we developed an in vitro assay that enables us to quantitatively assess DC haptotaxis, by combining precisely controllable chemokine photo-patterning with physically confining migration conditions. With this tool at hand, we studied the influence of CCL21 gradient properties and concentration on DC haptotaxis. We found that haptotactic gradient sensing depends on the absolute CCL21 concentration in combination with the local steepness of the gradient. Our analysis suggests that the directionality of migrating DCs is governed by the signal-to-noise ratio of CCL21 binding to its receptor CCR7. Moreover, the haptotactic CCL21 gradient formed in vivo provides an optimal shape for DCs to recognize the haptotactic guidance cue. By reconstitution of the CCL21 gradient in vitro we were also able to study the influence of CCR7 signal termination on DC haptotaxis. To this end, we used DCs lacking the G-protein coupled receptor kinase GRK6, which is responsible for CCL21-induced CCR7 receptor phosphorylation and desensitization (Zidar et al., 2009). We found that CCR7 desensitization by GRK6 is crucial for maintenance of haptotactic CCL21 gradient sensing in vitro and confirmed these observations in vivo. In the context of the organism, immobilized haptotactic guidance cues often coincide and compete with soluble chemotactic guidance cues. During wound healing, fibroblasts are exposed to and influenced by adhesive cues and soluble factors at the same time (Wu et al., 2012; Wynn, 2008). Similarly, migrating DCs are exposed to both soluble chemokines (CCL19 and truncated CCL21), which induce chemotactic behavior, and immobilized CCL21. To quantitatively assess these complex coinciding immobilized and soluble guidance cues, we implemented our chemokine photo-patterning technique in a microfluidic system allowing for chemotactic gradient generation. To validate the assay, we observed DC migration in competing CCL19/CCL21 environments. Adhesiveness-guided haptotaxis has been studied intensively over the last century. However, quantitative studies leading to conceptual models are largely missing, again due to the lack of a precisely controllable in vitro assay.
A requirement for such an in vitro assay is that it must prevent any uncontrolled cell adhesion. This can be accomplished by stable passivation of the surface. In addition, controlled adhesion must be sustainable, quantifiable and dose-dependent in order to create homogeneous gradients. Therefore, we developed a novel covalent photo-patterning technique satisfying all these needs. In combination with a sustainable poly-vinyl alcohol (PVA) surface coating we were able to generate gradients of adhesive cues to direct cell migration. This approach allowed us to characterize the haptotactic migratory behavior of zebrafish keratocytes in vitro. Furthermore, defined patterns of adhesive cues allowed us to control cell shape and growth on a subcellular scale.}, author = {Schwarz, Jan}, pages = {178}, publisher = {IST Austria}, title = {{Quantitative analysis of haptotactic cell migration}}, year = {2016}, } @phdthesis{1123, abstract = {Motivated by topological Tverberg-type problems in topological combinatorics and by classical results about embeddings (maps without double points), we study the question whether a finite simplicial complex K can be mapped into R^d without triple, quadruple, or, more generally, r-fold points (image points with at least r distinct preimages), for a given multiplicity r ≥ 2. In particular, we are interested in maps f : K → R^d that have no global r-fold intersection points, i.e., no r-fold points with preimages in r pairwise disjoint simplices of K, and we seek necessary and sufficient conditions for the existence of such maps. We present higher-multiplicity analogues of several classical results for embeddings, in particular of the completeness of the Van Kampen obstruction for embeddability of k-dimensional complexes into R^{2k}, k ≥ 3. Specifically, we show that under suitable restrictions on the dimensions (viz., if dim K = (r − 1)k and d = rk for some k ≥ 3), a well-known deleted product criterion (DPC) is not only necessary but also sufficient for the existence of maps without global r-fold points. Our main technical tool is a higher-multiplicity version of the classical Whitney trick, by which pairs of isolated r-fold points of opposite sign can be eliminated by local modifications of the map, assuming codimension d − dim K ≥ 3. An important guiding idea for our work was that sufficiency of the DPC, together with an old result of Özaydin's on the existence of equivariant maps, might yield an approach to disproving the remaining open cases of the long-standing topological Tverberg conjecture, i.e., to construct maps from the N-simplex σ_N to R^d without r-Tverberg points when r is not a prime power and N = (d + 1)(r − 1). Unfortunately, our proof of the sufficiency of the DPC requires codimension d − dim K ≥ 3, which is not satisfied for K = σ_N. In 2015, Frick [16] found a very elegant way to overcome this "codimension 3 obstacle" and to construct the first counterexamples to the topological Tverberg conjecture for all parameters (d, r) with d ≥ 3r + 1 and r not a prime power, by a reduction to a suitable lower-dimensional skeleton, for which the codimension 3 restriction is satisfied and maps without r-Tverberg points exist by Özaydin's result and sufficiency of the DPC. In this thesis, we present a different construction (which does not use the constraint method) that yields counterexamples for d ≥ 3r, r not a prime power.
}, author = {Mabillard, Isaac}, pages = {55}, publisher = {IST Austria}, title = {{Eliminating higher-multiplicity intersections: an r-fold Whitney trick for the topological Tverberg conjecture}}, year = {2016}, } @phdthesis{1121, abstract = {Horizontal gene transfer (HGT), the lateral acquisition of genes across existing species boundaries, is a major evolutionary force shaping microbial genomes that facilitates adaptation to new environments as well as resistance to antimicrobial drugs. As such, understanding the mechanisms and constraints that determine the outcomes of HGT events is crucial to understand the dynamics of HGT and to design better strategies to overcome the challenges that originate from it. Following the insertion and expression of a newly transferred gene, the success of an HGT event will depend on the fitness effect it has on the recipient (host) cell. Therefore, predicting the impact of HGT on the genetic composition of a population critically depends on the distribution of fitness effects (DFE) of horizontally transferred genes. However, to date, we have little knowledge of the DFE of newly transferred genes, and hence little is known about the shape and scale of this distribution. It is particularly important to better understand the selective barriers that determine the fitness effects of newly transferred genes. In spite of substantial bioinformatics efforts to identify horizontally transferred genes and selective barriers, a systematic experimental approach to elucidate the roles of different selective barriers in defining the fate of a transfer event has largely been absent. Similarly, although the fact that the environment might alter the fitness effect of a horizontally transferred gene may seem obvious, little attention has been given to it in a systematic experimental manner. In this study, we developed a systematic experimental approach that consists of transferring 44 arbitrarily selected Salmonella typhimurium orthologous genes into an Escherichia coli host, and estimating the fitness effects of these transferred genes at a constant expression level by performing competition assays against the wild type. In chapter 2, we performed one-to-one competition assays between a mutant strain carrying a transferred gene and the wild type strain. By using flow cytometry we estimated selection coefficients for the transferred genes with a precision level of 10^-3, and obtained the DFE of horizontally transferred genes. We then investigated if these fitness effects could be predicted by any of the intrinsic properties of the genes, namely, functional category, degree of complexity (protein-protein interactions), GC content, codon usage and length. Our analyses revealed that the functional category and length of the genes act as potential selective barriers. Finally, using the same procedure with the endogenous E. coli orthologs of these 44 genes, we demonstrated that gene dosage is the most prominent selective barrier to HGT. In chapter 3, using the same set of genes we investigated the role of the environment in the success of HGT events. Under six different environments with different levels of stress we performed more complex competition assays, where we mixed all 44 mutant strains carrying transferred genes with the wild type strain. To estimate the fitness effects of genes relative to wild type we used next generation sequencing. We found that the DFEs of horizontally transferred genes are highly dependent on the environment, with abundant gene-by-environment interactions.
Furthermore, we demonstrated a relationship between the average fitness effect of a gene across all environments and its environmental variance, and thus its predictability. Finally, in spite of the fitness effects of genes being highly environment-dependent, we still observed a common shape of DFEs across all tested environments.}, author = {Acar, Hande}, pages = {75}, publisher = {IST Austria}, title = {{Selective barriers to horizontal gene transfer}}, year = {2016}, } @phdthesis{1131, abstract = {Evolution of gene regulation is important for phenotypic evolution and diversity. Sequence-specific binding of regulatory proteins is one of the key regulatory mechanisms determining gene expression. Although there has been intense interest in the evolution of regulatory binding sites in the last decades, a theoretical understanding is far from complete. In this thesis, I aim at a better understanding of the evolution of transcriptional regulatory binding sequences by using biophysical and population genetic models. In the first part of the thesis, I discuss how to formulate the evolutionary dynamics of binding sequences in a single isolated binding site and in promoter/enhancer regions. I develop a theoretical framework bridging between a thermodynamical model for transcription and a mutation-selection-drift model for monomorphic populations. I mainly address the typical evolutionary rates, and how they depend on biophysical parameters (e.g. binding length and specificity) and population genetic parameters (e.g. population size and selection strength). In the second part of the thesis, I analyse empirical data for a better evolutionary and biophysical understanding of sequence-specific binding of bacterial RNA polymerase. First, I infer selection on regulatory and non-regulatory binding sites of RNA polymerase in the E. coli K12 genome. Second, I infer the chemical potential of RNA polymerase, an important but unknown physical parameter defining the threshold energy for strong binding. Furthermore, I try to understand the relation between the lac promoter sequence diversity and the LacZ activity variation among 20 bacterial isolates by constructing a simple but biophysically motivated gene expression model. Lastly, I lay out a statistical framework to predict adaptive point mutations in de novo promoter evolution in a selection experiment.}, author = {Tugrul, Murat}, pages = {89}, publisher = {IST Austria}, title = {{Evolution of transcriptional regulatory sequences}}, year = {2016}, } @phdthesis{1125, abstract = {Natural environments are never constant but subject to spatial and temporal change on all scales, increasingly so due to human activity. Hence, it is crucial to understand the impact of environmental variation on evolutionary processes. In this thesis, I present three topics that share the common theme of environmental variation, yet illustrate its effect from different perspectives. First, I show how a temporally fluctuating environment gives rise to second-order selection on a modifier for stress-induced mutagenesis. Without fluctuations, when populations are adapted to their environment, mutation rates are minimized. I argue that a stress-induced mutator mechanism may only be maintained if the population is repeatedly subjected to diverse environmental challenges, and I outline implications of the presented results for antibiotic treatment strategies. Second, I discuss my work on the evolution of dispersal.
Besides reproducing known results about the effect of heterogeneous habitats on dispersal, it identifies spatial changes in dispersal type frequencies as a source of selection for increased propensities to disperse. This concept contains the effects of relatedness that are known to promote dispersal, and I explain how it identifies other forces selecting for dispersal and puts them on a common scale. Third, I analyse genetic variances of phenotypic traits under multivariate stabilizing selection. For the case of constant environments, I generalize known formulae of equilibrium variances to multiple traits and discuss how the genetic variance of a focal trait is influenced by selection on background traits. I conclude by presenting ideas and preliminary work aiming at including environmental fluctuations, in the form of moving trait optima, into the model.}, author = {Novak, Sebastian}, pages = {124}, publisher = {IST Austria}, title = {{Evolutionary processes in variable environments}}, year = {2016}, } @misc{5554, abstract = {The data stored here is used in Murat Tugrul's PhD thesis (Chapter 3), which is related to the evolution of bacterial RNA polymerase binding. Magdalena Steinrueck (PhD student in Calin Guet's group at IST Austria) performed the experiments and created the data on de novo promoter evolution. Fabienne Jesse (PhD student in Jon Bollback's group at IST Austria) performed the experiments and created the data on lac promoter evolution.}, author = {Tugrul, Murat}, keywords = {RNAP binding, de novo promoter evolution, lac promoter}, publisher = {IST Austria}, title = {{Experimental Data for Binding Site Evolution of Bacterial RNA Polymerase}}, doi = {10.15479/AT:ISTA:43}, year = {2016}, } @phdthesis{1124, author = {Morri, Maurizio}, pages = {129}, publisher = {IST Austria}, title = {{Optical functionalization of human class A orphan G-protein coupled receptors}}, year = {2016}, } @article{1321, abstract = {Most migrating cells extrude their front by the force of actin polymerization. Polymerization requires an initial nucleation step, which is mediated by factors establishing either parallel filaments in the case of filopodia or branched filaments that form the branched lamellipodial network. Branches are considered essential for regular cell motility and are initiated by the Arp2/3 complex, which in turn is activated by nucleation-promoting factors of the WASP and WAVE families. Here we employed rapid amoeboid crawling leukocytes and found that deletion of the WAVE complex eliminated actin branching and thus lamellipodia formation. The cells were left with parallel filaments at the leading edge, which translated, depending on the differentiation status of the cell, into a unipolar pointed cell shape or cells with multiple filopodia. Remarkably, unipolar cells migrated with increased speed and enormous directional persistence, while they were unable to turn towards chemotactic gradients. Cells with multiple filopodia retained chemotactic activity but their migration was progressively impaired with increasing geometrical complexity of the extracellular environment.
These findings establish that diversified leading edge protrusions serve as explorative structures while they slow down actual locomotion.}, author = {Leithner, Alexander F and Eichner, Alexander and Müller, Jan and Reversat, Anne and Brown, Markus and Schwarz, Jan and Merrin, Jack and De Gorter, David and Schur, Florian and Bayerl, Jonathan and De Vries, Ingrid and Wieser, Stefan and Hauschild, Robert and Lai, Frank and Moser, Markus and Kerjaschki, Dontscho and Rottner, Klemens and Small, Victor and Stradal, Theresia and Sixt, Michael K}, journal = {Nature Cell Biology}, pages = {1253 -- 1259}, publisher = {Nature Publishing Group}, title = {{Diversified actin protrusions promote environmental exploration but are dispensable for locomotion of leukocytes}}, doi = {10.1038/ncb3426}, volume = {18}, year = {2016}, } @article{1183, abstract = {Autism spectrum disorders (ASD) are a group of genetic disorders often overlapping with other neurological conditions. We previously described abnormalities in the branched-chain amino acid (BCAA) catabolic pathway as a cause of ASD. Here, we show that the solute carrier transporter 7a5 (SLC7A5), a large neutral amino acid transporter localized at the blood brain barrier (BBB), has an essential role in maintaining normal levels of brain BCAAs. In mice, deletion of Slc7a5 from the endothelial cells of the BBB leads to an atypical brain amino acid profile, abnormal mRNA translation, and severe neurological abnormalities. Furthermore, we identified several patients with autistic traits and motor delay carrying deleterious homozygous mutations in the SLC7A5 gene. Finally, we demonstrate that BCAA intracerebroventricular administration ameliorates abnormal behaviors in adult mutant mice. Our data elucidate a neurological syndrome defined by SLC7A5 mutations and support an essential role for BCAAs in human brain function.}, author = {Tarlungeanu, Dora-Clara and Deliu, Elena and Dotter, Christoph and Kara, Majdi and Janiesch, Philipp and Scalise, Mariafrancesca and Galluccio, Michele and Tesulov, Mateja and Morelli, Emanuela and Sönmez, Fatma and Bilgüvar, Kaya and Ohgaki, Ryuichi and Kanai, Yoshikatsu and Johansen, Anide and Esharif, Seham and Ben Omran, Tawfeg and Topcu, Meral and Schlessinger, Avner and Indiveri, Cesare and Duncan, Kent and Caglayan, Ahmet and Günel, Murat and Gleeson, Joseph and Novarino, Gaia}, journal = {Cell}, number = {6}, pages = {1481 -- 1494}, publisher = {Cell Press}, title = {{Impaired amino acid transport at the blood brain barrier is a cause of autism spectrum disorder}}, doi = {10.1016/j.cell.2016.11.013}, volume = {167}, year = {2016}, } @inproceedings{1437, abstract = {We study algorithmic questions for concurrent systems where the transitions are labeled from a complete, closed semiring, and path properties are algebraic with semiring operations. The algebraic path properties can model dataflow analysis problems, the shortest path problem, and many other natural problems that arise in program analysis. We consider that each component of the concurrent system is a graph with constant treewidth, a property satisfied by the control-flow graphs of most programs. We allow for multiple possible queries, which arise naturally in demand-driven dataflow analysis. The study of multiple queries allows us to consider the tradeoff between the resource usage of the one-time preprocessing and that of each individual query.
The traditional approach constructs the product graph of all components and applies the best-known graph algorithm on the product. In this approach, even the answer to a single query requires the transitive closure (i.e., the results of all possible queries), which provides no room for tradeoff between preprocessing and query time. Our main contributions are algorithms that significantly improve the worst-case running time of the traditional approach, and provide various tradeoffs depending on the number of queries. For example, in a concurrent system of two components, the traditional approach requires hexic time in the worst case for answering one query as well as computing the transitive closure, whereas we show that with one-time preprocessing in almost cubic time, each subsequent query can be answered in at most linear time, and even the transitive closure can be computed in almost quartic time. Furthermore, we establish conditional optimality results showing that the worst-case running time of our algorithms cannot be improved without achieving major breakthroughs in graph algorithms (i.e., improving the worst-case bound for the shortest path problem in general graphs). Preliminary experimental results show that our algorithms perform favorably on several benchmarks.}, author = {Chatterjee, Krishnendu and Goharshady, Amir and Ibsen-Jensen, Rasmus and Pavlogiannis, Andreas}, location = {St. Petersburg, FL, USA}, pages = {733 -- 747}, publisher = {ACM}, title = {{Algorithms for algebraic path properties in concurrent systems of constant treewidth components}}, doi = {10.1145/2837614.2837624}, volume = {20-22}, year = {2016}, } @inproceedings{1386, abstract = {We consider nondeterministic probabilistic programs with the most basic liveness property of termination. We present efficient methods for termination analysis of nondeterministic probabilistic programs with polynomial guards and assignments. Our approach is through synthesis of polynomial ranking supermartingales, which on the one hand significantly generalize linear ranking supermartingales and on the other hand are a counterpart of polynomial ranking functions for proving termination of nonprobabilistic programs. The approach synthesizes polynomial ranking supermartingales through Positivstellensatz's, yielding an efficient method which is not only sound, but also semi-complete over a large subclass of programs. We show experimental results to demonstrate that our approach can handle several classical programs with complex polynomial guards and assignments, and can synthesize efficient quadratic ranking supermartingales when a linear one does not exist even for simple affine programs.}, author = {Chatterjee, Krishnendu and Fu, Hongfei and Goharshady, Amir}, location = {Toronto, Canada}, pages = {3 -- 22}, publisher = {Springer}, title = {{Termination analysis of probabilistic programs through Positivstellensatz's}}, doi = {10.1007/978-3-319-41528-4_1}, volume = {9779}, year = {2016}, } @article{1100, abstract = {During metazoan development, the temporal pattern of morphogen signaling is critical for organizing cell fates in space and time. Yet, tools for temporally controlling morphogen signaling within the embryo are still scarce. Here, we developed a photoactivatable Nodal receptor to determine how the temporal pattern of Nodal signaling affects cell fate specification during zebrafish gastrulation.
By using this receptor to manipulate the duration of Nodal signaling in vivo by light, we show that extended Nodal signaling within the organizer promotes prechordal plate specification and suppresses endoderm differentiation. Endoderm differentiation is suppressed by extended Nodal signaling inducing expression of the transcriptional repressor goosecoid (gsc) in prechordal plate progenitors, which in turn restrains Nodal signaling from upregulating the endoderm differentiation gene sox17 within these cells. Thus, optogenetic manipulation of Nodal signaling identifies a critical role of Nodal signaling duration for organizer cell fate specification during gastrulation.}, author = {Sako, Keisuke and Pradhan, Saurabh and Barone, Vanessa and Inglés Prieto, Álvaro and Mueller, Patrick and Ruprecht, Verena and Capek, Daniel and Galande, Sanjeev and Janovjak, Harald L and Heisenberg, Carl-Philipp J}, journal = {Cell Reports}, number = {3}, pages = {866 -- 877}, publisher = {Cell Press}, title = {{Optogenetic control of nodal signaling reveals a temporal pattern of nodal signaling regulating cell fate specification during gastrulation}}, doi = {10.1016/j.celrep.2016.06.036}, volume = {16}, year = {2016}, } @article{2271, abstract = {A class of valued constraint satisfaction problems (VCSPs) is characterised by a valued constraint language, a fixed set of cost functions on a finite domain. Finite-valued constraint languages contain functions that take on rational costs and general-valued constraint languages contain functions that take on rational or infinite costs. An instance of the problem is specified by a sum of functions from the language with the goal to minimise the sum. This framework includes and generalises well-studied constraint satisfaction problems (CSPs) and maximum constraint satisfaction problems (Max-CSPs). Our main result is a precise algebraic characterisation of valued constraint languages whose instances can be solved exactly by the basic linear programming relaxation (BLP). For a general-valued constraint language Γ, BLP is a decision procedure for Γ if and only if Γ admits a symmetric fractional polymorphism of every arity. For a finite-valued constraint language Γ, BLP is a decision procedure if and only if Γ admits a symmetric fractional polymorphism of some arity, or equivalently, if Γ admits a symmetric fractional polymorphism of arity 2. Using these results, we obtain tractability of several novel and previously widely-open classes of VCSPs, including problems over valued constraint languages that are: (1) submodular on arbitrary lattices; (2) bisubmodular (also known as k-submodular) on arbitrary finite domains; (3) weakly (and hence strongly) tree-submodular on arbitrary trees.}, author = {Kolmogorov, Vladimir and Thapper, Johan and Živný, Stanislav}, journal = {SIAM Journal on Computing}, number = {1}, pages = {1 -- 36}, publisher = {SIAM}, title = {{The power of linear programming for general-valued CSPs}}, doi = {10.1137/130945648}, volume = {44}, year = {2015}, } @article{256, abstract = {We show that a non-singular integral form of degree d is soluble over the integers if and only if it is soluble over ℝ and over ℚ_p for all primes p, provided that the form has at least (d − √d/2)2^d variables.
This improves on a longstanding result of Birch.}, author = {Browning, Timothy D and Prendiville, Sean M}, journal = {Journal für die reine und angewandte Mathematik}, number = {731}, publisher = {Walter de Gruyter}, title = {{Improvements in Birch's theorem on forms in many variables}}, doi = {10.1515/crelle-2014-0122}, volume = {2017}, year = {2015}, } @article{257, abstract = {For suitable pairs of diagonal quadratic forms in eight variables we use the circle method to investigate the density of simultaneous integer solutions and relate this to the problem of estimating linear correlations among sums of two squares.}, author = {Browning, Timothy D and Munshi, Ritabrata}, journal = {Forum Mathematicum}, number = {4}, pages = {2025 -- 2050}, publisher = {Walter de Gruyter GmbH}, title = {{Pairs of diagonal quadratic forms and linear correlations among sums of two squares}}, doi = {10.1515/forum-2013-6024}, volume = {27}, year = {2015}, } @inbook{258, abstract = {Given a number field k and a projective algebraic variety X defined over k, the question of whether X contains a k-rational point is both very natural and very difficult. In the event that the set X(k) of k-rational points is not empty, one can also ask how the points of X(k) are distributed. Are they dense in X under the Zariski topology? Are they dense in the set?}, author = {Browning, Timothy D}, booktitle = {Arithmetic and Geometry}, pages = {89 -- 113}, publisher = {Cambridge University Press}, title = {{A survey of applications of the circle method to rational points}}, doi = {10.1017/CBO9781316106877.009}, year = {2015}, } @article{259, abstract = {The Hasse principle and weak approximation are established for non-singular cubic hypersurfaces X over the function field F_q(t).}, author = {Browning, Timothy D and Vishe, Pankaj}, journal = {Geometric and Functional Analysis}, number = {3}, pages = {671 -- 732}, publisher = {Birkhäuser}, title = {{Rational points on cubic hypersurfaces over F_q(t)}}, doi = {10.1007/s00039-015-0328-5}, volume = {25}, year = {2015}, } @article{260, author = {Browning, Timothy D and Dietmann, Rainer and Heath-Brown, Roger}, journal = {Journal of the Institute of Mathematics of Jussieu}, number = {4}, publisher = {Cambridge University Press}, title = {{Erratum: Rational points on intersections of cubic and quadric hypersurfaces}}, doi = {10.1017/S1474748014000279}, volume = {14}, year = {2015}, } @article{802, abstract = {Glycoinositolphosphoceramides (GIPCs) are complex sphingolipids present at the plasma membrane of various eukaryotes with the important exception of mammals. In fungi, these glycosphingolipids commonly contain an α-mannose residue (Man) linked at position 2 of the inositol. However, several pathogenic fungi additionally synthesize zwitterionic GIPCs carrying an α-glucosamine residue (GlcN) at this position. In the human pathogen Aspergillus fumigatus, the GlcNα1,2IPC core (where IPC is inositolphosphoceramide) is elongated to Manα1,3Manα1,6GlcNα1,2IPC, which is the most abundant GIPC synthesized by this fungus. In this study, we identified an A. fumigatus N-acetylglucosaminyltransferase, named GntA, and demonstrate its involvement in the initiation of zwitterionic GIPC biosynthesis. Targeted deletion of the gene encoding GntA in A. fumigatus resulted in complete absence of zwitterionic GIPCs, a phenotype that could be reverted by episomal expression of GntA in the mutant.
The N-acetylhexosaminyltransferase activity of GntA was substantiated by production of N-acetylhexosamine-IPC in the yeast Saccharomyces cerevisiae upon GntA expression. Using an in vitro assay, GntA was furthermore shown to use UDP-N-acetylglucosamine as donor substrate to generate a glycolipid product resistant to saponification and to digestion by phosphatidylinositol-phospholipase C as expected for GlcNAcalpha1,2IPC. Finally, as the enzymes involved in mannosylation of IPC, GntA was localized to the Golgi apparatus, the site of IPC synthesis.}, author = {Engel, Jakob and Schmalhorst, Philipp S and Kruger, Anke and Muller, Christina and Buettner, Falk and Routier, Françoise}, journal = {Glycobiology}, number = {12}, pages = {1423 -- 1430}, publisher = {Oxford University Press}, title = {{Characterization of an N-acetylglucosaminyltransferase involved in Aspergillus fumigatus zwitterionic glycoinositolphosphoceramide biosynthesis}}, doi = {10.1093/glycob/cwv059}, volume = {25}, year = {2015}, } @article{814, abstract = {Human immunodeficiency virus type 1 (HIV-1) assembly proceeds in two stages. First, the 55 kilodalton viral Gag polyprotein assembles into a hexameric protein lattice at the plasma membrane of the infected cell, inducing budding and release of an immature particle. Second, Gag is cleaved by the viral protease, leading to internal rearrangement of the virus into the mature, infectious form. Immature and mature HIV-1 particles are heterogeneous in size and morphology, preventing high-resolution analysis of their protein arrangement in situ by conventional structural biology methods. Here we apply cryo-electron tomography and sub-tomogram averaging methods to resolve the structure of the capsid lattice within intact immature HIV-1 particles at subnanometre resolution, allowing unambiguous positioning of all α-helices. The resulting model reveals tertiary and quaternary structural interactions that mediate HIV-1 assembly. Strikingly, these interactions differ from those predicted by the current model based on in vitro-assembled arrays of Gag-derived proteins from Mason-Pfizer monkey virus. To validate this difference, we solve the structure of the capsid lattice within intact immature Mason-Pfizer monkey virus particles. Comparison with the immature HIV-1 structure reveals that retroviral capsid proteins, while having conserved tertiary structures, adopt different quaternary arrangements during virus assembly. The approach demonstrated here should be applicable to determine structures of other proteins at subnanometre resolution within heterogeneous environments.}, author = {Florian Schur and Hagen, Wim J and Rumlová, Michaela and Ruml, Tomáš and Müller, B and Kräusslich, Hans Georg and Briggs, John A}, journal = {Nature}, number = {7535}, pages = {505 -- 508}, publisher = {Nature Publishing Group}, title = {{Structure of the immature HIV-1 capsid in intact virus particles at 8.8 Å resolution}}, doi = {10.1038/nature13838}, volume = {517}, year = {2015}, } @article{815, abstract = {The polyprotein Gag is the primary structural component of retroviruses. Gag consists of independently folded domains connected by flexible linkers. Interactions between the conserved capsid (CA) domains of Gag mediate formation of hexameric protein lattices that drive assembly of immature virus particles. Proteolytic cleavage of Gag by the viral protease (PR) is required for maturation of retroviruses from an immature form into an infectious form.
Within the assembled Gag lattices of HIV-1 and Mason-Pfizer monkey virus (M-PMV), the C-terminal domain of CA adopts similar quaternary arrangements, while the N-terminal domain of CA is packed in very different manners. Here, we have used cryo-electron tomography and subtomogram averaging to study in vitro-assembled, immature virus-like Rous sarcoma virus (RSV) Gag particles and have determined the structure of CA and the surrounding regions to a resolution of ~8 Å. We found that the C-terminal domain of RSV CA is arranged similarly to HIV-1 and M-PMV, whereas the N-terminal domain of CA adopts a novel arrangement in which the upstream p10 domain folds back into the CA lattice. In this position the cleavage site between CA and p10 appears to be inaccessible to PR. Below CA, an extended density is consistent with the presence of a six-helix bundle formed by the spacer-peptide region. We have also assessed the effect of lattice assembly on proteolytic processing by exogenous PR. The cleavage between p10 and CA is indeed inhibited in the assembled lattice, a finding consistent with structural regulation of proteolytic maturation. }, author = {Schur, Florian and Dick, Robert and Hagen, Wim and Vogt, Volker and Briggs, John}, journal = {Journal of Virology}, number = {20}, pages = {10294 -- 10302}, publisher = {ASM}, title = {{The structure of immature virus like Rous sarcoma virus gag particles reveals a structural role for the p10 domain in assembly}}, doi = {10.1128/JVI.01502-15}, volume = {89}, year = {2015}, } @article{8242, author = {Einhorn, Lukas and Fazekas, Judit and Muhr, Martina and Schoos, Alexandra and Oida, Kumiko and Singer, Josef and Panakova, Lucia and Manzano-Szalai, Krisztina and Jensen-Jarolim, Erika}, issn = {0091-6749}, journal = {Journal of Allergy and Clinical Immunology}, number = {2}, publisher = {Elsevier}, title = {{Generation of recombinant FcεRIα of dog, cat and horse for component-resolved allergy diagnosis in veterinary patients}}, doi = {10.1016/j.jaci.2014.12.1263}, volume = {135}, year = {2015}, } @article{832, abstract = {Plants maintain capacity to form new organs such as leaves, flowers, lateral shoots and roots throughout their postembryonic lifetime. Lateral roots (LRs) originate from a few pericycle cells that acquire attributes of founder cells (FCs), undergo series of anticlinal divisions, and give rise to a few short initial cells. After initiation, coordinated cell division and differentiation occur, giving rise to lateral root primordia (LRP). Primordia continue to grow, emerge through the cortex and epidermal layers of the primary root, and finally a new apical meristem is established taking over the responsibility for growth of mature lateral roots [for detailed description of the individual stages of lateral root organogenesis see Malamy and Benfey (1997)]. To examine this highly dynamic developmental process and to investigate a role of various hormonal, genetic and environmental factors in the regulation of lateral root organogenesis, the real time imaging based analyses represent extremely powerful tools (Laskowski et al., 2008; De Smet et al., 2012; Marhavy et al., 2013 and 2014).
Herein, we describe a protocol for real time lateral root primordia (LRP) analysis, which enables the monitoring of an onset of the specific gene expression and subcellular protein localization during primordia organogenesis, as well as the evaluation of the impact of genetic and environmental perturbations on LRP organogenesis.}, author = {Peter Marhavy and Eva Benková}, journal = {Bio-protocol}, number = {8}, publisher = {Bio-protocol LLC}, title = {{Real time analysis of lateral root organogenesis in arabidopsis}}, doi = {10.21769/BioProtoc.1446}, volume = {5}, year = {2015}, } @article{8456, abstract = {The large majority of three-dimensional structures of biological macromolecules have been determined by X-ray diffraction of crystalline samples. High-resolution structure determination crucially depends on the homogeneity of the protein crystal. Overall ‘rocking’ motion of molecules in the crystal is expected to influence diffraction quality, and such motion may therefore affect the process of solving crystal structures. Yet, so far overall molecular motion has not directly been observed in protein crystals, and the timescale of such dynamics remains unclear. Here we use solid-state NMR, X-ray diffraction methods and μs-long molecular dynamics simulations to directly characterize the rigid-body motion of a protein in different crystal forms. For ubiquitin crystals investigated in this study we determine the range of possible correlation times of rocking motion, 0.1–100 μs. The amplitude of rocking varies from one crystal form to another and is correlated with the resolution obtainable in X-ray diffraction experiments.}, author = {Ma, Peixiang and Xue, Yi and Coquelle, Nicolas and Haller, Jens D. and Yuwen, Tairan and Ayala, Isabel and Mikhailovskii, Oleg and Willbold, Dieter and Colletier, Jacques-Philippe and Skrynnikov, Nikolai R. and Schanda, Paul}, issn = {2041-1723}, journal = {Nature Communications}, keywords = {General Biochemistry, Genetics and Molecular Biology, General Physics and Astronomy, General Chemistry}, publisher = {Springer Nature}, title = {{Observing the overall rocking motion of a protein in a crystal}}, doi = {10.1038/ncomms9361}, volume = {6}, year = {2015}, } @article{8457, abstract = {We review recent advances in methodologies to study microseconds‐to‐milliseconds exchange processes in biological molecules using magic‐angle spinning solid‐state nuclear magnetic resonance (MAS ssNMR) spectroscopy. The particularities of MAS ssNMR, as compared to solution‐state NMR, are elucidated using numerical simulations and experimental data. These simulations reveal the potential of MAS NMR to provide detailed insight into short‐lived conformations of biological molecules. Recent studies of conformational exchange dynamics in microcrystalline ubiquitin are discussed.}, author = {Ma, Peixiang and Schanda, Paul}, isbn = {9780470034590}, journal = {eMagRes}, number = {3}, pages = {699--708}, publisher = {Wiley}, title = {{Conformational exchange processes in biological systems: Detection by solid-state NMR}}, doi = {10.1002/9780470034590.emrstm1418}, volume = {4}, year = {2015}, } @article{848, abstract = {The nature of factors governing the tempo and mode of protein evolution is a fundamental issue in evolutionary biology. Specifically, whether or not interactions between different sites, or epistasis, are important in directing the course of evolution became one of the central questions. 
Several recent reports have scrutinized patterns of long-term protein evolution claiming them to be compatible only with an epistatic fitness landscape. However, these claims have not yet been substantiated with a formal model of protein evolution. Here, we formulate a simple covarion-like model of protein evolution focusing on the rate at which the fitness impact of amino acids at a site changes with time. We then apply the model to the data on convergent and divergent protein evolution to test whether or not the incorporation of epistatic interactions is necessary to explain the data. We find that convergent evolution cannot be explained without the incorporation of epistasis and the rate at which an amino acid state switches from being acceptable at a site to being deleterious is faster than the rate of amino acid substitution. Specifically, for proteins that have persisted in modern prokaryotic organisms since the last universal common ancestor for one amino acid substitution approximately ten amino acid states switch from being accessible to being deleterious, or vice versa. Thus, molecular evolution can only be perceived in the context of rapid turnover of which amino acids are available for evolution.}, author = {Usmanova, Dinara and Ferretti, Luca and Povolotskaya, Inna and Vlasov, Peter and Kondrashov, Fyodor}, journal = {Molecular Biology and Evolution}, number = {2}, pages = {542 -- 554}, publisher = {Oxford University Press}, title = {{A model of substitution trajectories in sequence space and long-term protein evolution}}, doi = {10.1093/molbev/msu318}, volume = {32}, year = {2015}, } @article{8495, abstract = {In this note, we consider the dynamics associated to a perturbation of an integrable Hamiltonian system in action-angle coordinates in any number of degrees of freedom and we prove the following result of ``micro-diffusion'': under generic assumptions on $ h$ and $ f$, there exists an orbit of the system for which the drift of its action variables is at least of order $ \sqrt {\varepsilon }$, after a time of order $ \sqrt {\varepsilon }^{-1}$. The assumptions, which are essentially minimal, are that there exists a resonant point for $ h$ and that the corresponding averaged perturbation is non-constant. The conclusions, although very weak when compared to usual instability phenomena, are also essentially optimal within this setting.}, author = {Bounemoura, Abed and Kaloshin, Vadim}, issn = {0002-9939}, journal = {Proceedings of the American Mathematical Society}, number = {4}, pages = {1553--1560}, publisher = {American Mathematical Society}, title = {{A note on micro-instability for Hamiltonian systems close to integrable}}, doi = {10.1090/proc/12796}, volume = {144}, year = {2015}, } @article{8498, abstract = {In the present note we announce a proof of a strong form of Arnold diffusion for smooth convex Hamiltonian systems. Let ${\mathbb T}^2$ be a 2-dimensional torus and B2 be the unit ball around the origin in ${\mathbb R}^2$ . Fix ρ > 0. Our main result says that for a 'generic' time-periodic perturbation of an integrable system of two degrees of freedom $H_0(p)+\varepsilon H_1(\theta,p,t),\quad \ \theta\in {\mathbb T}^2,\ p\in B^2,\ t\in {\mathbb T}={\mathbb R}/{\mathbb Z}$ , with a strictly convex H0, there exists a ρ-dense orbit (θε, pε, t)(t) in ${\mathbb T}^2 \times B^2 \times {\mathbb T}$ , namely, a ρ-neighborhood of the orbit contains ${\mathbb T}^2 \times B^2 \times {\mathbb T}$ . Our proof is a combination of geometric and variational methods. 
The fundamental elements of the construction are the usage of crumpled normally hyperbolic invariant cylinders from [9], flower and simple normally hyperbolic invariant manifolds from [36] as well as their kissing property at a strong double resonance. This allows us to build a 'connected' net of three-dimensional normally hyperbolic invariant manifolds. To construct diffusing orbits along this net we employ a version of the Mather variational method [41] equipped with weak KAM theory [28], proposed by Bernard in [7].}, author = {Kaloshin, Vadim and Zhang, K}, issn = {0951-7715}, journal = {Nonlinearity}, keywords = {Mathematical Physics, General Physics and Astronomy, Applied Mathematics, Statistical and Nonlinear Physics}, number = {8}, pages = {2699--2720}, publisher = {IOP Publishing}, title = {{Arnold diffusion for smooth convex systems of two and a half degrees of freedom}}, doi = {10.1088/0951-7715/28/8/2699}, volume = {28}, year = {2015}, } @article{8499, abstract = {We consider the cubic defocusing nonlinear Schrödinger equation in the two dimensional torus. Fix s>1. Recently Colliander, Keel, Staffilani, Tao and Takaoka proved the existence of solutions with s-Sobolev norm growing in time. We establish the existence of solutions with polynomial time estimates. More exactly, there is c>0 such that for any K≫1 we find a solution u and a time T such that $\|u(T)\|_{H^s}\ge K\,\|u(0)\|_{H^s}$. Moreover, the time T satisfies the polynomial bound 0 |
fa95f37a9d83817b |
The Schrödinger cat male and female states are discussed. The Wigner and Q–functions of generalized correlated light are given. A linear transformator of photon statistics is reviewed.
V. I. Man’ko
Lebedev Physical Institute
53 Leninsky Prospekt, Moscow 117333, Russia
1 Introduction
The integral of motion which is quadratic in position and momentum was found for the classical oscillator with time-dependent frequency by Ermakov [1]. Two time-dependent integrals of motion which are linear forms in position and momentum were found for the classical and quantum oscillator with time-dependent frequency in [2]; for a charge moving in a uniform magnetic field varying in time, this was done in [3]. For multimode nonstationary oscillatory systems, such new integrals of motion, both of Ermakov's type (quadratic in positions and momenta) and linear in positions and momenta, generalizing the results of [2], were constructed in [4]. Below we consider the parametric oscillator using the integrals of motion. The Wigner function of multimode squeezed light is studied using such special functions as multivariable Hermite polynomials.
The theory of the parametric oscillator is the appropriate framework for the problem of the creation of photons from the vacuum in a resonator with moving walls (moving mirrors), a phenomenon based on the existence of Casimir forces (the so-called nonstationary Casimir effect). A resonator with moving boundaries (moving mirrors, or media with a time-dependent refractive index) also produces squeezing of the light quadratures. In high-energy physics, very fast particle collisions may produce new types of states of boson fields (pions, for example) which are the squeezed and correlated states studied in quantum optics but almost unknown in particle physics, both theoretically and experimentally.
2 Multimode Quadratic Systems
The generic nonstationary linear system has the Hamiltonian
$$\hat H=\tfrac{1}{2}\,\mathbf{Q}\,B(t)\,\mathbf{Q}+\mathbf{C}(t)\,\mathbf{Q},$$
where we use the 2N–vectors $\mathbf{Q}=(p_1,\ldots,p_N,q_1,\ldots,q_N)$ and $\mathbf{C}(t)$, and $B(t)$ is a symmetric 2N×2N matrix.
The real symplectic matrix $\Lambda(t)$, which determines the linear integrals of motion $\mathbf{I}(t)=\Lambda(t)\mathbf{Q}+\boldsymbol{\Delta}(t)$, is the solution to the system of equations
$$\dot\Lambda=-\Lambda\,\Sigma\,B(t),\qquad \Lambda(0)=1,$$
where the real antisymmetrical matrix $\Sigma$ is the 2N–dimensional analog of the Pauli matrix $-i\sigma_2$, and the vector $\boldsymbol{\Delta}(t)$ is the solution to the system of equations
$$\dot{\boldsymbol{\Delta}}=-\Lambda\,\Sigma\,\mathbf{C}(t),\qquad \boldsymbol{\Delta}(0)=0.$$
If at the initial time $t=0$ one has the Wigner function of the system in the form $W(\mathbf{Q},0)=W_0(\mathbf{Q})$,
the Wigner function of the system at time $t$ is (since the density operator is an integral of motion)
$$W(\mathbf{Q},t)=W_0\bigl(\Lambda(t)\,\mathbf{Q}+\boldsymbol{\Delta}(t)\bigr).$$
This formula may be interpreted as the transformation of an input Wigner function into an output Wigner function due to the symplectic quadrature transform (2). An optical linear transformator of the photon distribution function using this output Wigner function is suggested in [7].
The Hamiltonian (1) may be rewritten in terms of creation and annihilation operators,
where we use the 2N–vectors $\mathbf{A}=(a_1,\ldots,a_N,a_1^{\dagger},\ldots,a_N^{\dagger})$.
The complex matrix of the corresponding linear integrals of motion is the solution to a system of equations of the same form as above,
in which the imaginary antisymmetric matrix that is the 2N×2N analog of the Pauli matrix $\sigma_2$ appears, and the shift vector is the solution to the companion system of equations.
Analogously to the Wigner function evolution, if at the initial time $t=0$ one has the Q–function of the system in the form $Q(\mathbf{A},0)=Q_0(\mathbf{A})$,
the Q–function of the system at time $t$ is obtained by substituting the linear integrals of motion for the arguments of the initial function.
For the time-independent Hamiltonian (1), the matrix is the exponential
$$\Lambda(t)=e^{-t\,\Sigma B},$$
and the vector is
$$\boldsymbol{\Delta}(t)=-\int_0^t \Lambda(s)\,\Sigma\,\mathbf{C}\,ds.$$
For the time-independent Hamiltonian (7), the matrix and the vector are given by the analogous exponential and integral expressions in terms of the complex matrices introduced above.
For time-dependent linear systems, the Wigner function of the generic squeezed and correlated state (the generalized correlated state [8]) has Gaussian form; it was calculated in [5].
Thus the evolution of the Wigner function and the Q–function for systems with quadratic Hamiltonians is given, for any state, by the following prescription. Given the Wigner function $W(p,q,0)$ at the initial moment of time, the Wigner function at time $t$ is obtained by the replacement
$$W(p,q,t)=W\bigl(p_0(p,q,t),\,q_0(p,q,t),\,0\bigr),$$
where the time-dependent arguments $p_0$ and $q_0$ are the linear integrals of motion of the quadratic system found in [5], [4], and [9]. This formula was given as an integral with a δ-function kernel in [10]. The linear integrals of motion describe the initial values of the classical trajectories in the phase space of the system. The same ansatz is used for the Q–function. Namely, given the Q–function of the quadratic system at the initial moment of time, the Q–function at time $t$ is given by the replacement of its arguments by the components of the 2N–vector integral of motion which is linear in the annihilation and creation operators. This ansatz follows from the statement that the density operator of the Hamiltonian system is an integral of motion, and its matrix elements in any basis must depend on the appropriate integrals of motion.
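As an illustration of this prescription, the following minimal sketch (not part of the original text; it assumes a single harmonic-oscillator mode with ℏ = m = ω = 1 and a coherent-state input, both arbitrary choices) evolves a Gaussian Wigner function simply by substituting the classical integrals of motion for its arguments:

```python
import numpy as np

def w0(q, p):
    # initial Wigner function: a coherent state displaced to (q, p) = (2, 0)
    return np.exp(-((q - 2.0) ** 2 + p ** 2)) / np.pi

def wigner(q, p, t):
    # replace the arguments by the linear integrals of motion, i.e. by the
    # initial point of the classical trajectory passing through (q, p) at time t
    q0 = q * np.cos(t) - p * np.sin(t)
    p0 = p * np.cos(t) + q * np.sin(t)
    return w0(q0, p0)

qs = np.linspace(-5, 5, 201)
q, p = np.meshgrid(qs, qs)
for t in (0.0, np.pi / 2, np.pi):
    w = wigner(q, p, t)
    norm = np.trapz(np.trapz(w, qs, axis=1), qs)
    print(f"t = {t:.2f}: norm = {norm:.6f}")  # stays 1: the flow is symplectic
```

No integration of an equation of motion is needed at all; the entire time dependence sits in the arguments, exactly as stated above.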
3 Multimode Mixed Correlated Light
The most general mixed squeezed state of the N–mode light with a Gaussian density operator is described by the Wigner function of the generic Gaussian form,
where the 2N parameters $\langle p_j\rangle$ and $\langle q_j\rangle$, combined into the vector $\langle\mathbf{Q}\rangle$, are the average values of the quadratures,
and the real symmetric dispersion matrix $\mathbf{D}$ consists of the $2N^2+N$ variances
$$\sigma_{Q_aQ_b}=\tfrac{1}{2}\,\langle\hat Q_a\hat Q_b+\hat Q_b\hat Q_a\rangle-\langle\hat Q_a\rangle\langle\hat Q_b\rangle.$$
They obey the uncertainty-relation constraints [5]. According to the previous section, the Wigner function of the parametric linear system with the initial value (17) is
the photon distribution function of the state (17),
$$\mathcal{P}(\mathbf{n})=\mathrm{Tr}\,\bigl[\hat\rho\,|\mathbf{n}\rangle\langle\mathbf{n}|\bigr],$$
where the state $|\mathbf{n}\rangle$ is the photon number state, was calculated in [11], [12] and it is
The trace (21) may be calculated using the explicit form of the Wigner function of the operator $|\mathbf{n}\rangle\langle\mathbf{n}|$ (see [5]), which is the product of Wigner functions of the one-dimensional oscillator expressed in terms of Laguerre polynomials.
The function entering here is a multidimensional Hermite polynomial. The probability to have no photons is
where we introduced the matrix
and the matrix
The argument of the Hermite polynomial is
and the 2N–dimensional unitary matrix
is introduced, in which $E$ is the N×N identity matrix. Also, we use the notation
The mean photon number for the j–th mode is expressed in terms of the photon quadrature means and dispersions,
$$\langle n_j\rangle=\tfrac{1}{2}\bigl(\sigma_{p_jp_j}+\sigma_{q_jq_j}+\langle p_j\rangle^2+\langle q_j\rangle^2-1\bigr).$$
The photon distribution function for the transformed state (20) is given by the same formulae (22), (24)–(28), but with the changed dispersion matrix
and quadrature means.
Thus we have the linear transformator of photon statistics suggested in [7].
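The transformation law behind this statement can be written compactly; in the following display the symbols $M$ (the symplectic matrix of the quadrature transform (2)) and $\boldsymbol{\Delta}$ (its shift vector) are assumed names, introduced here only for illustration:

$$\mathbf{D}^{\mathrm{out}}=M\,\mathbf{D}\,M^{\mathrm{T}},\qquad
\langle\mathbf{Q}\rangle^{\mathrm{out}}=M\,\langle\mathbf{Q}\rangle+\boldsymbol{\Delta},$$

so that the output photon distribution follows from formulas (22), (24)–(28) evaluated with $\mathbf{D}^{\mathrm{out}}$ and $\langle\mathbf{Q}\rangle^{\mathrm{out}}$ in place of the input quantities.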
Let us now introduce a complex 2N–vector
Thus, if the Wigner function (17) is given, one has the Q–function. Also, if one has the Q–function (32), i.e., the matrix and the vector y, the Wigner function may be obtained from the relations
Multivariable Hermite polynomials describe the photon distribution function for multimode mixed and pure correlated light [11], [13], [14]. Nonclassical states of light may be created due to the nonstationary Casimir effect [15], [16], and the multimode oscillator is a model describing the behaviour of squeezed and correlated photons.
4 Parametric Oscillator
For the parametric oscillator with the Hamiltonian
$$\hat H=\tfrac{1}{2}\,\hat p^2+\tfrac{1}{2}\,\Omega^2(t)\,\hat q^2,$$
where we take $\hbar=m=1$, there exists the time-dependent integral of motion found in [2],
$$\hat A(t)=\frac{i}{\sqrt{2}}\,\bigl[\varepsilon(t)\,\hat p-\dot\varepsilon(t)\,\hat q\bigr],$$
in which the complex function $\varepsilon(t)$ obeys the classical equation of motion $\ddot\varepsilon+\Omega^2(t)\,\varepsilon=0$ with the initial conditions $\varepsilon(0)=1$, $\dot\varepsilon(0)=i$, so that the Wronskian condition $\varepsilon^*\dot\varepsilon-\varepsilon\dot\varepsilon^*=2i$ holds and the operators satisfy the commutation relation
$$[\hat A(t),\hat A^{\dagger}(t)]=1.$$
It is easy to show that packet solutions of the Schrödinger equation may be introduced and interpreted as coherent states [2], since they are eigenstates of the operator (35),
$$\hat A(t)\,|\alpha,t\rangle=\alpha\,|\alpha,t\rangle,$$
where $|0,t\rangle$, annihilated by $\hat A(t)$, is the analog of the ground state of the oscillator and $\alpha$ is a complex number.
The variances of the position and momentum of the parametric oscillator in the state (38), (39) are
$$\sigma_q=\tfrac{1}{2}\,|\varepsilon(t)|^2,\qquad \sigma_p=\tfrac{1}{2}\,|\dot\varepsilon(t)|^2,$$
and the correlation coefficient of the position and momentum has the value corresponding to the minimization of the Schrödinger uncertainty relation [17],
$$\sigma_q\,\sigma_p\,(1-r^2)=\tfrac{1}{4},\qquad r=\frac{\sigma_{qp}}{\sqrt{\sigma_q\,\sigma_p}},\qquad \sigma_{qp}=\tfrac{1}{2}\,\mathrm{Re}\,\bigl[\varepsilon^*(t)\,\dot\varepsilon(t)\bigr].$$
If $|\varepsilon(t)|<1$, we have squeezing in the photon quadrature components.
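These statements are easy to check numerically. The following sketch (illustrative and not part of the original text; the modulation Ω²(t) = 1 + cos(2t)/2 is an arbitrary choice) integrates the classical equation for ε(t) and verifies that the Schrödinger uncertainty relation stays saturated along the whole evolution:

```python
import numpy as np
from scipy.integrate import solve_ivp

def omega2(t):
    return 1.0 + 0.5 * np.cos(2.0 * t)   # illustrative parametric modulation

def rhs(t, y):
    # real and imaginary parts of eps and of its time derivative
    er, ei, vr, vi = y
    w2 = omega2(t)
    return [vr, vi, -w2 * er, -w2 * ei]

# initial conditions eps(0) = 1, eps'(0) = i from the Wronskian condition
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0, 1.0], rtol=1e-10, atol=1e-12)
eps = sol.y[0] + 1j * sol.y[1]
deps = sol.y[2] + 1j * sol.y[3]
sq = 0.5 * np.abs(eps) ** 2               # sigma_q
sp = 0.5 * np.abs(deps) ** 2              # sigma_p
sqp = 0.5 * np.real(np.conj(eps) * deps)  # sigma_qp
print("max |sq*sp - sqp^2 - 1/4| =", np.max(np.abs(sq * sp - sqp**2 - 0.25)))
print("min sigma_q =", sq.min())          # values below 1/2 indicate squeezing
```

The first printed number should be numerically zero, reflecting the exact saturation of the uncertainty relation by these states.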
The analogs of the orthogonal and complete system of states which are the excited states of the stationary oscillator are obtained by expansion of (38) into a power series in $\alpha$. We have
and these squeezed and correlated number states are eigenstates of the invariant $\hat A^{\dagger}(t)\hat A(t)$. In the case of a periodic dependence of the frequency on time, the classical solution in the stable regime may be taken in the Floquet form
$$\varepsilon(t)=e^{i\kappa t}\,u(t),$$
where $u(t)$ is a periodic function of time. Then the states (42) are quasienergy states realizing a unitary irreducible representation of the time-translation symmetry group of the Hamiltonian, and the parameter $\kappa$ determines the quasienergy spectrum. Unstable classical solutions give a continuous spectrum of quasienergy states.
The partial cases of the parametric oscillator are the free motion ($\Omega=0$), the stationary harmonic oscillator ($\Omega=1$), and the repulsive oscillator ($\Omega^2=-1$). The solutions obtained above are described by the function $\varepsilon(t)$, which is equal to $1+it$ for the free particle, $e^{it}$ for the usual oscillator, and $\cosh t+i\sinh t$ for the repulsive oscillator.
Another normalized solution to the Schrödinger equation,
$$|\alpha,t\rangle_{+}=N_{+}\bigl(|\alpha,t\rangle+|-\alpha,t\rangle\bigr),\qquad N_{+}=\bigl[2\,(1+e^{-2|\alpha|^2})\bigr]^{-1/2},$$
is the even coherent state [18] (the Schrödinger cat male state). The odd coherent state of the parametric oscillator (the Schrödinger cat female state),
$$|\alpha,t\rangle_{-}=N_{-}\bigl(|\alpha,t\rangle-|-\alpha,t\rangle\bigr),\qquad N_{-}=\bigl[2\,(1-e^{-2|\alpha|^2})\bigr]^{-1/2},$$
satisfies the Schrödinger equation and is the eigenstate of the integral of motion $\hat A^2(t)$ (as is the even coherent state) with the eigenvalue $\alpha^2$. These states are one-mode examples of the squeezed and correlated Schrödinger cat states constructed in [19]. The experimental creation of Schrödinger cat states is discussed in [20]. These states belong to the family of nonclassical superposition states studied in [21], [22].
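The parity structure of these superpositions can be verified directly in the Fock basis. A minimal sketch (not part of the original text; the value α = 2 and the basis truncation are arbitrary choices) shows that the even state populates only even photon numbers and the odd state only odd ones:

```python
import numpy as np
from math import factorial

alpha, nmax = 2.0, 30
n = np.arange(nmax)
# Fock-space amplitudes <n|alpha> of an ordinary coherent state (real alpha)
coh = np.exp(-alpha**2 / 2) * alpha**n / np.sqrt([float(factorial(k)) for k in n])

for sign, name in ((+1, "even cat"), (-1, "odd cat")):
    psi = coh + sign * coh * (-1.0) ** n  # <n|alpha> + sign * <n|-alpha>
    psi /= np.linalg.norm(psi)            # implements the normalization N_+-
    print(name, np.round(psi**2, 4)[:8])  # probabilities vanish on odd/even n
```

The alternating zeros in the printed photon-number probabilities are the fingerprint of the even and odd coherent states.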
• [1] P. Ermakov, Univ. Izv. Kiev 20, N9, 1 (1880).
• [2] I. A. Malkin and V. I. Man’ko, Phys. Lett. A 32, 243 (1970).
• [3] I. A. Malkin, V. I. Man’ko, and D. A. Trifonov, Phys. Lett. A 30, 414 (1969).
• [4] I. A. Malkin, V. I. Man’ko, and D. A. Trifonov, J. Math. Phys. 14, 576 (1973).
• [5] V. V. Dodonov and V. I. Man’ko, Invariants and Evolution of Nonstationary Quantum Systems, Proceedings of Lebedev Physical Institute 183, ed. M. A. Markov (Nova Science, Commack, New York, 1989).
• [6] I. A. Malkin and V. I. Man’ko, Dynamical Symmetries and Coherent States of Quantum Systems (Nauka Publishers, Moscow, 1979) [in Russian].
• [7] V. V. Dodonov, O. V. Man’ko, V. I. Man’ko, and P. G. Polynkin, Talk at the International Conference on Coherent and Nonlinear Optics, St.-Petersburg, July 1995 (to be published in SPIE Proceedings).
• [8] E. C. G. Sudarshan, Charles B. Chiu, and G. Bhamathi, Phys. Rev. A 52, 43 (1995).
• [9] I. A. Malkin, V. I. Man’ko, and D. A. Trifonov, Phys. Rev. D 2, 1371 (1970).
• [10] V. V. Dodonov, O. V. Man’ko, and V. I. Man’ko, Proceedings of Lebedev Physical Institute 191, ed. M. A. Markov (Nauka Publishers, Moscow, 1989) p.171 [English translation: J. Russ. Laser Research (Plenum Press, New York) 16, 1 (1995)].
• [11] V. V. Dodonov, O. V. Man’ko, and V. I. Man’ko, Phys. Rev. A 50, 813 (1994).
• [12] V. V. Dodonov, V. I. Man’ko, and V. V. Semjonov, Nuovo Cim. B 83, 145 (1984).
• [13] V. V. Dodonov and V. I. Man’ko, J. Math. Phys. 35, 4277 (1994).
• [14] V. V. Dodonov, J. Phys. A: Math. Gen. 27, 6191 (1994).
• [15] V. I. Man’ko, J. Sov. Laser Research (Plenum Press, New York) 12 N5 (1991).
• [16] V. V. Dodonov, A. B. Klimov, and V. I. Man’ko, Phys. Lett. A 149, 225 (1990).
• [17] E. Schrödinger, Ber. Kgl. Akad. Wiss. Berlin, 24, 296 (1930).
• [18] V. V. Dodonov, I. A. Malkin, and V. I. Man’ko, Physica 72, 597 (1974).
• [19] V. V. Dodonov, V. I. Man’ko, and D. E. Nikonov, Phys. Rev. A 51, 3328 (1995).
• [20] S. Haroche, Nuovo Cim. B 110, 545 (1995).
• [21] M. M. Nieto and D. R. Truax, Phys. Rev. Lett. 71, 2843 (1993).
• [22] J. Janszky, Talk at the IV International Conference on Squeezed States and Uncertainty Relations, Shanxi, China, June 1995.
|
dd859f22059b80f9 | Schrodinger equation
From Conservapedia
Mathematical forms
General time-dependent form
The Schrodinger equation may generally be written
$$i\hbar\frac{\partial}{\partial t}|\Psi\rangle=\hat H|\Psi\rangle$$
where $i$ is the imaginary unit,
$\hbar$ is Planck's constant divided by $2\pi$,
$|\Psi\rangle$ is the quantum mechanical state or wavefunction (expressed here in Dirac notation), and
$\hat H$ is the Hamiltonian operator.
For a single particle of mass $m$ moving in one dimension in a potential $V(x)$, the equation takes the form
$$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}+V(x)\psi=i\hbar\frac{\partial \psi}{\partial t}$$
from which Schrodinger's equation and the eigenvalue problem $\hat H\Psi = E\Psi$ can be easily seen.
Eigenvalue problems
$$E\psi=\hat H\psi$$
Here, $E$ is energy, $\hat H$ is once again the Hamiltonian operator, and $\psi$ is the energy eigenstate for $E$.
Examples for the time-independent equation
Free particle in one dimension
In this case, V(x) = 0 and so we see that the solution to the Schrodinger equation must be
$$\psi = Ae^{ikx}$$
with energy given by
$$E=\frac{\hbar^2 k^2}{2m}.$$
Physically, this corresponds to a wave travelling with a momentum given by $\hbar k$, where $k$ can in principle take any value.
Particle in a box
Consider a one-dimensional box of width a, where the potential energy is 0 inside the box and infinite outside of it. This means that ψ must be zero outside the box. One can verify (by substituting into the Schrodinger equation) that
$$\psi = \sin(kx)$$
is a solution if $k = n\pi/a$, where $n$ is any positive integer. Thus, rather than the continuum of solutions for the free particle, for the particle in a box there is a set of discrete solutions with energies given by
$$E_n=\frac{\hbar^2 k^2}{2m}=\frac{\hbar^2 n^2\pi^2}{2ma^2}.$$
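As a quick numerical cross-check of these discrete energies (a minimal sketch, not part of the original article; it assumes ℏ = m = 1 and box width a = 1, so that E_n = n²π²/2), one can diagonalize the finite-difference Hamiltonian on the interior grid points:

```python
import numpy as np

N, a = 1000, 1.0
dx = a / (N + 1)
# interior grid points only; the wavefunction vanishes on the walls
main = np.full(N, 1.0 / dx**2)       # diagonal of -(1/2) d^2/dx^2
off = np.full(N - 1, -0.5 / dx**2)   # off-diagonals of the 3-point stencil
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E_num = np.linalg.eigvalsh(H)[:4]
E_exact = np.arange(1, 5) ** 2 * np.pi**2 / 2
print(np.round(E_num, 4))
print(np.round(E_exact, 4))          # agreement at the 1e-5 relative level
```

The lowest eigenvalues reproduce the n² pattern of the analytic spectrum, confirming the quantization condition above.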
|
ac9dfe3e39f60acd |
Quantum Mechanics: Wavepackets
By Dragica Vasileska1, Gerhard Klimeck2
1. Arizona State University 2. Purdue University
In physics, a wave packet is an envelope or packet containing an arbitrary number of wave forms. In quantum mechanics the wave packet is ascribed a special significance: it is interpreted to be a "probability wave" describing the probability that a particle or particles in a particular state will be measured to have a given position and momentum.
By applying the Schrödinger equation in quantum mechanics it is possible to deduce the time evolution of a system, similar to the process of the Hamiltonian formalism in classical mechanics. The wave packet is a mathematical solution to the Schrödinger equation. The integral of the squared modulus of the wave packet over a region is interpreted as the probability of finding the particle in that region.
In the coordinate representation of the wave (such as the Cartesian coordinate system) the position of the wave is given by the position of the packet. Moreover, the narrower the spatial wave packet, and therefore the better defined the position of the wave packet, the larger the spread in the momentum of the wave. This trade-off between spread in position and spread in momentum is one example of the Heisenberg uncertainty principle.
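To make the spreading and the position–momentum trade-off described above concrete, here is a minimal numerical sketch (not part of the original page; the grid size, packet width σ = 1, and mean momentum k0 = 2 are illustrative choices, with ℏ = m = 1). The free-particle Schrödinger evolution is applied exactly in Fourier space:

```python
import numpy as np

N, L = 1024, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
k0, sigma = 2.0, 1.0
psi0 = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)   # Gaussian wave packet
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * (L / N))    # normalize to 1
phi0 = np.fft.fft(psi0)

for t in (0.0, 2.0, 5.0):
    phi = phi0 * np.exp(-1j * k**2 * t / 2)           # exact free evolution
    psi = np.fft.ifft(phi)
    prob = np.abs(psi)**2 * (L / N)
    xm = np.sum(x * prob)
    dx = np.sqrt(np.sum((x - xm)**2 * prob))
    pk = np.abs(phi)**2 / np.sum(np.abs(phi)**2)
    pm = np.sum(k * pk)
    dp = np.sqrt(np.sum((k - pm)**2 * pk))
    print(f"t={t:.1f}  <x>={xm:5.2f}  dx={dx:.3f}  dp={dp:.3f}  dx*dp={dx*dp:.3f}")
```

The packet centre moves with the group velocity k0, the position spread dx grows with time while dp stays fixed, and dx·dp starts at the Heisenberg minimum 1/2 and increases thereafter.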
• Wavepackets Description
• Homework Assignment on Wavepackets
Cite this work
Researchers should cite this work as follows:
• Dragica Vasileska; Gerhard Klimeck (2008), "Quantum Mechanics: Wavepackets,"
In This Series
1. Reading Material: Wavepackets
2. Homework Assignment: Wavepackets
|
8cc310a309b76295 | From Wikipedia, the free encyclopedia
In physics, a breather is a nonlinear wave in which energy concentrates in a localized and oscillatory fashion. This contrasts with the expectations derived from the corresponding linear system, which for infinitesimal amplitudes tends towards an even distribution of initially localized energy.
A discrete breather is a breather solution on a nonlinear lattice.
The term breather originates from the characteristic that most breathers are localized in space and oscillate (breathe) in time.[1] The opposite situation — oscillation in space and localization in time — is also denoted as a breather.
This breather pseudospherical surface corresponds to a solution of a non-linear wave-equation.
Pseudospherical breather surface
The sine-Gordon standing breather is a coupled kink-antikink two-soliton solution swinging in time.
Large amplitude moving sine-Gordon breather.
A breather is a localized periodic solution of either continuous media equations or discrete lattice equations. The exactly solvable sine-Gordon equation[1] and the focusing nonlinear Schrödinger equation[2] are examples of one-dimensional partial differential equations that possess breather solutions.[3] Discrete nonlinear Hamiltonian lattices in many cases support breather solutions.
Breathers are solitonic structures. There are two types of breathers: standing or traveling ones.[4] Standing breathers correspond to localized solutions whose amplitude varies in time (they are sometimes called oscillons). A necessary condition for the existence of breathers in discrete lattices is that the breather main frequency and all its multiples are located outside of the phonon spectrum of the lattice.
Example of a breather solution for the sine-Gordon equation
The sine-Gordon equation is the nonlinear dispersive partial differential equation
$$\frac{\partial^2 u}{\partial t^2}-\frac{\partial^2 u}{\partial x^2}+\sin u=0,$$
with the field u a function of the spatial coordinate x and time t.
An exact solution found by using the inverse scattering transform is:[1]
$$u=4\arctan\!\left(\frac{\sqrt{1-\omega^2}}{\omega}\,\frac{\cos(\omega t)}{\cosh\!\left(\sqrt{1-\omega^2}\,x\right)}\right),$$
which, for ω < 1, is periodic in time t and decays exponentially when moving away from x = 0.
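A quick numerical sanity check of this solution (not part of the original article; the grid ranges, the value ω = 0.6, and the step h are illustrative) inserts the formula into the sine-Gordon equation with central finite differences:

```python
import numpy as np

w = 0.6                                 # breather frequency, omega < 1
r = np.sqrt(1.0 - w * w)

def u(x, t):
    # the sine-Gordon breather given above
    return 4.0 * np.arctan(r / w * np.cos(w * t) / np.cosh(r * x))

x = np.linspace(-10, 10, 401)[:, None]
t = np.linspace(-5, 5, 401)[None, :]
h = 1e-4
utt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
residual = np.max(np.abs(utt - uxx + np.sin(u(x, t))))
print(residual)   # small (~1e-6), limited by floating-point cancellation
```

The residual is at the level of the finite-difference round-off error, confirming that the expression solves the PDE.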
Example of a breather solution for the nonlinear Schrödinger equation
The focusing nonlinear Schrödinger equation [5] is the dispersive partial differential equation:
$$i\,\frac{\partial u}{\partial t}+\frac{\partial^2 u}{\partial x^2}+2\,|u|^2\,u=0,$$
with u a complex field as a function of x and t. Further, i denotes the imaginary unit.
One of the breather solutions is [2]
$$u(x,t)=\left(\frac{2\,b^2\cosh\theta+2\,i\,b\,\sqrt{2-b^2}\,\sinh\theta}{2\cosh\theta-\sqrt{2}\,\sqrt{2-b^2}\,\cos(a\,b\,x)}-1\right)a\,e^{i a^2 t},\qquad \theta=a^2\,b\,\sqrt{2-b^2}\;t,$$
which gives breathers periodic in space x and approaching the uniform value a when moving away from the focus time t = 0. These breathers exist for values of the modulation parameter b less than √2. Note that a limiting case of the breather solution is the Peregrine soliton.[6]
References and notes
1. ^ a b c M. J. Ablowitz; D. J. Kaup; A. C. Newell; H. Segur (1973). "Method for solving the sine-Gordon equation". Physical Review Letters. 30 (25): 1262–1264. Bibcode:1973PhRvL..30.1262A. doi:10.1103/PhysRevLett.30.1262.
2. ^ a b N. N. Akhmediev; V. M. Eleonskiǐ; N. E. Kulagin (1987). "First-order exact solutions of the nonlinear Schrödinger equation". Theoretical and Mathematical Physics. 72 (2): 809–818. Bibcode:1987TMP....72..809A. doi:10.1007/BF01017105. Translated from Teoreticheskaya i Matematicheskaya Fizika 72(2): 183–196, August, 1987.
3. ^ N. N. Akhmediev; A. Ankiewicz (1997). Solitons, non-linear pulses and beams. Springer. ISBN 978-0-412-75450-0.
4. ^ Miroshnichenko A, Vasiliev A, Dmitriev S. Solitons and Soliton Collisions.
5. ^ The focusing nonlinear Schrödinger equation has a nonlinearity parameter κ of the same sign as the dispersive term proportional to ∂²u/∂x², and has soliton solutions. In the de-focusing nonlinear Schrödinger equation the nonlinearity parameter is of opposite sign.
6. ^ Kibler, B.; Fatome, J.; Finot, C.; Millot, G.; Dias, F.; Genty, G.; Akhmediev, N.; Dudley, J.M. (2010). "The Peregrine soliton in nonlinear fibre optics". Nature Physics. 6 (10): 790. Bibcode:2010NatPh...6..790K. doi:10.1038/nphys1740. |
ec458d0f85e8652b | Monday, April 22, 2013
Listen to Spacetime
Quantum gravity researcher at work.
Achim calls it “a quantum version of yard sticks.”
saibod said...
Without having looked into Achim's work in detail, I wonder: how does it relate to Connes' reconstruction theorem, which proves that a Riemannian manifold can be recovered from its underlying spectral triple?
Plato Hagel said...
Yes I like this approach Bee.
The conversion process still has to have specifics and the theoretic involved in terms of its construction would have to have some association for the correlation to work.
For example:
This conversion process is very important.
Another example would be:
See: Listen to the decay of a god particle
It is exciting for me to see your demonstration in concert with the approach of quantum gravity.
Plato Hagel said...
You might like this link below as well.
kneemo said...
Indeed the paper appears to describe techniques dual to those used by Connes in his noncommutative geometrical studies of the standard model and gravity. The two approaches are related through the fact that the Dirac operator can be thought of as the square-root of the Laplacian. Kempf prefers to use the Laplacian while Connes uses the Dirac operator in his spectral triples (A,H,D) to encode the spectral geometry. In Connes' spectral triple, A is the operator algebra of functions over the given manifold, H is the Hilbert space on which it acts, and D is the Dirac operator whose spectrum is used to recover the structure of the manifold, much like Kempf uses the spectrum of the Laplacian to recover the "shape" of the manifold.
In Connes' approach to the standard model and gravity, to recover the gauge group of the standard model he considers a product space M x F, where F is a finite geometry, related to the "sprinkling of points" mentioned in Kempf's paper that has a matrix interpretation. Specifically, Connes considers the algebra of functions A over the 6-point space in his model, where A=C+H+M_3(C). Here, C is the set of complex numbers, H the algebra of quaternions (transforming in M_2(C)) and M_3(C) the set of 3x3 matrices over the complex numbers, acting on the one-, two- and three-point spaces respectively. Classically, the manifolds which these encode are the unit circle, CP^1 and CP^2, each discretized by the eigenvalues of the matrix operators in the algebra of functions over the finite geometry F.
In string theory, such a finite geometry also arises in the guise of internal worldvolume degrees of freedom. In this framework, gauge groups can be seen as the internal degree of freedom at every point on the world-volume of N-coincident branes. The gauge symmetry is the freedom that a fundamental string has in deciding which of the N identical branes it can end on. In Connes' model, there would be a total of six branes encoded by the spectral triple of his finite geometry F, giving the U(1), SU(2) and SU(3) symmetry groups of the standard model.
Uncle Al said...
"The correlations of the quantum vacuum are encoded in the Greensfunction which is a function of pairs of points." Green’s function opens Newton (e.g., terrain gravitometer sweeps to reconstruct buried dense ore or low density petroleum). To my knowledge, Green functions are not validated for general relativity. Green functions are all coordinate squares, removing chirality (versus Ashtekar). Green functions are defective if they uncreate fermionic matter parity violations.
Quantum gravitation and SUSY will founder until somebody discovers why persuasive maths do not empirically apply. Euclid plus perturbation is terrestrial cartography, and still fails to navigate the high seas, because rigorously derived Euclid is wrong in context. Green functions for linearized theory are established. Green functions describe complete non-linear theory to any required accuracy. An odd polynomial to any number of terms is not a sine wave. It fails at boundaries.
Sabine Hossenfelder said...
Hi Saibod,
Achim submits the following: "Connes' spectral triple has much more information than just the spectrum of the Dirac operator. Namely, to know the spectral triple is also to know how the Dirac operator acts on concrete spinor fields. Having this much more information makes it way easier to reconstruct a manifold. The difficult part is to show under which conditions the spectrum (or spectra) *alone* suffice(s) to determine a manifold."
Phillip Helbig said...
Greensfunction ---> Green function.
The first form is not correct and never was. The second is the preferred form now, e.g. Schrödinger equation, Maxwell equations, not Schrödinger's equation, not Maxwell's equation (though the possessive forms are grammatically correct).
I remember that Max Tegmark commented in a talk at the 1994 Texas Symposium in Munich that he had looked up the official recommendations and "Green function" is correct, though he found that rather funny.
Sabine Hossenfelder said...
Hi Phillip,
I also find that rather funny, but I'll keep it in mind. Though I'm afraid that if I would write "Green function" nobody would know what I mean, which somewhat defeats the purpose of language. It's like that, after some years of complaining about the way the Swedes write dates that nobody knows how to read, I found out that it's the "international standard" for dates they're using... Best,
Phillip Helbig said...
I'm pretty sure that there is no-one who knows what a Greensfunction is but doesn't know what a Green function is. The fact that it is capitalized hints that it is a proper name.
Christine said...
This comment has been removed by the author.
Christine said...
This comment has been removed by the author.
Juan F. said...
Awesome post Sabine! Should we call you The Quantum Gravity Doctor? ;)
Giotis said...
Nice picture Sabine...
I guess finally your mother's dream became true. You are a 'real' doctor now with a stethoscope:-)
Christine said...
This comment has been removed by the author.
Christine said...
Oops, sorry for my badly written comments. Anyway, "Green function" or "Green's function" are the terms that I know. Never heard of Greensfunctions...
Sabine Hossenfelder said...
Hi Christine,
He's only considering manifolds without boundary. I've been a little brief on the details for the sake of readability, but it arguably goes on the expenses of clarity, sorry about that. I can recommend Achim's paper though, I found it very well written and understandable. It's also not very long. Best,
Sabine Hossenfelder said...
Hi Giotis,
There are some Dr med's in our family. I don't think it ever was my mother's dream I join them. My younger brother and I, we'd sometimes sneak into the doctor's office on weekends and play with the equipment. I've always been more interested in basic research though. And my younger brother, he's a mechanical engineer now. Best,
Sabine Hossenfelder said...
Yes, you can call me the Quantum Gravity Doctor. The patient is noncompliant :p Best,
Plato Hagel said...
Hearing the Shape of the Drum
Thanks Bee
Markus Maute said...
This comment has been removed by the author.
Raisonator said...
the Dirac operator acts on concrete spinor fields."
This sounds like an interesting mathematical question but in terms of physics one needs the spinors anyway to have fermions and to be able to reconstruct the Standard Model.
The Laplacian alone will not do. One just gets the bosonic part of the spectral action.
Moreover, to do serious physics at least an almost commutative spectral triple is required anyway, rendering the overall manifold non-commutative.
Also, if just considering the Laplacian, I don't think one gets the gauge fields which are part of the bosonic action.
Regarding spacetime in isolation, I regard it as a major step backwards and completely against the very spirit of unification (of spacetime and matter), in particular given the sheer success of the noncommutative standard model.
Well, that's all based on my limited understanding of the subject, so please correct me if I am wrong.
DaveS said...
You go, girl and get that Quantum Gravity!
Zephir said...
Quantum gravity is the theory supposed to bridge the dimensional scales of quantum mechanics and general relativity, i.e. the human observer scales. I can see no reason why common chemistry and biology couldn't fall within the subject of quantum gravity as well.
Robert L. Oldershaw said...
Last line from Alan Lightman's review of Smolin's new book in the NY Times Book Review section of the Sunday paper.
"For if we must appeal to the existence of other universes - unknown and unknowable - to explain out universe, then science has progressed into a cul-de-sac with no scientific escape."
Science 1 ; pseudo-science 0 |
d2fd05c3cf5bb0b2 | Life on the lattice — Thoughts on lattice QCD, particle physics and the world at large (Georg v. Hippel, http://latticeqcd.blogspot.com/)

Book Review: "Lattice QCD — Practical Essentials" (12 January 2017)

There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials (http://www.springer.com/la/book/9789402409970) by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are altogether omitted. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis.
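(The hybrid Monte Carlo algorithm mentioned in the review can be illustrated in a few lines. The following toy sketch is not taken from the book; the one-variable quartic action, the step size, and the trajectory length are all illustrative choices.)

```python
import numpy as np

# Toy hybrid Monte Carlo: sample exp(-S) with S = phi^2/2 + lam*phi^4
rng = np.random.default_rng(7)
lam = 0.5
S = lambda q: 0.5 * q * q + lam * q**4
dS = lambda q: q + 4 * lam * q**3

def hmc_step(q, n_md=10, eps=0.1):
    p = rng.normal()                 # refresh the fictitious momentum
    h0 = 0.5 * p * p + S(q)
    qn, pn = q, p
    pn -= 0.5 * eps * dS(qn)         # leapfrog: initial half kick,
    for _ in range(n_md - 1):
        qn += eps * pn               # drift,
        pn -= eps * dS(qn)           # full kick
    qn += eps * pn
    pn -= 0.5 * eps * dS(qn)         # final half kick
    dh = 0.5 * pn * pn + S(qn) - h0
    return (qn, True) if rng.random() < np.exp(-dh) else (q, False)

q, acc, hist = 0.0, 0, []
for _ in range(20000):
    q, ok = hmc_step(q)
    acc += ok
    hist.append(q)
print("acceptance:", acc / 20000, " <phi^2> =", np.mean(np.square(hist[2000:])))
```

The Metropolis accept/reject step at the end of each trajectory corrects exactly for the discretization error of the leapfrog integrator, which is the key idea behind HMC.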
Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

Lattice 2016, Day Six (30 July 2016)

The final day of the conference started with a review talk by Claudio Pica on lattice simulations trying to chart the fundamental physics beyond the Standard Model. The problem with the SM is perhaps to some extent how well it works, given that we know it must be incomplete. One of the main contenders for replacing it is the notion of strong dynamics at a higher energy scale giving rise to the Higgs boson as a composite particle. The most basic "technicolor" theories of this kind fail because they cannot account for the relatively large masses of the second- and third-generation quarks. To avoid that problem, the coupling of the technicolor gauge theory must not be running, but "walking" slowly from high to low energy scales, which has given rise to a veritable industry of lattice simulations investigating the β function of various gauge theories coupled to various numbers of fermions in various representations. The Higgs can then be either a dilaton associated with the breaking of conformal symmetry, which would naturally couple like a Standard Model Higgs, or a pseudo-Goldstone boson associated with the breaking of some global flavour symmetry. So far, nothing very conclusive has resulted, but of course the input from experiment at the moment only consists of limits ruling some models out, but not allowing for any discrimination between those models that aren't ruled out.

A specific example of BSM physics, viz. strongly interacting dark matter, was presented in a talk by Enrico Rinaldi. If there is a new strongly-coupled interaction, as suggested by the composite Higgs models, then besides the Higgs there will also be other bound states, some of which may be stable and provide a dark matter candidate. While the "dark" nature of dark matter requires such a bound state to be neutral, the constituents might interact with the SM sector, allowing for the production and detection of dark matter.
Many different models of composite dark matter have been considered, and the main limits currently come from the non-detection of dark matter in searches, which put limits on the "hadron-structure" observables of the dark matter candidates, such as their σ-terms and charge radii.

David Kaplan gave a talk on a new perspective on chiral gauge theories, the lattice formulation of which has always been a persistent problem, largely due to the Nielsen-Ninomiya theorem. However, the fermion determinant of chiral gauge theories is already somewhat ill-defined even in the continuum. A way to make it well-defined has been proposed by Alvarez-Gaumé et al. through the addition of an ungauged right-handed fermion. On the lattice, the U(1)_A anomaly is found to emerge as the remnant of the explicit breaking of chiral symmetry by e.g. the Wilson term in the limit of vanishing lattice spacing. Attempts at realizing ungauged mirror fermions using domain wall fermions with a gauge field constrained to near one domain wall have failed, and a realization using the gradient flow in the fifth dimension turns the mirror fermions into "fluff". A new realization along the lines of the overlap operator gives a lattice operator very similar to that of Alvarez-Gaumé by coupling the mirror fermion to a fixed point of the gradient flow, which is a pure gauge.

After the coffee break, Tony Hey gave a very entertaining, if somewhat meandering, talk about "Richard Feynman, Data-Intensive Science and the Future of Computing" going all the way from Feynman's experiences at Los Alamos to AI singularity scenarios and the security aspects of self-driving cars.

The final plenary talk was the review talk on machines and algorithms by Peter Boyle. The immediate roadmap for new computer architectures shows increases of around 400 times in the single-precision performance per node, and a two-fold increase in the bandwidth of interconnects, and this must be taken into account in algorithm design and implementation in order to achieve good scaling behaviour. Large increases in chip performance are to be expected from three-dimensional arrangement of units, which will allow thicker and shorter copper wires, although there remain engineering problems to solve, such as how to efficiently get the heat out of such chips. In terms of algorithms, multigrid solvers are now becoming available for a larger variety of fermion formulations, leading to potentially great increases in performance near the chiral and continuum limits. Multilevel integration methods, which allow for an exponential reduction of the noise, also look interesting, although at the moment these work only in the quenched theory.

The IAC announced that Lattice 2018 will take place at Michigan State University. Elvira Gamiz as the chair of the Lattice 2017 LOC extended an invitation to the lattice community to come to Granada for Lattice 2017 (http://www.lattice2017.es/), which will take place in the week 18-24 June 2017. And with that, and a round of well-deserved applause for the organizers, the conference closed.

My further travel plans are of interest only to a small subset of my readers, and need not be further elaborated upon in this venue.
Lattice 2016, Day Five (29 July 2016)

Today was the day of finite temperature and density, on which the general review talk was delivered by Heng-Tong Ding. While in the meantime agreement has been reached on the transition temperature, the nature of the transition (crossover) and the equation of state at the physical quark masses, on which different formulations differed a lot in the past, the Columbia plot of the nature of the transition as a function of the light and strange quark masses still remains to be explored, and there are discrepancies between results obtained in different formulations. On the topic of U(1)_A restoration (on which I do have a layman's question: to my understanding U(1)_A is broken by the axial anomaly, which to my understanding arises from the path integral measure - so why should one expect the symmetry to be restored at high temperature? The situation is quite different from dynamical spontaneous symmetry breaking, as far as I understand), there is no evidence for restoration so far. A number of groups have taken to using the gradient flow as a tool to perform relatively cheap investigations of the equation of state. There are also new results from the different approaches to finite-density QCD, including cumulants from the Taylor-expansion approach, which can be related to heavy-ion observables, and new ways of stabilizing complex Langevin dynamics.

This was followed by two topical talks. The first, by Seyong Kim, was on the subject of heavy flavours at finite temperature. Heavy flavours are one of the most important probes of the quark-gluon plasma, and J/ψ suppression has served as a diagnostic tool of QGP formation for a long time. To understand the influence of high temperatures on the survival of quarkonium states and on the transport properties of heavy flavours in the QGP, knowledge of the spectral functions is needed. Unfortunately, extracting these from a finite number of points in Euclidean time is an ill-posed problem, especially so when the time extent is small at high temperature. The methods used to get at them nevertheless, such as the maximum entropy method or Bayesian fits, need to use some kind of prior information, introducing the risk of a methodological bias leading to systematic errors that may be not only quantitative, but even qualitative; as an example, MEM shows P-wave bottomonium to melt around the transition temperature, whereas a newer Bayesian method shows it to survive, so clearly more work is needed.

The second topical talk was Kurt Langfeld speaking about the density-of-states method. This method is based on determining a function ρ(E), which is essentially the path integral of δ(S[φ]-E), such that the partition function can be written as the Laplace transform of ρ, which can be generalized to the case of actions with a sign problem, where the partition function can then be written as the Fourier transform of a function P(s). An algorithm to compute such functions exists in the form of what looks like a sort of microcanonical simulation in a window [E-δE;E+δE] and determines the slope of ρ at E, whence ρ can be reconstructed.
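(To get a feeling for how a density-of-states determination works in practice, here is a minimal sketch of the classic Wang-Landau algorithm for the 2d Ising model — an algorithm related in spirit to, though not identical with, the LLR method described in the talk. All parameters are illustrative, and a real implementation would also monitor the flatness of the visit histogram.)

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def site_energy(s, i, j):
    # energy of the bonds touching site (i, j); J = 1, periodic boundaries
    return -s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                       + s[i, (j + 1) % L] + s[i, (j - 1) % L])

E = int(sum(site_energy(spins, i, j) for i in range(L) for j in range(L)) // 2)
levels = np.arange(-2 * N, 2 * N + 1, 4)   # allowed energies are multiples of 4
lng = {int(e): 0.0 for e in levels}        # running estimate of ln rho(E)
f = 1.0                                    # modification factor

for sweep in range(400000):
    i, j = rng.integers(L, size=2)
    dE = int(-2 * site_energy(spins, i, j))
    arg = lng[E] - lng[E + dE]             # accept with min(1, rho(E)/rho(E'))
    if arg >= 0 or rng.random() < np.exp(arg):
        spins[i, j] *= -1
        E += dE
    lng[E] += f                            # penalize the visited level
    if (sweep + 1) % 100000 == 0:
        f /= 2.0                           # slowly freeze the estimate
print({e: round(lng[e] - lng[-2 * N], 1) for e in (-128, -120, -112, 0)})
```

Because the update penalizes already-visited energies, the random walk is pushed into rarely-visited regions, which is what lets density-of-states methods resolve strongly suppressed configurations that a plain Markov chain would essentially never reach.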
Ergodicity is ensured by having the different windows overlap and running in parallel, with a possibility of "replica exchange" between the processes running for neighbouring windows when configurations within the overlap between them are generated. The examples shown, e.g. for the Potts model, looked quite impressive in that the method appears able to resolve double-peak structures even when the trough between the peaks is suppressed by many orders of magnitude, such that a Markov process would have no chance of crossing between the two probability peaks.

After the coffee break, Aleksi Kurkela reviewed the phenomenology of heavy ions. The flow properties that were originally taken as a sign of hydrodynamics having set in are now also observed in pp collisions, which seem unlikely to be hydrodynamical. In understanding and interpreting these results, the pre-equilibration evolution is an important source of uncertainty; the current understanding seems to be that the system goes from an overoccupied to an underoccupied state before thermalizing, making different descriptions necessary at different times. At early times, simulations of classical Yang-Mills theory on a lattice in proper-time/rapidity coordinates are used, whereas later a quasiparticle description and kinetic theory can be applied; all this seems to be qualitative so far.

The energy momentum tensor, which plays an important role in thermodynamics and hydrodynamics, was the topic of the last plenary of the day, which was given by Hiroshi Suzuki. Translation invariance is broken on the lattice, so the Ward-Takahashi identity for the energy-momentum tensor picks up an O(a) violation term, which can become O(1) by radiative corrections. As a consequence, three different renormalization factors are needed to renormalize the energy-momentum tensor. One way of getting at these is via the shifted boundary conditions of Giusti and Meyer, another is the use of the gradient flow at short flow times, and there are first results from both methods.

The parallel sessions of the afternoon concluded the parallel programme.

Lattice 2016, Days Three and Four (28 July 2016)

Following the canonical script for lattice conferences, yesterday was the day without plenaries. Instead, the morning was dedicated to parallel sessions (including my own talk), and the afternoon was free time with the option of taking one of several arranged excursions.

I went on the excursion to Salisbury cathedral (which is notable both for its fairly homogeneous and massive architectural ensemble, and for being home to one of four original copies of the Magna Carta) and Stonehenge (which in terms of diameter seems to be much smaller than I had expected from photos).

Today began with the traditional non-lattice theory talk, which was given by Monika Blanke, who spoke about the impact of lattice QCD results on CKM phenomenology. Since quarks cannot be observed in isolation, the extraction of CKM matrix elements from experimental results always requires knowledge of the appropriate hadronic matrix elements of the currents involved in the measured reaction.
Ran Zhou complemented this with a review talk about heavy-flavour results from the lattice, where there are new results from a variety of different approaches (NRQCD, HQET, Fermilab and Columbia RHQ formalisms), which can serve as useful and important cross-checks on each other's methodological uncertainties.

Next came a talk by Amy Nicholson on neutrinoless double-β decay results from the lattice. Neutrinoless double-β decays are possible if neutrinos are Majorana particles, which would help to explain the small masses of the observed left-handed neutrinos through the see-saw mechanism pushing the right-handed neutrinos off to near the GUT scale. Treating the double-β decay in the framework of a chiral effective theory, the leading-order matrix element required is that of the process π⁻→π⁺e⁻e⁻, for which there are first results in lattice QCD. The NLO process would have disconnected diagrams, but cannot contribute to the 0⁺→0⁺ transitions which are studied experimentally, whereas the NNLO process involves two-nucleon operators and still remains to be studied in greater detail on the lattice.

After the coffee break, Agostino Patella reviewed the hot topic of QED corrections to hadronic observables. There are currently two main methods for dealing with QED in the context of lattice simulations: either to simulate QCD+QED directly (usually at unphysically large electromagnetic coupling, followed by an extrapolation to the physical value of α=1/137), or to expand in powers of α and to measure only the resulting correlation functions (which will be four-point functions or higher) in lattice QCD. Both approaches have been used to obtain some already very impressive results on isospin-breaking QED effects in the hadronic spectrum, as shown already in the spectroscopy review talk. There are, however, still a number of theoretical issues connected to the regularization of IR modes that relate to the Gauss law constraint, which would forbid the existence of a single charged particle (such as a proton) in a periodic box. The prescriptions to evade this problem all lead to a non-commutativity of limits requiring the infinite-volume limit to be taken before other limits (such as the continuum or chiral limits): QED_TL, which omits the global zero mode of the photon field, is non-local and does not have a transfer matrix; QED_L, which omits the spatial zero modes on each timeslice, has a transfer matrix, but is still non-local and renormalizes in a non-standard fashion, such that it does not have a non-relativistic limit; the use of a massive photon leads to a local theory with softly broken gauge symmetry, but still requires the infinite-volume limit to be taken before removing the photon mass. Going beyond hadron masses to decays introduces new IR problems, which need to be treated in the Bloch-Nordsieck way, leading to potentially large logarithms.
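For illustration, here is a small sketch (my own, with invented conventions for the array shapes) of what the QED_TL and QED_L zero-mode prescriptions do to a photon field in momentum space:

```python
import numpy as np

def project_qedTL(A):
    """QED_TL: subtract the single global zero mode of the photon field.

    A has shape (4, T, L, L, L): Lorentz index mu, then the t, x, y, z sites.
    """
    return A - A.mean(axis=(1, 2, 3, 4), keepdims=True)

def project_qedL(A):
    """QED_L: remove the spatial zero mode (vec k = 0) on every timeslice."""
    Ak = np.fft.fftn(A, axes=(2, 3, 4))   # Fourier transform in space only
    Ak[:, :, 0, 0, 0] = 0.0               # drop vec k = 0 for each mu and t
    return np.real(np.fft.ifftn(Ak, axes=(2, 3, 4)))
```

Removing vec k = 0 on every timeslice separately is precisely what keeps a transfer matrix in time while making the theory non-local in space.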
The 2016 Ken Wilson Lattice Award was awarded to Antonin Portelli for his outstanding contributions to our understanding of electromagnetic effects on hadron properties. Antonin was one of the driving forces behind the BMW collaboration's effort to determine the proton-neutron mass difference, which resulted in a Science paper exhibiting one of the most frequently shown and impressive spectrum plots at this conference.

In the afternoon, parallel sessions took place, and in the evening there was a (very nice) conference dinner at the Southampton F.C. football stadium.

Lattice 2016, Day Two (26 July 2016)

Hello again from Lattice 2016 at Southampton. Today's first plenary talk was the review of nuclear physics from the lattice given by Martin Savage. Doing nuclear physics from first principles in QCD is obviously very hard, but also necessary in order to truly understand nuclei in theoretical terms. Examples of needed theory predictions include the equation of state of dense nuclear matter, which is important for understanding neutron stars, and the nuclear matrix elements required to interpret future searches for neutrinoless double-β decay in terms of fundamental quantities. The problems include the huge number of required quark-line contractions and the exponentially decaying signal-to-noise ratio, but there are theoretical advances that increasingly allow one to bring these under control. The main competing procedures are more or less direct applications of the Lüscher method to multi-baryon systems, and the HALQCD method of computing a nuclear potential from Bethe-Salpeter amplitudes and solving the Schrödinger equation for that potential. There has been a lot of progress in this field, and there are now first results for nuclear reaction rates.

Next, Mike Endres spoke about new simulation strategies for lattice QCD. One of the major problems in going to very fine lattice spacings is the well-known phenomenon of critical slowing down, i.e. the divergence of the autocorrelation times with some negative power of the lattice spacing, which is particularly severe for the topological charge (a quantity that cannot change at all in the continuum limit), leading to the phenomenon of "topology freezing" in simulations at fine lattice spacings. To overcome this problem, changes in the boundary conditions have been proposed: open boundary conditions that allow topological charge to move into and out of the system, and non-orientable boundary conditions that destroy the notion of an integer topological charge. An alternative route lies in algorithmic modifications such as metadynamics, where a bias potential is introduced to disfavour revisiting configurations, so as to forcibly sample across the potential wells of different topological sectors over time, or multiscale thermalization, where a Markov chain is first run at a coarse lattice spacing to obtain well-decorrelated configurations, and each of those is then subjected to a refining operation to obtain a (non-thermalized) gauge configuration at half the lattice spacing, each of which can then hopefully be thermalized by a short sequence of Monte Carlo update operations.
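The metadynamics idea is easiest to see in a toy setting; below is a minimal sketch (entirely my own construction, not the algorithm discussed in the talk) of a Langevin walker in a double-well potential, with the position playing the role of the topological charge: deposited Gaussians gradually fill the well the walker is in and push it over the barrier.

```python
import numpy as np

rng = np.random.default_rng(2)

def metadynamics_walk(n_steps=50_000, dt=1e-3, T=0.15, w=0.05, sigma=0.2, stride=250):
    """Langevin walker in the double well V(x) = (x^2 - 1)^2 at temperature T,
    plus a history-dependent bias of deposited Gaussians (height w, width sigma)."""
    x, centers, traj = 0.0, [], []
    for n in range(n_steps):
        c = np.asarray(centers)
        # bias force = -d/dx of sum_i w * exp(-(x - c_i)^2 / (2 sigma^2))
        bias_force = (np.sum((x - c) / sigma**2 * w * np.exp(-(x - c)**2 / (2 * sigma**2)))
                      if centers else 0.0)
        force = -4.0 * x * (x**2 - 1.0) + bias_force
        x += force * dt + np.sqrt(2.0 * T * dt) * rng.normal()
        if n % stride == 0:
            centers.append(x)   # deposit a new Gaussian at the current position
        traj.append(x)
    return np.array(traj)

# without the bias, crossings between the wells at x = -1 and x = +1 are
# suppressed by ~exp(-DeltaV/T) and are rare on the scale of this run;
# with the bias, the filled-in wells let the walker move between them freely
```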
As another example of new algorithmic ideas, Shinji Takeda presented tensor networks, which are mathematical objects that assign a tensor to each site of a lattice, with lattice links denoting the contraction of tensor indices. An example is given by the rewriting of the partition function of the Ising model that is at the heart of the high-temperature expansion, where the sum over the spin variables is exchanged for a sum over link variables taking values of 0 or 1. One of the applications of tensor networks in field theory is that they allow for an implementation of the renormalization group, based on performing a tensor decomposition along the lines of a singular value decomposition, which can be truncated, and contracting the resulting approximate tensor decomposition into new tensors living on a coarser grid. Iterating this procedure until only one lattice site remains allows the evaluation of partition functions without running into any sign problems and at only O(log V) effort.
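As a concrete check of the first step of this rewriting, the following snippet (my own toy example) builds the local Ising tensor from the character expansion exp(β·s·s') = Σ_k W[s,k]·W[s',k] and verifies that contracting the resulting network on a 2×2 periodic lattice reproduces the brute-force partition function; the SVD whose truncation drives the renormalization-group step is shown at the end.

```python
import numpy as np
from itertools import product

beta = 0.4
# character expansion of the bond weight: exp(beta*s*s') = sum_k W[s,k]*W[s',k]
c, s = np.sqrt(np.cosh(beta)), np.sqrt(np.sinh(beta))
W = np.array([[c, s],    # row 0: spin +1
              [c, -s]])  # row 1: spin -1
T = np.einsum('au,al,ad,ar->uldr', W, W, W, W)  # site tensor, one index per link

# brute-force partition function on a 2x2 lattice with periodic boundaries
Z_exact = 0.0
for conf in product([0, 1], repeat=4):
    sg = np.array([1, -1])[np.array(conf)].reshape(2, 2)
    E = sum(sg[x, y] * (sg[(x + 1) % 2, y] + sg[x, (y + 1) % 2])
            for x in range(2) for y in range(2))
    Z_exact += np.exp(beta * E)

# the same partition function as a fully contracted tensor network
# (each repeated index letter is one link of the 2x2 torus)
Z_tn = np.einsum('injm,kmln,jqip,lpkq->', T, T, T, T)
assert np.isclose(Z_tn, Z_exact)

# the coarse-graining step is based on a truncated SVD of the site tensor,
# here viewed as a matrix grouping the (u,l) and (d,r) indices
U, S, Vh = np.linalg.svd(T.reshape(4, 4))
```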
After the coffee break, Sara Collins gave the review talk on hadron structure. This is also a field in which a lot of progress has been made recently, with most of the sources of systematic error either under control (e.g. by performing simulations at or near the physical pion mass) or at least well understood (e.g. excited-state and finite-volume effects). The isovector axial charge g_A of the nucleon, which for a long time was a bit of an embarrassment to lattice practitioners, since it stubbornly refused to approach its experimental value, is now understood to be particularly severely affected by excited-state effects, and once these are well enough suppressed or properly accounted for, the situation looks quite promising. This lends much larger credibility to lattice predictions for the scalar and tensor nucleon charges, for which little or no experimental data exists. The electromagnetic form factors are also in much better shape than one or two years ago, with the electric Sachs form factor coming out close to experiment (but still with insufficient precision to resolve the conflict between the experimental electron-proton scattering and muonic hydrogen results), while the magnetic Sachs form factor now shows a trend to undershoot experiment. Going beyond isovector quantities (in which disconnected diagrams cancel), the progress in simulation techniques for disconnected diagrams has enabled the first computation of the purely disconnected strangeness form factors. The sigma term σ_πN comes out smaller on the lattice than it does in experiment, which still needs investigation, and the average momentum fraction ⟨x⟩ still needs to become the subject of an effort similar to the one the nucleon charges have received.

In keeping with the pattern of having large review talks immediately followed by a related topical talk, Huey-Wen Lin was next with a talk on the Bjorken-x dependence of the parton distribution functions (PDFs). While the PDFs are defined on the lightcone, which is not readily accessible on the lattice, a large-momentum effective theory formulation allows one to obtain them as the infinite-momentum limit of finite-momentum parton distribution amplitudes. First studies show interesting results, but the renormalization still remains to be performed.

After lunch, there were parallel sessions, of which I attended the ones into which most of the (g-2) talks had been collected; these showed quite a rate of progress, in particular in the treatment of the disconnected contributions.

In the evening, the poster session took place.

Lattice 2016, Day One (25 July 2016)

Hello from Southampton, where I am attending the Lattice 2016 conference.

I arrived yesterday safe and sound, but unfortunately too late to attend the welcome reception. Today started off early and quite well with a full English breakfast, however.

The conference programme was opened with a short address by the university's Vice-President of Research, who made a point of pointing out that he, like 93% of UK scientists, had voted to remain in the EU - an interesting testimony to the political state of affairs, I think.

The first plenary talk of the conference was a memorial to the scientific legacy of Peter Hasenfratz, who died earlier this year, delivered by Urs Wenger. Peter Hasenfratz was one of the pioneers of lattice field theory, and hearing of his groundbreaking achievements is one of those increasingly rare occasions when I get to feel very young: when he organized the first lattice symposium in 1982, he sent out individual hand-written invitations, and the early lattice reviews he wrote were composed at a time when most results were obtained in the quenched approximation. But his achievements are still very much current, amongst other things in the form of fixed-point actions as a realization of the Ginsparg-Wilson relation, which gave rise to the booming interest in chiral fermions.

This was followed by the review of hadron spectroscopy by Chuan Liu. The contents of the spectroscopy talks have by now shifted away from the ground-state spectrum of stable hadrons, the calculation of which has become more of a benchmark task, and towards more complex issues, such as the proton-neutron mass difference (which requires the treatment of isospin-breaking effects both from QED and from the difference in the bare masses of the up and down quarks) or the spectrum of resonances (which requires a thorough study of the volume dependence of excited-state energy levels via the Lüscher formalism). The former is required as part of the physics answer to the ageless question of why anything exists at all, and the latter is called for in particular by the still pressing question of the nature of the XYZ states.

Next came a talk by David Wilson on a more specific spectroscopy topic, namely resonances in coupled-channel scattering. Getting these right requires not only extensions of the Lüscher formalism, but also the extraction of very large numbers of energy levels via the generalized eigenvalue problem.
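For readers unfamiliar with the latter: given a matrix of correlators C_ij(t) = ⟨O_i(t) O_j†(0)⟩, one solves C(t) v_n = λ_n(t, t0) C(t0) v_n, and the eigenvalues decay as exp(-E_n (t - t0)). A minimal sketch (my own, with invented array conventions):

```python
import numpy as np
from scipy.linalg import eigh

def gevp_effective_energies(C, t0, t, dt=1):
    """Effective energies from the generalized eigenvalue problem.

    C: array of shape (T, N, N) of real, symmetric correlator matrices C_ij(t);
    solves C(t) v = lambda C(t0) v and forms E_n = log(lambda_n(t)/lambda_n(t+dt))/dt.
    """
    lam_t = np.sort(eigh(C[t], C[t0], eigvals_only=True))[::-1]
    lam_tdt = np.sort(eigh(C[t + dt], C[t0], eigvals_only=True))[::-1]
    return np.log(lam_t / lam_tdt) / dt   # one effective energy per state
```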
After the coffee break, Hartmut Wittig reviewed the lattice efforts at determining the hadronic contributions to the anomalous magnetic moment (g-2)_μ of the muon from first principles. This is a very topical problem, as the next generation of muon experiments will reduce the experimental error by a factor of four or more, which will require a correspondingly large reduction in the theoretical uncertainties in order to interpret the experimental results. Getting to this level of accuracy requires getting the hadronic vacuum polarization contribution to sub-percent accuracy (which requires full control of both finite-volume and cut-off effects, and a reasonably accurate estimate for the disconnected contributions) and the hadronic light-by-light scattering contribution to an accuracy of better than 10% (which one way or another requires the calculation of a four-point function, including a reasonable estimate for the disconnected contributions). There has been good progress towards both of these goals from a number of different collaborations, and the generally good overall agreement between results obtained using widely different formulations bodes well for the overall reliability of the lattice results, but there are still many obstacles to overcome.

The last plenary talk of the day was given by Sergei Dubovsky, who spoke about efforts to derive a theory of the QCD string. As with most stringy talks, I have to confess to being far too ignorant to give a good summary; what I took home is that there is some kind of string worldsheet theory with Goldstone bosons that can be used to describe the spectrum of large-N_c gauge theory, and that there are a number of theoretical surprises there.

Since the plenary programme is being streamed on the web (http://www.southampton.ac.uk/lattice2016/plenary-streaming/), by the way, even those of you who cannot attend the conference can do without my no doubt quite biased and very limited summaries and hear and see the talks for yourselves.

After lunch, parallel sessions took place. I found the sequence of talks by Stefan Sint, Alberto Ramos and Rainer Sommer about a precise determination of α_s(M_Z) using the Schrödinger functional and the gradient-flow coupling very interesting.

Fundamental Parameters from Lattice QCD, Last Days (15 September 2015)

The last few days of our scientific programme were quite busy for me, since I had agreed to give the summary talk on the final day.
I therefore did not get around to blogging, and will keep this much-delayed summary rather short.

On Wednesday, we had a talk by Michele Della Morte on non-perturbatively matched HQET on the lattice and its use in extracting the b-quark mass, and a talk by Jeremy Green on the lattice measurement of the nucleon's strange electromagnetic form factors (which are purely disconnected quantities).

On Thursday, Sara Collins gave a review of heavy-light hadron spectra and decays, and Mike Creutz presented arguments for why the question of whether the up-quark is massless is scheme-dependent (because the sum and the difference of the light quark masses are protected by symmetries, but will in general renormalize differently).

On Friday, I gave the summary of the programme. The main themes that I identified were the question of how to estimate systematic errors and how to treat them in averaging procedures, the issues of isospin breaking and scale-setting ambiguities as major obstacles on the way to sub-percent overall precision, and the need for improved communication between the "producers" and "consumers" of lattice results. In the closing discussion, the point was raised that for groups like CKMfitter and UTfit the correlations between different lattice quantities are very important, and that lattice collaborations should provide the covariance matrices of the final results for the different observables that they publish wherever possible.

Fundamental Parameters from Lattice QCD, Day Seven (9 September 2015)

Today's programme featured two talks about the interplay between the strong and the electroweak interactions. The first speaker was Gregorio Herdoíza, who reviewed the determination of hadronic corrections to electroweak observables. In essence these determinations are all very similar to the determination of the leading hadronic correction to (g-2)_μ, since they involve the lattice calculation of the hadronic vacuum polarisation. In the case of the electromagnetic coupling α, its low-energy value is known to a precision of 0.3 ppb, but the value of α(m_Z²) is known only to 0.1‰, and a large part of the difference in uncertainty is due to the hadronic contribution to the running of α, i.e. the hadronic vacuum polarization. Phenomenologically this can be estimated through the R-ratio, but this results in relatively large errors at low Q². On the lattice, the hadronic vacuum polarization can be measured through the correlator of vector currents, and currently a determination of the running of α in agreement with phenomenology and with similar errors can be achieved, so that in the future lattice results are likely to take the lead here. In the case of the electroweak mixing angle, sin²θ_W is known well at the Z pole, but only poorly at low energies, although a number of experiments (including the P2 experiment at Mainz) are aiming to reduce the uncertainty at lower energies.
Again, the running can be determined from the Z-γ mixing through the associated current-current correlator, and efforts to compute it are currently under way, including an estimation of the systematic error caused by the omission of quark-disconnected diagrams.

The second speaker was Vittorio Lubicz, who looked at the opposite problem, i.e. the electroweak corrections to hadronic observables. Since α≈1/137, electromagnetic corrections at the one-loop level become important once the 1% level of precision is being aimed for, and since the up and down quarks have different electric charges, this is an isospin-breaking effect, which also necessitates considering at the same time the strong isospin breaking caused by the difference in the up and down quark masses. There are two main methods to include QED effects in lattice simulations; the first is the direct simulation of QCD+QED, and the second is the method of incorporating isospin-breaking effects in a systematic expansion pioneered by Vittorio and colleagues in Rome. Either method requires a systematic treatment of the IR divergences arising from the lack of a mass gap in QED. In the Rome approach this is done by splitting the Bloch-Nordsieck treatment of IR divergences and soft bremsstrahlung into two pieces, whose large-volume limits can be taken separately. There are many other technical issues to be dealt with, but first physical results from this method should be forthcoming soon.

In the afternoon there was a discussion about QED effects and the range of approaches used to treat them.

Fundamental Parameters from Lattice QCD, Day Six (7 September 2015)

The second week of our Scientific Programme started with an influx of new participants.

The first speaker of the day was Chris Kelly, who spoke about CP violation in the kaon sector from lattice QCD. As I hardly need to tell my readers, there are two sources of CP violation in the kaon system: the indirect CP violation from neutral kaon-antikaon mixing, and the direct CP violation from K→ππ decays. Both, however, ultimately stem from the single source of CP violation in the Standard Model, i.e. the complex phase e^{iδ} in the CKM matrix, which gives the area of the unitarity triangle. The hadronic parameter relevant to indirect CP violation is the kaon bag parameter B_K, which is a "gold-plated" quantity that can be determined very well on the lattice; however, the error on the CP-violation parameter ε_K constraining the upper vertex of the unitarity triangle is dominated by the uncertainty on the CKM matrix element V_cb. Direct CP violation is particularly sensitive to possible BSM effects, and is therefore of particular interest. Chris presented the recent efforts of the RBC/UKQCD collaboration to address the extraction of the relevant parameter ε'/ε and associated phenomena such as the ΔI=1/2 rule. For the two amplitudes A_0 and A_2, different tricks and methods were required; in particular, for the isospin-zero channel all-to-all propagators are needed. The overall errors are still large: although the systematics are dominated by the perturbative matching to the MSbar scheme, the statistical errors are very sizable, so that the observed 2.1σ tension with experiment is not particularly exciting or disturbing yet.

The second speaker of the morning was Gunnar Bali, who spoke about the topic of renormalons. It is well known that the perturbative series of quantum field theories are in fact divergent asymptotic series, whose typical term will grow like n^k z^n n! at large orders n. Using the Borel transform, such series can be resummed, provided that there are no poles (IR renormalons) of the Borel transform on the positive real axis. In QCD, such poles arise from IR divergences in diagrams with chains of bubbles inserted into gluon lines, as well as from instanton-antiinstanton configurations in the path integral. The latter can be removed to infinity by considering the large-N_c limit, but the former are there to stay, making perturbatively defined quantities ambiguous at higher orders. A relevant example are heavy quark masses, where the different definitions (pole mass, MSbar mass, 1S mass, ...) are related by perturbative conversion factors; in a heavy-quark expansion, the mass of a heavy-light meson can be written as M = m + Λ + O(1/m), where m is the heavy quark mass and Λ a binding energy of the order of some QCD energy scale. As M is unambiguous, the ambiguities in m must correspond to ambiguities in the binding energy Λ, which can be computed to high orders in numerical stochastic perturbation theory (NSPT). After dealing with some complications arising from the fact that IR divergences cannot be probed directly in a finite volume, it is found that the minimal term of the perturbative series (which corresponds to the perturbative ambiguity) is of order 180 MeV in the quenched theory, meaning that heavy quark masses are only defined up to this accuracy. Another example is the gluon condensate (which may be of relevance to the extraction of α_s from τ decays), where it is found that the ambiguity is of the same size as the typically quoted result, making the usefulness of this quantity doubtful.
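The divergence and the associated minimal-term ambiguity are easy to see numerically; here is a tiny illustration (my own, for a pure n! toy series, not for any QCD quantity) showing that the smallest term of Σ n! z^n sits at order n ≈ 1/z and is of order exp(-1/z):

```python
import math

z = 0.1                                   # plays the role of the coupling
terms = [math.factorial(n) * z**n for n in range(40)]
n_min = min(range(40), key=lambda n: terms[n])
print(n_min)                              # ~ 1/z = 10
# by Stirling's formula the minimal term is ~ sqrt(2*pi/z) * exp(-1/z)
print(terms[n_min], math.sqrt(2 * math.pi / z) * math.exp(-1.0 / z))
```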
Fundamental Parameters from Lattice QCD, Day Five (4 September 2015)

The first speaker today was Martin Lüscher, who spoke about revisiting numerical stochastic perturbation theory. The idea behind numerical stochastic perturbation theory is to perform a simulation of a quantum field theory using the Langevin algorithm and to perturbatively expand the fields, which leads to a tower of coupled evolution equations, of which only the lowest-order one depends explicitly on the noise, whereas the higher-order ones describe the evolution of the higher-order coefficients as functions of the lower-order ones. In Numerical Stochastic Perturbation Theory (NSPT), the resulting equations are integrated numerically (up to some, possibly rather high, finite order in the coupling), and the average over noises is replaced by a time average. The problems with this approach are that the autocorrelation time diverges as the inverse square of the lattice spacing, and that the extrapolation in the Langevin time step size is difficult to control well. An alternative approach is given by Instantaneous Stochastic Perturbation Theory (ISPT), in which the Langevin time evolution is replaced by the introduction of Gaussian noise sources at the vertices of the tree diagrams describing the construction of the perturbative coefficients of the lattice fields. Since there is no free lunch, this approach suffers from power-law divergent statistical errors in the continuum limit, which arise from the way in which power-law divergences that cancel in the mean are shifted around between different orders when computing variances. This does not happen in the Langevin-based approach, because the Langevin theory is renormalizable.
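To illustrate the structure of the coupled equations, here is a zero-dimensional toy version (my own illustration, not from the talk): for the "action" S(φ) = φ²/2 + gφ⁴/4, expanding φ = φ₀ + gφ₁ + O(g²) in the Langevin equation gives one evolution equation per order, with only the lowest one driven by the noise; the time average of 2φ₀φ₁ then reproduces the first perturbative coefficient of ⟨φ²⟩ = 1 - 3g + O(g²), up to the O(dt) discretization effects that in real NSPT have to be extrapolated away.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, n_therm = 0.005, 2_000_000, 50_000
phi0, phi1 = 0.0, 0.0          # phi = phi0 + g*phi1 + O(g^2)
c0 = c1 = 0.0
for step in range(n_steps):
    eta = np.sqrt(2.0 * dt) * rng.normal()
    # S'(phi) = phi + g*phi^3, expanded order by order in g:
    phi0_new = phi0 - phi0 * dt + eta         # O(g^0): driven by the noise
    phi1_new = phi1 - (phi1 + phi0**3) * dt   # O(g^1): driven by phi0
    phi0, phi1 = phi0_new, phi1_new
    if step >= n_therm:
        c0 += phi0 * phi0
        c1 += 2.0 * phi0 * phi1
n = n_steps - n_therm
print(c0 / n, c1 / n)   # expect ~ +1 and ~ -3
```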
The second speaker of the morning was Siegfried Bethke of the Particle Data Group, who allowed us a glimpse of the (still preliminary) world average of α_s for 2015. In 2013, there were five classes of α_s determinations: from lattice QCD, τ decays, deep inelastic scattering, e⁺e⁻ colliders, and global Z-pole fits. Except for the lattice determinations (and the Z-pole fits, where there was only one number), these were each preaveraged using the range method - i.e. taking the mean of the highest and lowest central values as the average, and assigning it an uncertainty of half the difference between them. The lattice results were averaged using a χ²-weighted average. The total average (again a weighted average) was dominated by the lattice results, which in turn were dominated by the latest HPQCD result. For 2015, there have been a number of updates to most of the classes, and there is now a new class of α_s determinations from the LHC (of which there is currently only one published, which lies rather low compared to other determinations, and is likely a downward fluctuation). In most cases, the new determinations have not or have hardly changed the values and errors of their class. The most significant change is in the field of lattice determinations, where the PDG will change its policy and will no longer perform its own preaverages, taking instead the FLAG average as the lattice result. As a result, the error on the PDG value will increase; its value will also shift down a little, mostly due to the new LHC value.
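For concreteness, the two preaveraging recipes mentioned above can be written down in a few lines (a sketch of the procedures as described, not PDG code):

```python
import numpy as np

def range_average(vals):
    """'Range method': midpoint of the extreme central values,
    with half their spread as the uncertainty."""
    lo, hi = min(vals), max(vals)
    return 0.5 * (lo + hi), 0.5 * (hi - lo)

def chi2_weighted_average(vals, errs):
    """Standard chi^2 (inverse-variance) weighted average."""
    w = 1.0 / np.asarray(errs, float) ** 2
    mean = np.sum(w * np.asarray(vals, float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))
```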
The afternoon discussion centred on α_s. Roger Horsley gave an overview of the methods used to determine it on the lattice (ghost vertices, the Schrödinger functional, the static energy at short distances, current-current correlators, and small Wilson loops) and reviewed the criteria used by FLAG to assess the quality of a given determination, as well as the averaging procedure used (which uses a more conservative error than a weighted average would give). In the discussion, the point was raised that reliably increasing the precision to the sub-percent level and beyond will likely require not only addressing the scale-setting uncertainties (which are reflected in the different values for r_0 obtained by different collaborations and will affect the running of α_s), but also the inclusion of QED effects.

Fundamental Parameters from Lattice QCD, Day Four (4 September 2015)

Today's first speaker was Andreas Jüttner, who reviewed the extraction of the light-quark CKM matrix elements V_ud and V_us from lattice simulations. Since the leptonic and semileptonic decay widths of kaons and pions are very well measured, the matrix element |V_us| and the ratio |V_us|/|V_ud| can be precisely determined if the form factor f₊^Kπ(0) and the ratio of decay constants f_K/f_π are precisely predicted from the lattice. To reach the desired level of precision, the isospin-breaking effects from the difference of the up and down quark masses and from electromagnetic interactions will need to be included (they are currently treated in chiral perturbation theory, which may not apply very well in the SU(3) case). Given the required level of precision, full control of all systematics is very important, and the problem arises of how to properly estimate the associated errors, to which different collaborations are offering very different answers. To make the lattice results optimally usable for CKMfitter & Co., one should ideally provide all of the lattice inputs to the CKMfitter fit separately (and not just some combination that presents a particularly small error), as well as their correlations (as far as possible).

Unfortunately, I had to miss the second talk of the morning, by Xavier García i Tormo on the extraction of α_s from the static-quark potential, because our Sonderforschungsbereich (SFB/CRC) is currently up for review for a second funding period, and the local organizers had to be available for questioning by panel members.

Later in the afternoon, I returned to the workshop and joined a very interesting discussion on the topic of averaging in the presence of theoretical uncertainties. The large number of possible choices to be made in that context implies that the somewhat subjective nature of systematic error estimates survives into the averages, rather than being dissolved into a consensus of some sort.

Fundamental Parameters from Lattice QCD, Day Three (4 September 2015)

Today, our first speaker was Jerôme Charles, who presented new ideas about how to treat data with theoretical uncertainties. The best place to read about this is probably his talk, but I will try to summarize what I understood. The framework is a firmly frequentist approach to statistics, which answers the basic question of how likely the observed data are if a given null hypothesis is true. In such a context, one can consider a theoretical uncertainty as a fixed bias δ of the estimator under consideration (such as a lattice simulation) which survives the limit of infinite statistics. One can then test the null hypothesis that the true value of the observable in question is μ by constructing a test statistic for the estimator being distributed normally with mean μ+δ and standard deviation σ (the statistical error quoted for the result). The p-value of μ then depends on δ, but not on the quoted systematic error Δ. Since the true value of δ is not known, one has to perform a scan over some region Ω, for example the interval Ω_n = [-nΔ, nΔ], and take the supremum of the p-value over this range of δ. One possible extension is to choose Ω adaptively, in that a larger range of values needs to be scanned (i.e. a larger true systematic error in comparison to the quoted systematic error is allowed for) at lower p-values; interestingly enough, the resulting curves of p-values are numerically close to what is obtained from a naive Gaussian approach treating the systematic error as a (pseudo-)random variable. For multiple systematic errors, a multidimensional Ω has to be chosen in some way; the most natural choices of a hypercube or a hyperball correspond to adding the errors linearly or in quadrature, respectively. The linear (hypercube) scheme stands out as the only one that guarantees that the systematic error of an average is no smaller than the smallest systematic error of an individual result.
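As I understood it, in the one-dimensional Gaussian case the supremum over the scan region can be taken in closed form: the p-value is maximized by the allowed δ closest to x-μ. A small sketch of that construction (my own rendering, with invented variable names):

```python
from scipy.stats import norm

def p_value_sup(mu, x, sigma, Delta, n=1.0):
    """Sup over delta in [-n*Delta, n*Delta] of the two-sided p-value for the
    hypothesis that the true value is mu, given a measurement x with
    statistical error sigma and a fixed (but unknown) bias delta."""
    # the p-value 2*Phi(-|x - mu - delta|/sigma) is largest for the delta
    # in the scan region that lies closest to x - mu
    shift = max(abs(x - mu) - n * Delta, 0.0)
    return 2.0 * norm.sf(shift / sigma)

# example: measurement 1.0 +- 0.1 (stat) +- 0.2 (syst), hypothesis mu = 0.5
print(p_value_sup(0.5, 1.0, 0.1, 0.2))   # p = 2*Phi(-3) ~ 0.0027
```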
The second speaker was Patrick Fritzsch, who gave a nice review of recent lattice determinations of semileptonic heavy-light decays, both the more commonly studied B decays to πℓν and Kℓν, and the decays of the Λ_b that have recently been investigated by Meinel et al. with the help of LHCb.

In the afternoon, both the CKMfitter collaboration and the FLAG group held meetings.

Fundamental Parameters from Lattice QCD, Day Two (1 September 2015)

This morning, we started with a talk by Taku Izubuchi, who reviewed the lattice efforts relating to the hadronic contributions to the anomalous magnetic moment (g-2) of the muon. While the QED and electroweak contributions to (g-2) are known to great precision, most of the theoretical uncertainty presently comes from the hadronic (i.e. QCD) contributions, of which two are relevant at the present level of precision: the contribution from the hadronic vacuum polarization, which can be inserted into the leading-order QED correction, and the contribution from hadronic light-by-light scattering, which can be inserted between the incoming external photon and the muon line. There are a number of established methods for computing the hadronic vacuum polarization, both phenomenologically using a dispersion relation and the experimental R-ratio, and in lattice field theory by computing the correlator of two vector currents (which can, and needs to, be refined in various ways in order to achieve competitive levels of precision). No such well-established methods exist yet for light-by-light scattering, which is so far mostly described using models. There are, however, now efforts from a number of different sides to tackle this contribution; Taku mainly presented the approach of the RBC/UKQCD collaboration, which uses stochastic sampling of the internal photon propagators to explicitly compute the diagrams contributing to (g-2).
Another approach would be to calculate the four-point amplitude explicitly (which has recently been done for the first time by the Mainz group) and to decompose it into form factors, which can then be integrated to yield the light-by-light scattering contribution to (g-2).

The second talk of the day was given by Petros Dimopoulos, who reviewed lattice determinations of D and B leptonic decays and mixing. For the charm quark, cut-off effects appear to be reasonably well controlled with present-day lattice spacings and actions, and the most precise lattice results for the D and D_s decay constants claim sub-percent accuracy. For the b quark, effective field theories or extrapolation methods have to be used, which introduces a source of hard-to-assess theoretical uncertainty, but the results obtained from the different approaches generally agree very well amongst themselves. Interestingly, there does not seem to be any noticeable dependence on the number of dynamical flavours in the heavy-quark flavour observables, as N_f=2 and N_f=2+1+1 results agree very well to within the quoted precision.

In the afternoon, the CKMfitter collaboration split off to hold their own meeting, and the lattice participants met for a few one-on-one or small-group discussions of some topics of interest.

Fundamental Parameters from Lattice QCD, Day One (31 August 2015)

Greetings from Mainz, where I have the pleasure of covering a meeting for you without having to travel from my usual surroundings (I have clocked up more miles this year already than can be good for my environmental conscience).

Our Scientific Programme (http://indico.mitp.uni-mainz.de/conferenceDisplay.py?confId=28), which is the bigger of the two formats of meetings that the Mainz Institute of Theoretical Physics (http://www.mitp.uni-mainz.de/) (MITP) hosts, the smaller being Topical Workshops, started off today with two keynote talks summarizing the status and expectations of the FLAG (Flavour Lattice Averaging Group, http://itpwiki.unibe.ch/flag/index.php/Review_of_lattice_results_concerning_low_energy_particle_physics, presented by Tassos Vladikas) and CKMfitter (http://ckmfitter.in2p3.fr/, presented by Sébastien Descotes-Genon) collaborations. Both groups are in some way in the business of performing weighted averages of flavour physics quantities, but of course their backgrounds, rationales and methods are quite different in many regards. I will not attempt to give a line-by-line summary of the talks or the afternoon discussion session here, but will instead just summarize a few points that caused lively discussions or seemed important in some other way.

By now, computational resources have reached the point where we can achieve such statistics that the total error on many lattice determinations of precision quantities is completely dominated by systematics (and indeed different groups would differ at the several-σ level if one were to consider only their statistical errors). This may sound good in a way (because it is what you would expect in the limit of infinite statistics), but it is also very problematic, because the estimation of systematic errors is in the end really more of an art than a science, having a crucial subjective component at its heart. This means not only that systematic errors quoted by different groups may not be readily comparable, but also that it becomes important how to treat systematic errors (which may also be correlated, if e.g. two groups use the same one-loop renormalization constants) when averaging different results. How to do this is again subject to subjective choices to some extent. FLAG imposes cuts on quantities relating to the most important sources of systematic error (lattice spacing, pion mass, spatial volume) to select acceptable ensembles, then adds the statistical and systematic errors in quadrature, before performing a weighted average and computing the overall error taking correlations between different results into account using Schmelling's procedure (http://iopscience.iop.org/1402-4896/51/6/002/). CKMfitter, on the other hand, adds all systematic errors linearly, and uses the Rfit procedure (http://arxiv.org/abs/hep-ph/0104062) to perform a maximum-likelihood fit. Either choice is equally permissible, but they are not directly compatible (so CKMfitter cannot use FLAG averages as such).
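To make the difference concrete, here is a minimal sketch (my own simplification; both Schmelling's procedure and Rfit are more involved than this) contrasting the two ways of combining each result's errors before a correlated weighted average:

```python
import numpy as np

def combined_average(vals, stat, syst, corr=None, combine="quadrature"):
    """Weighted average of results with statistical and systematic errors.

    combine="quadrature": stat and syst added in quadrature per result;
    combine="linear":     stat and syst added linearly per result.
    corr: optional correlation matrix of the combined errors (e.g. from
    shared renormalization constants); the identity if omitted.
    """
    vals = np.asarray(vals, float)
    stat, syst = np.asarray(stat, float), np.asarray(syst, float)
    err = np.hypot(stat, syst) if combine == "quadrature" else stat + syst
    R = np.eye(len(vals)) if corr is None else np.asarray(corr, float)
    C = np.outer(err, err) * R                    # covariance matrix
    w = np.linalg.solve(C, np.ones_like(vals))    # minimum-variance weights
    w /= w.sum()
    return float(w @ vals), float(np.sqrt(w @ C @ w))
```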
Another point raised was that it is important for lattice collaborations computing mixing parameters to not just provide products like f_B√(B_B), but also f_B and B_B separately (as well as information about the correlation between these quantities) in order to help make the global CKM fits easier.

LATTICE 2015, Day Five (18 July 2015)

In a marked deviation from the "standard programme" of the lattice conference series, Saturday started off with parallel sessions, one of which featured my own talk.

The lunch break was therefore relatively early, but first we all assembled in the plenary hall for the conference group photo (a new addition to the traditions of the lattice conference), which was followed by afternoon plenary sessions. The first of these was devoted to finite temperature and density, and started with Harvey Meyer giving the review talk on finite-temperature lattice QCD. The thermodynamic properties of QCD are by now relatively well known: the transition temperature is agreed to be around 155 MeV, chiral symmetry restoration and the deconfinement transition coincide (as well as that can be defined in the case of a crossover), and the number of degrees of freedom is compatible with a plasma of quarks and gluons above the transition, but the thermodynamic potentials approach the Stefan-Boltzmann limit only slowly, indicating that there are strong correlations in the medium. Below the transition, the hadron resonance gas model describes the data well. The Columbia plot describing the nature of the transition as a function of the light and strange quark masses is being further solidified: the size of the first-order region in the lower left-hand corner is being measured, and the nature of the left-hand border (most likely O(4) second-order) is being explored.
Beyond these static properties, real-time properties are beginning to be studied through the finite-temperature spectral functions. One interesting point was that there is a difference between the screening masses (spatial correlation lengths) and the quasiparticle masses (from the spectral function) in any given channel, and the two may even tend in opposite directions as functions of the temperature (as is seen in the pion channel).

Next, Szabolcs Borsanyi spoke about fluctuations of conserved charges at finite temperature and density. While of course the sum of all outgoing conserved charges in a collision must equal the sum of the ingoing ones, when considering a subvolume of the fireball, the situation can best be described in the grand canonical ensemble, as charges can move into and out of the subvolume. The quark number susceptibilities are then related to the fluctuating phase of the fermionic determinant. The methods being used to avoid the sign problem include Taylor expansions, fugacity expansions and simulations at imaginary chemical potential, all with their own strengths and weaknesses. Fluctuations can be used as a thermometer to measure the freeze-out temperature.

Lastly, Luigi Scorzato reviewed the Lefschetz thimble, which may offer a way out of the sign problem (e.g. at finite chemical potential). The Lefschetz thimble is a higher-dimensional generalization of the concept of steepest-descent integration, in which the integral of e^{S(z)} for complex S(z) is evaluated by finding the stationary points of S and integrating along the curves passing through them along which the imaginary part of S is constant. On such Lefschetz thimbles, a Langevin algorithm can be defined, allowing for a Monte Carlo evaluation of the path integral in terms of Lefschetz thimbles. In quantum-mechanical toy models this seems to work already, and there appears to be hope that this might be a way to avoid the sign problem of finite-density QCD.

After the coffee break, the last plenary session turned to physics beyond the Standard Model. Daisuke Kadoh reviewed the progress in putting supersymmetry onto the lattice, which is still a difficult problem due to the fact that the finite differences which replace derivatives on a lattice do not respect the Leibniz rule, so that SUSY-breaking terms are introduced upon discretization. The ways past this are either imposing exact lattice supersymmetries or fine-tuning the theory so as to remove the SUSY breaking in the continuum limit. Some theories in both two and four dimensions have been simulated successfully, including N=1 Super-Yang-Mills theory in four dimensions. Given that there is no evidence for SUSY in nature, lattice SUSY is of interest especially for the purpose of verifying the ideas of gauge-gravity duality from the Super-Yang-Mills side, and in one and two dimensions agreement with the predictions from gauge-gravity duality has been found.

The final plenary speaker was Anna Hasenfratz, who reviewed Beyond-the-Standard-Model calculations in technicolor-like theories. If the Higgs is to be a composite particle, there must be some spontaneously broken symmetry that keeps it light, either a flavour symmetry (pions) or a scale symmetry (dilaton).
There are in fact a number of models that have a light scalar particle, but the extrapolation of these theories is rendered difficult by the fact that this scalar is (and for phenomenologically interesting models would have to be) lighter than the (techni-)pion, so that the usual formalism of chiral perturbation theory may not work. Many models of strong BSM interactions have been and are being studied using a large number of different methods, with not always conclusive results. A point raised towards the end of the talk was that for theories with a conformal IR fixed point, universality might be violated (and there are some indications that e.g. Wilson and staggered fermions seem to give qualitatively different behaviour for the beta function in such cases).

The conference ended with some well-deserved applause for the organizing team, who really ran the conference very smoothly, even in the face of a typhoon. Next year's lattice conference will take place in Southampton (England/UK) from 24th to 30th July 2016. Lattice 2017 will take place in Granada (Spain).

LATTICE 2015, Days Three and Four (17 July 2015)

Due to the one-day shift of the entire conference programme relative to other years, Thursday instead of Wednesday was the short day. In the morning, there were parallel sessions. The most remarkable thing to be reported from those (from my point of view) is that MILC are generating a=0.03 fm lattices now, which handily beats the record for the finest lattice spacing; they are observing some problems with the tunnelling of the topological charge at such fine lattice spacings, but appear hopeful that the lattices can be useful.

After the lunch break, excursions were offered. I took the trip to Himeji to see Himeji Castle, a very remarkable five-story wooden building that due to its white exterior is also known as the "White Heron Castle". During the trip, typhoon Nangka approached, so the rains cut our enjoyment of the castle park a bit short (though seeing koi in a pond with the rain falling into it had a certain special appeal to it, the enjoyment of which I in my Western ignorance suppose might be considered a form of Japanese wabi aesthetics).

As the typhoon resolved into a rainstorm, the programme wasn't cancelled or changed, and so today's plenary programme started with a talk on some formal developments in QFT by Mithat Ünsal, who reviewed trans-series, Lefschetz thimbles, and Borel summability as different sides of the same coin. I'm far too ignorant of these more formal field theory topics to do them justice, so I won't try a detailed summary. Essentially, it appears that the expansion of certain theories around the saddle points corresponding to instantons is determined by their expansion around the trivial vacuum, and the ambiguities arising in the Borel resummation of perturbative series when the Borel transform has a pole on the positive real axis can in some way be connected to this phenomenon, which may allow for a way to resolve the ambiguities.

Next, Francesco Sannino spoke about the "bright, dark, and safe" sides of the lattice.
The bright side referred to the study of visible matter, in particular to the study of technicolor models as a way of implementing the spontaneous breaking of electroweak symmetry without the need for a fundamental scalar that introduces numerous tunable parameters, and with the added benefits of removing the hierarchy problem and the problem of φ⁴ triviality. The dark side referred to the study of dark matter in the context of composite dark matter theories, where one should remember that if the visible 5% of the mass of the universe requires three gauge groups for its description, the remaining 95% is unlikely to be described by a single dark matter particle and a homogeneous dark energy. The safe side referred to the very current idea of asymptotic safety, which is of interest especially in quantum gravity, but might also apply to some extension of the Standard Model, making it valid at all energy scales.

After the coffee break, the traditional experimental talk was given by Toru Iijima of the Belle II collaboration. The Belle II detector is now beginning commissioning at the upcoming SuperKEKB accelerator, which will provide greatly improved luminosity, allowing for precise tests of the Standard Model in the flavour sector. In this, Belle II will be complementary to LHCb, because it will have far lower backgrounds, allowing for precision measurements of rare processes, while not being able to access as high energies. Most of the measurements planned at Belle II will require lattice inputs to interpret, so there is a challenge to our community to come up with sufficiently precise and reliable predictions for all required flavour observables. Besides quark flavour physics, Belle II will also search for lepton flavour violation in τ decays, try to improve the phenomenological prediction for (g-2)_μ by measuring the cross section for e⁺e⁻ → hadrons more precisely, and search for exotic charmonium- and bottomonium-like states.

Closely related was the next talk, a review of progress in heavy flavour physics on the lattice given by Carlos Pena. While simulations of relativistic b quarks at the physical mass will become a possibility in the not-too-distant future, for the time being heavy-quark physics is still dominated by the use of effective theories (HQET and NRQCD) and methods based either on appropriate extrapolations from the charm quark mass region, or on the Fermilab formalism, which is sort of in between. For the leptonic decay constants of heavy-light mesons, there are now results from all formalisms, which generally agree very well with each other, indicating good reliability. For the semileptonic form factors, there has been a lot of development recently, but to obtain precision at the 1% level, good control of all systematics is needed, and this includes the momentum dependence of the form factors. The z-expansion, and extended versions thereof allowing for a simultaneous extrapolation in the pion mass and the lattice spacing, has the advantage of allowing for a test of its convergence properties by checking the unitarity bound on its coefficients.

After the coffee break, there were parallel sessions again. In the evening, the conference banquet took place. Interestingly, the (excellent) food was not Japanese, but European (albeit with a slight Japanese twist in seasoning and presentation).
LATTICE 2015, Day Two (15 July 2015)

Hello again from Lattice 2015 in Kobe. Today's first plenary session began with a review talk on hadronic structure calculations on the lattice given by James Zanotti. James did an excellent job summarizing the manifold activities in this core area of lattice QCD, which is also of crucial phenomenological importance given situations such as the proton radius puzzle. It is now generally agreed that excited-state effects are one of the more important issues facing hadron structure calculations, especially in the nucleon sector, and that these (possibly together with finite-volume effects) are likely responsible for the observed discrepancies between theory and experiment for quantities such as the axial charge of the nucleon. Many groups are studying the charges and form factors of the nucleon, and some have moved on to more complicated quantities, such as transverse momentum distributions. Newer ideas in the field include the use of the Feynman-Hellmann theorem to access quantities that are difficult to access through the traditional three-point-over-two-point ratio method, such as form factors at very high momentum transfer, and quantities with disconnected diagrams (such as nucleon strangeness form factors).

Next was a review of progress in light flavour physics by Andreas Jüttner, who likewise gave an excellent overview of this also phenomenologically very important core field. Besides the "standard" quantities, such as the leptonic pion and kaon decay constants and the semileptonic K-to-π form factors, more difficult light-flavour quantities are now being calculated, including the bag parameter B_K and other quantities related to both Standard Model and BSM neutral kaon mixing, which require the incorporation of long-distance effects, including those from charm quarks. Given the emergence of lattice ensembles at the physical pion mass, the analysis strategies of groups are beginning to change, with the importance of global ChPT fits receding. Nevertheless, the lattice remains important for determining the low-energy constants of Chiral Perturbation Theory. Some groups are also using newer theoretical developments to study quantities once believed to be outside the purview of lattice QCD, such as final-state photon corrections to meson decays, or the timelike pion form factor.

After the coffee break, the Ken Wilson Award for Excellence in Lattice Field Theory was announced. The award goes to Stefan Meinel for his substantial and timely contributions to our understanding of the physics of the bottom quark using lattice QCD. In his acceptance talk, Stefan reviewed his recent work on determining |V_ub|/|V_cb| from decays of Λ_b baryons measured by the LHCb collaboration. There has long been a discrepancy between the inclusive and exclusive (from B → πℓν) determinations of V_ub, which might conceivably be due to a new (BSM) right-handed coupling. Since LHCb measures the decay widths for Λ_b to both pμν and Λ_c μν, combining these with lattice determinations of the corresponding Λ_b form factors allows for a precise determination of |V_ub|/|V_cb|. The results agree well with the exclusive determination from B → πℓν, and fully agree with CKM unitarity.
There are, however, still other channels (such as b -> sμ<sup>+</sup>μ<sup>-</sup> and b -> cτν) in which there is still potential for new physics, and LHCb measurements are pending.

This was followed by a talk by Maxwell T. Hansen (now a postdoc at Mainz) on three-body observables from lattice QCD. The well-known Lüscher method relates two-body scattering amplitudes to the two-body energy levels in a finite volume. The basic steps in the derivation are to express the full momentum-space propagator in terms of a skeleton expansion involving the two-particle irreducible Bethe-Salpeter kernel, to express the difference between the two-particle reducible loops in finite and infinite volume in terms of two-particle cuts, and to reorganize the skeleton expansion by the number of cuts to reveal that the poles of the propagator (i.e. the energy levels) in finite volume are related to the scattering matrix. For three-particle systems, the skeleton expansion becomes more complicated, since there can now be situations involving two-particle interactions and a spectator particle, and intermediate lines can go on-shell between different two-particle interactions. Treating a number of other technical issues such as cusps, Max and collaborators have been able to derive a Lüscher-like formula for three-body scattering in the case of scalar particles with a Z<sub>2</sub> symmetry forbidding 2-to-3 couplings. Various generalizations remain to be explored.

The day's plenary programme ended with a talk on the Standard Model prediction for direct CP violation in K -> ππ decays by Christopher Kelly. This has been an enormous effort by the RBC/UKQCD collaboration, who have shown that the ΔI=1/2 rule comes from low-energy QCD by way of strong cancellations between the dominant contributions, and have determined ε' from the lattice for the first time. This required the generation of ensembles with an unusual set of boundary conditions (G-parity boundary conditions on the quarks, requiring complex conjugation boundary conditions on the gauge fields) in space to enforce a moving pion ground state, as well as the precise evaluation of difficult disconnected diagrams using low modes and stochastic estimators, and treatment of finite-volume effects in the Lellouch-Lüscher formalism. Putting all of this together with the non-perturbative renormalization (in the RI-sMOM scheme) of ten operators in the electroweak Hamiltonian gives a result which currently still has three times the experimental error, but is systematically improvable, with better-than-experimental precision expected in maybe five years.

In the afternoon there were parallel sessions again, and in the evening, the poster session took place. Food ran out early, but it was pleasant to see <a href="http://arxiv.org/abs/1306.1440">free-form smearing</a> being improved upon and used to very good effect by Randy Lewis, Richard Woloshyn and students.

LATTICE 2015, Day One (Tue, 14 Jul 2015)

Hello from Kobe, where I am attending the Lattice 2015 conference.
The trip here was uneventful, as was the jetlag-day.

The conference started yesterday evening with a reception in the Kobe Animal Kingdom (there were no animals when we were there, though, with the exception of some fish in a pond and some cats in a cage, but there were lots of plants).

Today, the scientific programme began with the first plenary session. After a welcome address by Akira Ukawa, who reminded us of the previous lattice meetings held in Japan and the tremendous progress the field has made in the intervening twelve years, Leonardo Giusti gave the first plenary talk, speaking about recent progress on chiral symmetry breaking. Lattice results have confirmed the proportionality of the square of the pion mass to the quark mass (i.e. the Gell-Mann-Oakes-Renner (GMOR) relation, a hallmark of chiral symmetry breaking) very accurately for a long time. Another relation involving the chiral condensate is the Banks-Casher relation, which relates it to the eigenvalue density of the Dirac operator at zero. It can be shown that the eigenvalue density is renormalizable, and that thus the mode number in a given interval is renormalization-group invariant. Two recent lattice studies, one with twisted-mass fermions and one with O(a)-improved Wilson fermions, confirm the Banks-Casher relation, with the chiral condensates found agreeing very well with those inferred from GMOR. Another relation is the Witten-Veneziano relation, which relates the η' mass to the topological susceptibility, thus explaining how precisely the η' is not a Goldstone boson. The topological charge on the lattice can be defined through the index of the Neuberger operator or through a chain of spectral projectors, but a recently invented and much cheaper definition is through the topological charge density at finite flow time in Lüscher's Wilson flow formalism. The renormalization properties of the Wilson flow allow for a derivation of the universality of the topological susceptibility, and numerical tests using all three definitions indeed agree within errors in the continuum limit. Higher cumulants determined in the Wilson flow formalism agree with large-N<sub>c</sub> predictions in pure Yang-Mills, and the suppression of the topological susceptibility in QCD relative to the pure Yang-Mills case is in line with expectations (which in principle can be considered an <i>a posteriori</i> determination of N<sub>f</sub> in agreement with the value used in simulations).

The next speaker was Yu Nakayama, who talked about a related topic, namely the determination of the chiral phase transition in QCD from the conformal bootstrap. The chiral phase transition can be studied in the framework of a Landau effective theory in three dimensions. While mean-field theory predicts a second-order phase transition in the O(4) universality class, one-loop perturbation theory in 4-ε dimensions predicts a first-order phase transition at ε=1. Making use of the conformal symmetry of the effective theory, one can apply the conformal bootstrap method, which combines an OPE with crossing relations to obtain results for critical exponents, and the results from this method suggest that the phase transition is in fact of second order. This also agrees with many lattice studies, but others disagree.
The role of the anomalously broken U(1)<sub>A</sub> symmetry in this analysis appears to be unclear.

After the coffee break, Tatsumi Aoyama, a long-time collaborator in the heroic efforts of Kinoshita to calculate the four- and five-loop QED contributions to the electron and muon anomalous moments, gave a plenary talk on the determination of the QED contribution to lepton (g-2). For likely readers of this blog, the importance of (g-2) is unlikely to require an explanation: the current 3σ tension between theory and experiment for (g-2)<sub>μ</sub> is the strongest hint of physics beyond the Standard Model so far, and since the largest uncertainties on the theory side are hadronic, lattice QCD is challenged to either resolve the tension or improve the accuracy of the predictions to the point where the tension becomes an unambiguous, albeit indirect, discovery of new physics. The QED calculations are on the face of it simpler, being straightforward Feynman diagram evaluations. However, the number of Feynman diagrams grows so quickly at higher orders that automated methods are required. In fact, in a first step, the number of Feynman diagrams is reduced by using the Ward-Takahashi identity to relate the vertex diagrams relevant to (g-2) to self-energy diagrams, which are then subjected to an automated renormalization procedure using the Zimmermann forest formula. In a similar way, infrared divergences are subtracted using a more complicated "annotated forest" formula (there are two kinds of IR subtractions needed, so the subdiagrams in a forest need to be labelled with the kind of subtraction). The resulting UV- and IR-finite integrands are then integrated using VEGAS in Feynman parameter space (a toy version of such an integration is sketched at the end of this post). In order to maintain the required precision, quadruple-precision floating-point numbers (or an emulation thereof) must be used. Whether these methods could cope with the six-loop QED contribution is not clear, but with the current and projected experimental errors, that contribution will not be required for the foreseeable future, anyway.

This was followed by another (g-2)-related plenary, with Taku Izubuchi speaking about the determination of anomalous magnetic moments and nucleon electric dipole moments in QCD. In particular the anomalous magnetic moment has become such an active topic recently that the time barely sufficed to review all of the activity in this field, which ranges from different approaches to parameterizing the momentum dependence of the hadronic vacuum polarization, through clever schemes to reduce the noise by subtracting zero-momentum contributions, to new ways of extracting the vacuum polarization through the use of background magnetic fields, as well as simulations of QCD+QED on the lattice. Among the most important problems are finite-volume effects.

After the lunch break, there were parallel sessions in the afternoon. I got to chair the first session on hadron structure, which was devoted to determinations of hadronic contributions to (g-2)<sub>μ</sub>.

After the coffee break, there were more parallel sessions, another complete one of which was devoted to (g-2) and closely-related topics. A talk deserving to be highlighted was given by Jeremy Green, who spoke about the first direct calculation of the hadronic light-by-light scattering amplitude from lattice QCD.
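To give a flavour of the numerical side, here is a minimal sketch of a Feynman-parameter integration with the vegas package (G. P. Lepage's adaptive Monte Carlo integrator); the integrand is a toy one-loop expression of my own choosing, not anything from the actual (g-2) codes, which also handle the UV and IR subtractions:

```python
# Adaptive Monte Carlo integration of a one-loop-style Feynman-parameter integrand.
import numpy as np
import vegas

m2, q2 = 1.0, -0.5  # toy mass^2 and (spacelike) momentum^2 in arbitrary units

def integrand(x):
    # Finite part of a scalar one-loop two-point function:
    # I(q2) = int_0^1 dx log(m2 - x (1 - x) q2)
    return np.log(m2 - x[0] * (1.0 - x[0]) * q2)

integ = vegas.Integrator([[0.0, 1.0]])
result = integ(integrand, nitn=10, neval=2000)
print(result.summary())
```

The real calculations face thousands of much higher-dimensional integrands of this general type, which is where the quadruple-precision arithmetic becomes necessary.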
Workshop "Fundamental Parameters from Lattice QCD" at MITP (upcoming deadline) (Fri, 10 Apr 2015)

Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

The scientific programme "<a href="https://indico.mitp.uni-mainz.de/conferenceDisplay.py?confId=28">Fundamental Parameters from Lattice QCD</a>" at the Mainz Institute of Theoretical Physics (<a href="http://www.mitp.uni-mainz.de/">MITP</a>) is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

The deadline for <a href="https://indico.mitp.uni-mainz.de/confRegistrationFormDisplay.py/display?confId=28" title="Registration form">registration</a> is <b>Wednesday, 15 April 2015</b>. Please register <a href="https://indico.mitp.uni-mainz.de/confRegistrationFormDisplay.py/display?confId=28" title="Register now!">at this link</a>.

QNP 2015, Day Five (Thu, 12 Mar 2015)

<i>Apologies for the delay in posting this. Travel and jetlag kept me from attending to it earlier.</i>

The first talk today was by Guy de Teramond, who described applications of light-front superconformal quantum mechanics to hadronic physics. I have to admit that I couldn't fully take in all the details, but as far as I understood, an isomorphism between AdS<sup>2</sup> and the conformal group in one dimension can be used to derive a form of the light-front Hamiltonian for mesons from an AdS/QCD correspondence, in which the dilaton field is fixed to be φ(z)=1/2 z<sup>2</sup> by the requirement of conformal invariance, and a similar construction in the superconformal case leads to a light-front Hamiltonian for baryons. A relationship between the Regge trajectories for mesons and baryons can then be interpreted as a form of supersymmetry in this framework.

Next was Beatriz Gay Ducati with a review of the phenomenology of heavy quarks in nuclear matter, a topic where there are still many open issues.
The photoproduction of quarkonia on nucleons and nuclei makes it possible to probe the gluon distribution, since the dominant production process is photon-gluon fusion, but to be able to interpret the data, many nuclear-matter effects need to be understood.

After the coffee break, this was followed by a talk by Hrayr Matevosyan on transverse momentum distributions (TMDs), which are complementary to GPDs in the sense of being obtained by integrating out other variables starting from the full Wigner distributions. Here, again, there are many open issues, such as the Sivers, Collins or Boer-Mulders effects.

The next speaker was Raju Venugopalan, who spoke about two outstanding problems in QCD at high parton densities, namely the question of how the systems created in heavy-ion collisions thermalise, and the phenomenon of "the ridge" in proton-nucleus collisions, which would seem to suggest hydrodynamic behaviour in a system that is too small to be understood as a liquid. Both problems may have to do with the structure of the dense initial state, which is theorised to be a colour-glass condensate or "glasma", and the way in which it evolves into a more dilute system.

After the lunch break, Sonny Mantry reviewed some recent advances made in applying Soft-Collinear Effective Theory (SCET) to a range of questions in strong-interaction physics. SCET is the effective field theory obtained when QCD fluctuations around a hard particle momentum are considered to be small and a corresponding expansion (analogous to the 1/m expansion in HQET) is made. SCET has been successfully applied to many different problems; an interesting and important one is the problem of relating the "Monte Carlo mass" usually quoted for the top quark to the top quark mass in a more well-defined scheme such as MSbar.

The last talk in the plenary programme was a review of the Electron-Ion Collider (EIC) project by Zein-Eddine Meziani. By combining the precision obtainable using an electron beam with the access to the gluon-dominated regime provided by a heavy-ion beam, as well as the ability to study the nucleon spin using a polarised nucleon beam, the EIC will enable a much more in-depth study of many of the still unresolved questions in QCD, such as the nucleon spin structure and colour distributions. There are currently two competing designs, the eRHIC at Brookhaven, and the MEIC at Jefferson Lab.

Before the conference closed, Michel Garçon announced that the next conference of the series (QNP 2018) will be held in Japan (either in Tsukuba or in Mito, Ibaraki prefecture).
The local organising committee and conference office staff received some well-deserved applause for a very smoothly-run conference, and the scientific part of the conference programme was adjourned.

As it was still in the afternoon, I went with some colleagues to visit <a href="http://es.wikipedia.org/wiki/La_Sebastiana">La Sebastiana</a>, the house of <a href="http://en.wikipedia.org/wiki/Pablo_Neruda">Pablo Neruda</a> in Valparaíso, taking one of the city's famous <i>ascensores</i> down (although up might have been more convenient, as the streets get very steep) before walking back to Viña del Mar along the sea coast.

The next day, there was an organised excursion to a vineyard in the Casablanca valley, where we got to taste some very good Chilean wines (some of them matured in traditional clay vats) and liqueurs with a very pleasant lunch.

I got to spend another day in Valparaíso before travelling back (a happily uneventful, if again rather long, trip).

QNP 2015, Day Four (Fri, 06 Mar 2015)

The first talk today was a review of experimental results in light-baryon spectroscopy by Volker Credé. While much progress has been made in this field, in particular in the design of so-called complete experiments, which as far as I understand measure multiple observables to unambiguously extract a complete description of the amplitudes for a certain process, there still seem to be surprisingly many unknowns. In particular, the fits to pion photoproduction in doubly-polarised processes seem to disagree strongly between different descriptions (such as MAID).

Next was Derek Leinweber with a review of light hadron spectroscopy from the lattice. The <i>de facto</i> standard method in this field is the variational method (GEVP), although there are some notable differences in how precisely different groups apply it (e.g. solving the GEVP at many times and fitting the eigenvalues vs. forming projected correlators with the eigenvectors of the GEVP solved at a single time -- there are proofs of good properties for the former that don't exist for the latter); a toy version of the method is sketched below. The way in which the basis of operators for the GEVP is built also differs considerably between groups, ranging from simply using different levels of quark field smearing to intricate group-theoretic constructions of multi-site operators. There are also attempts to determine how much information can be extracted from a given set of correlators, e.g. recently by the <a href="http://arxiv.org/abs/1411.6765">Cyprus/Athens group</a> using Monte Carlo simulations to probe the space of fitting parameters (a loosely related older idea based on <a href="http://arxiv.org/abs/0707.2788">evolutionary fits</a> wasn't mentioned).

This was followed by a talk by Susan Gardner about testing fundamental symmetries with quarks. While we know that there must be physics beyond the Standard Model (because the SM does not explain dark matter, nor does it provide enough CP violation to explain the observed baryon asymmetry), there is so far no direct evidence of any BSM particle.
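For readers unfamiliar with the variational method mentioned in the Leinweber talk, here is a toy two-operator, two-state GEVP on synthetic correlators (entirely made-up numbers, just to show the mechanics):

```python
# Toy GEVP: solve C(t) v = lambda(t, t0) C(t0) v on synthetic two-state data.
import numpy as np
from scipy.linalg import eigh

E = np.array([0.5, 0.9])       # "true" energies of two states (lattice units)
Z = np.array([[1.0, 0.8],      # overlaps of operator i with state n
              [0.6, -0.9]])

def corr(t):
    """2x2 correlator matrix C_ij(t) = sum_n Z_in Z_jn exp(-E_n t)."""
    return sum(np.outer(Z[:, n], Z[:, n]) * np.exp(-E[n] * t) for n in range(2))

t0 = 1
for t in range(2, 6):
    lam = eigh(corr(t), corr(t0), eigvals_only=True)[::-1]  # descending order
    print(t, -np.log(lam) / (t - t0))  # effective energies -> [0.5, 0.9]
```

With as many operators as states the eigenvalues are pure exponentials; with real, noisy data the questions are exactly the ones above, e.g. whether to fit the eigenvalues at many times or to project with eigenvectors from a single (t, t0).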
Low-energy tests of the SM fall into two broad categories: null tests (where the SM predicts an exact null result, as for violations of B-L) and precision tests (where the SM prediction can be calculated to very high accuracy, as for (g-2)<sub>μ</sub>). Null tests play an important role in so far as they can be used to impose a lower limit on the BSM mass scale, but many of them are atomic or nuclear tests, which have complicated theory errors. The currently largest tensions indicating a possible failure of the Standard Model to describe all observations are the proton radius puzzle and (g-2)<sub>μ</sub>. A possible explanation of either or both of those in terms of a "dark photon" is on the verge of being ruled out, however, since most of the relevant part of the mass/coupling plane has already been excluded by dark photon searches, and the rest of it will soon be (or else the dark photon will be discovered). Other tests in the hadronic sector, which seem to be less advanced so far, are the search for non-(V-A) terms in β-decays, and the search for neutron-antineutron oscillations.

After the coffee break and the official conference photo, Isaac Vidaña took the audience on a "half-hour walk through the physics of neutron stars". Neutron stars are both almost-black holes (whose gravitation must be described in General Relativity) and extremely massive nuclei (whose internal dynamics must be described using QCD). Observations of binary pulsars make it possible to determine the masses of neutron stars, which are found to range up to at least two solar masses. However, the Tolman-Oppenheimer-Volkoff equations for the stability of neutron stars lead to a maximum mass for a neutron star that depends on the equation of state of the nuclear medium (a toy integration of these equations is sketched below). The observed masses severely constrain the equation of state and in particular seem to exclude models in which hyperons play an important role; however, it seems to be generally agreed that hyperons must play an important role in neutron stars, leading to a "hyperon puzzle", the solution of which will require an improved understanding of the structure and interactions of hyperons.

The last plenary speaker of the day was Stanley Brodsky with the newest developments from light-front holography. The light-front approach, which has in the past been very successful in (1+1)-dimensional QCD, is based on the front form of the Hamiltonian formalism, in which a light-like, rather than a timelike, direction is chosen as the normal defining the Cauchy surfaces on which initial data are specified. In the light-front Hamiltonian approach, the vacuum of QCD is trivial and the Hilbert space can be constructed as a straightforward Fock space. With some additional ansätze taken from AdS/CFT ideas, QCD is reduced to a Schrödinger-like equation for the light-cone wavefunctions, from which observables are extracted. Apparently, all known observations are described perfectly in this approach, but (as for the Dyson-Schwinger or straight AdS/QCD approaches) I do not understand how systematic errors are supposed to be quantified.

In the afternoon there were parallel talks.
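As a rough illustration of the mass limit mentioned in the neutron-star talk, the TOV equations can be integrated for a toy polytropic equation of state in a few lines (geometric units G = c = 1; the EOS parameters and central pressures below are invented for illustration):

```python
# Toy TOV integration for a polytrope P = K * eps**Gamma (G = c = 1).
import numpy as np

K, Gamma = 100.0, 2.0  # illustrative polytrope parameters

def tov_mass_radius(P_c, dr=1e-3):
    """Integrate dP/dr and dm/dr outwards until the pressure vanishes."""
    r, m, P = dr, 0.0, P_c
    while P > 1e-10 * P_c:
        eps = (P / K) ** (1.0 / Gamma)  # energy density from the EOS
        dPdr = -(eps + P) * (m + 4 * np.pi * r**3 * P) / (r * (r - 2 * m))
        dmdr = 4 * np.pi * r**2 * eps
        P, m, r = P + dPdr * dr, m + dmdr * dr, r + dr  # forward Euler step
    return r, m  # stellar radius and gravitational mass

for P_c in (1e-4, 5e-4, 2e-3):
    print(P_c, tov_mass_radius(P_c))
```

Scanning the central pressure traces out a mass-radius curve whose maximum is the EOS-dependent mass limit; softening the equation of state (as hyperons tend to do) lowers that maximum, which is why two-solar-mass pulsars are such a powerful constraint.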
An interesting contribution was given by Mainz PhD student Franziska Hagelstein, who demonstrated how even a very small non-monotonicity in the electric form factor at low Q<sup>2</sup> (where there are no ep scattering data) could explain the difference between the muonic and electronic hydrogen results for the proton radius.

The conference banquet took place in the evening at a very nice restaurant, and fun was had over cocktails and an excellent dinner.

QNP 2015, Day Three (Thu, 05 Mar 2015)

Today began with a talk by Mikhail Voloshin on QCD sum rules and heavy-quark states. The idea of exploiting quark-hadron duality to link perturbatively calculable current-current correlators to hadronic observables and extract mesonic decay constants or quark masses is quite old, but has received a boost in recent years with the advent of three- and four-loop perturbative calculations, in particular from Chetyrkin and collaborators, which have also been used in conjunction with lattice results, e.g. by the HPQCD collaboration.

A review of hadron spectroscopy at B factories (including LHCb) by Roberto Mussa followed. The charmonium and bottomonium spectra are now measured in great detail, with recent additions being 1D and 3P states; more states are also being discovered in the heavy-light sector (where the B<sub>c</sub>(2S) has recently been discovered at ATLAS) and the heavy-quark baryon sector (where the most recent discovery was the Ξ<sub>b</sub>), and many more transitions are being discovered and studied.

The next speaker was Raphaël Dupré, who spoke about colour propagation and neutralisation in strongly interacting systems. The idea here appears to be that in hadronisation processes, quarks first lose energy by radiating gluons and thus turn into colourless pre-hadrons, which then bind into hadrons on a longer timescale, and there seems to be experimental evidence supporting this energy-loss model.

After the coffee break, Javier Castillo reviewed quarkonium suppression and regeneration in heavy-ion collisions. Quarkonia are generally considered important probes of the quark-gluon plasma, because the production of heavy quark-antiquark pairs is a perturbative process that happens at high energies early in the collision, while their binding is non-perturbative and is expected to be suppressed by Debye screening in the coloured plasma. As a consequence, more tightly bound quarkonia, like the Y(1S), can exist at higher temperatures, while the more loosely bound charmonia or Y(3S) states will "melt" at lower temperatures. However, quarkonia can also be regenerated by thermalised heavy quarks rejoining into quarkonia at the phase boundary. Experimental data support the screening picture, with the J/ψ being more suppressed at the LHC than at STAR (because of the higher temperature), the Y(2S) more suppressed than the Y(1S), and transport models with a negligible regeneration component describing the data well. The regeneration component increases at low p<sub>T</sub>, and the elliptic flow of the charm quarks is inherited by the regenerated J/ψ mesons.
Some more difficult to understand effects of the nuclear environment, called Cold Nuclear Matter (CNM) effects, are beginning to be seen in the data.

Next was Zoltan Fodor with a talk about lattice QCD results at zero and finite temperature from the BMW collaboration. By simulating QCD+QED with 1+1+1+1 flavours of dynamical quarks, BMW have been able to determine the isospin splitting of the nucleon and other baryonic systems. This work, which appears set to become a cover story in "Science", had to overcome a number of serious obstacles, in particular long-range autocorrelations (which could be cured by a Fourier-accelerated HMC variant) and power-law finite-volume effects (which had to be fitted to results obtained at a range of volumes) introduced by the massless photon. In the finite-temperature regime, the crossover temperature is now generally agreed to be around 150-160 MeV, but the position and even existence of the critical endpoint is still contentious (and any existing results are not yet continuum-extrapolated in any case).

After the lunch break, Yiota Foka gave an overview of heavy-ion results from RHIC and the LHC. The elliptic flow is still found to be in agreement with perfect hydrodynamics, but people are now also studying higher harmonics (defined in the sketch below), as well as the interplay between jets and flow, which provide important constraints on the physics of the quark-gluon plasma. At the LHC, it has been found that it is the mass, and not the valence quark content, that drives the flow behaviour of hadrons, as the φ meson has the same flow behaviour as the proton.

The next speaker was Carl Gagliardi, who reviewed results in nucleon structure from high-energy polarised proton-proton collisions. Proton-proton scattering is complementary to DIS in that it gives access to the gluonic degrees of freedom which are invisible to electrons, and RHIC has a programme of polarised proton collisions to explore the spin structure of the nucleon. Without the RHIC data, the gluon polarisation ΔG is almost unconstrained, but with the RHIC data, it is seen to be clearly positive and to contribute about 0.2 to the proton spin. Using W production, it is possible to separate polarised quark and antiquark distributions, and there is more to come in the near future.

The last plenary speaker of the day was Craig Roberts, who reviewed the pion and nucleon structure from the point of view of the Dyson-Schwinger equations approach. In this approach, the pion is closely linked to the quark mass function, which comes out of a quark gap equation and describes how the running quark mass at high energies turns into a much larger constituent quark mass at low energies. Landau-gauge gluons also become massive at low energies, and confinement is explained as the splitting of poles into pairs of conjugate complex poles giving an exponentially damped behaviour of the position-space propagator. While this approach seems to be able to readily explain every single known experimental result, I do not understand how the systematic errors from the truncation of the infinite tower of DSEs are supposed to be controlled or quantified.

After the coffee break, there were parallel sessions.
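For reference (the standard definition, not specific to the talk), the harmonics in question are the Fourier coefficients v<sub>n</sub> of the azimuthal particle distribution:

\[
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \propto 1 + 2\sum_{n\ge 1} v_n \cos\!\big(n(\varphi - \Psi_n)\big),
\]

with v<sub>2</sub> the elliptic flow, v<sub>3</sub> and above the "higher harmonics", and Ψ<sub>n</sub> the corresponding event-plane angles.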
An interesting parallel talk was given by Johan Bijnens, who has determined the leading logarithms for the nucleon mass (and some other systems) to rather high orders (which also for effective theories can be done using only one-loop integrals, by a consistency argument due to Weinberg).

QNP 2015, Day Two (Wed, 04 Mar 2015)

Hello again from Valparaíso. Today's first speaker was Johan Bijnens with a review of recent results from chiral perturbation theory in the mesonic sector, including recent results for charged pion polarisabilities and for finite-volume corrections to lattice measurements. To allow others to perform their own calculations for their own specific needs (which might include technicolor-like theories, which will generally have different patterns of chiral symmetry breaking, but otherwise work just the same way), Bijnens & Co. have recently published CHIRON, a general two-loop mesonic χPT package. The leading logarithms have been determined to high orders, and it has been found that the speed of convergence depends both on the observable and on whether the leading-order or physical pion decay constant is used.

Next was Boris Grube, who presented some recent results from light-meson spectroscopy. The light mesons are generally expected to be some kind of superpositions of quark-model states, hybrids, glueballs, tetraquark and molecular states, as may be compatible with their quantum numbers in each case. The most complex sector is the 0<sup>++</sup> sector of f<sub>0</sub> mesons, in which the lightest glueball state should lie. The γγ width of the f<sub>0</sub>(1500) appears to be compatible with zero, which would agree with the expectations for a glueball, whereas the f<sub>0</sub>(1710) has a photonic width more in agreement with being an s-sbar state; on the other hand, in J/ψ -> γ (ηη), which as a gluon-rich process should couple strongly to glueball resonances, little or no f<sub>0</sub>(1500) is seen, and these results would instead support a glueball nature for the f<sub>0</sub>(1710). New data to come from GlueX, and later from PANDA, should help to clarify things.

The next speaker was Paul Sorensen with a talk on the search for the critical point in the QCD phase diagram. The quark-gluon plasma at RHIC is not only a man-made system that is hundreds of thousands of times hotter than the centre of the Sun, it is also the most perfect fluid known, as it comes close to saturating the viscosity bound η/s ≥ 1/(4π). Studying it experimentally is quite difficult, however, since one must extrapolate back to a small initial fireball, or "little bang", from correlations between thousands of particle tracks in a detector, not entirely dissimilar from the situation in cosmology, where the properties of the hot big bang (and previous stages) are inferred from angular correlations in the cosmic microwave background. Beam energy scans find indications that the phase transition becomes first-order at higher densities, which would indicate the existence of a critical endpoint, but more statistics and more intermediate energies are needed.

After the coffee break, François-Xavier Girod spoke about Generalised Parton Distributions (GPDs) and deep exclusive processes.
GPDs, which reduce to form factors and to parton distributions upon integrating out the unneeded variables in each case, correspond to a three-dimensional image of the nucleon in the longitudinal momentum fraction and the transverse impact parameter, and their moments are related to matrix elements of the energy-momentum tensor. Experimentally, they are probed using deeply virtual Compton scattering (DVCS); the 12 GeV upgrade at Jefferson Lab will increase the coverage in both Bjorken-x and Q<sup>2</sup>, and the planned electron-ion collider is expected to allow probing the sea and gluon GPDs as well.

After the lunch break, there were parallel sessions. I chaired the parallel session on lattice and other non-perturbative methods, with presentations of lattice results by Eigo Shintani and Tereza Mendes, as well as a number of AdS/QCD-related results by various others.

QNP 2015, Day One (Tue, 03 Mar 2015)

Hello from Valparaíso, where I continue this year's hectic conference circuit at the 7th International Conference on Quarks and Nuclear Physics (QNP 2015). Except for some minor inconveniences and misunderstandings, the long trip to Valparaíso (via Madrid and Santiago de Chile) went quite smoothly, and so far, I have found Chile a country of bright sunlight and extraordinarily helpful and friendly people.

The first speaker of the conference was Emanuele Nocera, who reviewed nucleon and nuclear parton distributions. The study of parton distributions becomes necessary because hadrons are really composed not simply of valence quarks, as the quark model would have it, but of an indefinite number of (sea) quarks, antiquarks and gluons, any of which can contribute to the overall momentum and spin of the hadron. In an operator product expansion framework, hadronic scattering amplitudes can then be factorised into Wilson coefficients containing short-distance (perturbative) physics and parton distribution functions containing long-distance (non-perturbative) physics. The evolution of the parton distribution functions (PDFs) with the momentum scale is given by the DGLAP equations containing the perturbatively accessible splitting functions. The PDFs are subject to a number of theoretical constraints, of which the sum rules for the total hadronic momentum and valence quark content are the most prominent. For nuclei, one can assume that a factorisation similar to that for hadrons still holds, and that the nuclear PDFs are linear combinations of nucleon PDFs modified by multiplication with a binding factor; however, nuclei exhibit correlations between nucleons, which are not well described in such an approach. Combining all available data from different sources, global fits to PDFs can be performed using either a standard χ<sup>2</sup> fit with a suitable model, or a neural network description. There are far more and better data on nucleon than on nuclear PDFs, and for nucleons the amount and quality of the data also differ between unpolarised and polarised PDFs, the latter being needed to elucidate the "proton spin puzzle".

Next was the first lattice talk of the meeting, given by Huey-Wen Lin, who gave a review of the progress in lattice studies of nucleon structure.
I think Huey-Wen gave a very nice example by comparing the computational and algorithmic progress with that in videogames (I'm not an expert there, but I think the examples shown were screenshots of Nethack versus some modern first-person shooter), and went on to explain the importance of controlling all systematic errors, in particular excited-state effects, before reviewing recent results on the tensor, scalar and axial charges and the electromagnetic form factors of the nucleon. As an outlook towards the current frontier, she presented the inclusion of disconnected diagrams and a new idea of obtaining PDFs from the lattice more directly rather than through their moments.

The next speaker was Robert D. McKeown with a review of JLab's Nuclear Science Programme. The CEBAF accelerator has been upgraded to 12 GeV, and a number of experiments (GlueX to search for gluonic excitations, MOLLER to study parity violation in Møller scattering, and SoLID to study SIDIS and PVDIS) are ready to be launched. A number of the planned experiments will be active in areas that I know are also under investigation by experimental colleagues in Mainz, such as a search for the "dark photon" and a study of the running of the Weinberg angle. Longer-term plans at JLab include the design of an electron-ion collider.

After a rather nice lunch, Tomofumi Nagae spoke about the hadron physics programme at J-PARC. In spite of major setbacks by the big earthquake and a later radiation accident, progress is being made. A search for the Θ<sup>+</sup> pentaquark did not find a signal (which I personally do not find surprising, since the whole pentaquark episode is probably of more immediate long-term interest to historians and sociologists of science than to particle physicists), but could not completely exclude all of the discovery claims.

This was followed by a talk by Jonathan Miller of the MINERνA collaboration presenting their programme of probing nuclei with neutrinos. Major complications include the limited knowledge of the incoming neutrino flux and the fact that final-state interactions on the nuclear side may lead to one process mimicking another one, making the modelling in event generators a key ingredient of understanding the data.

Next was a talk about short-range correlations in nuclei by Or Hen. Nucleons subject to short-range correlations must have high relative momenta, but a low center-of-mass momentum. The experimental studies are based on kicking a proton out of a nucleus with an electron, such that both the momentum transfer (from the incoming and outgoing electron) and the final momentum of the proton are known, and looking for a nucleon with a momentum close to minus the difference between those two (which must be the initial momentum of the knocked-out proton) coming out. The astonishing result is that at high momenta, neutron-proton pairs dominate (meaning that protons, being the minority, have a much larger chance of having high momenta) and are linked by a tensor force.
Similar results are known from other two-component Fermi systems, such as ultracold atomic gases (which are of course many, many orders of magnitude less dense than nuclei).

After the coffee break, Heinz Clement spoke about dibaryons, specifically about the recently discovered d<sup>*</sup>(2380) resonance, which, taking all experimental results into account, may be interpreted as a ΔΔ bound state.

The last talk of the day was by André Walker-Loud, who reviewed the study of nucleon-nucleon interactions and nuclear structure on the lattice, starting with a very nice review of the motivations behind such studies, namely the facts that big-bang nucleosynthesis is very strongly dependent on the deuterium binding energy and the proton-neutron mass difference, and that this fine-tuning problem needs to be understood from first principles. Besides, currently the best chance for discovering BSM physics seems once more to lie with low-energy high-precision experiments, and dark matter searches require good knowledge of nuclear structure to control their systematics. Scattering phase shifts are being studied through the Lüscher formula. Current state-of-the-art studies of bound multi-hadron systems are related to dibaryons, in particular the question of the existence of the H-dibaryon at the physical pion mass (note that the dineutron, certainly unbound in the real world, becomes bound at heavy enough pion masses), and three- and four-nucleon systems are beginning to become treatable, although the signal-to-noise problem gets worse as more baryons are added to a correlation function, and the number of contractions grows rapidly. Going beyond masses and binding energies, the new California Lattice Collaboration (CalLat) has preliminary results for hadronic parity violation in the two-nucleon system, albeit at a pion mass of 800 MeV.

Back from Mumbai (Fri, 27 Feb 2015)

On Saturday, my last day in Mumbai, a group of colleagues rented a car with a driver to take a trip to Sanjay Gandhi National Park and visit the <a href="http://en.wikipedia.org/wiki/Kanheri_caves">Kanheri caves</a>, a Buddhist site consisting of a large number of rather simple monastic cells and some worship and assembly halls with ornate reliefs and inscriptions, all carved out of solid rock (some of the cell entrances seem to have been restored using steel-reinforced concrete, though).

On the way back, we stopped at <a href="http://en.wikipedia.org/wiki/Mani_Bhavan">Mani Bhavan</a>, where Mahatma Gandhi lived from 1917 to 1934, and which is now a museum dedicated to his life and legacy.

In the night, I flew back to Frankfurt, where the temperature was much lower than in Mumbai; in fact, on Monday there was snow.
MOLCAS manual:
6.47 vibrot
The program VIBROT is used to compute a vibration-rotation spectrum for a diatomic molecule, using as input a potential computed over a grid. The grid should be dense around equilibrium (recommended spacing 0.05 au) and should extend to large distance (say 50 au) if dissociation energies are computed.
The potential is fitted to an analytical form using cubic splines. The ro-vibrational Schrödinger equation is then solved numerically (using Numerov's method) for one vibrational state at a time and for a number of rotational quantum numbers as specified by input. The corresponding wave functions are stored on file VIBWVS for later use. The ro-vibrational energies are analyzed in terms of spectroscopic constants. Weakly bound potentials can be scaled for better numerical precision.
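Numerov's method itself is compact enough to sketch. The following generic Python illustration uses the convention y'' = f(r) y with f = 2μ(V(r) − E) + J(J+1)/r² in atomic units; it is of course not the actual VIBROT implementation, and the potential and parameters are invented:

```python
import numpy as np

def numerov(f, y0, y1, h):
    """Integrate y''(r) = f(r) y(r) on a uniform grid with Numerov's method."""
    y = np.empty_like(f)
    y[0], y[1] = y0, y1
    w = 1.0 - h * h * f / 12.0
    for n in range(1, len(f) - 1):
        y[n + 1] = ((12.0 - 10.0 * w[n]) * y[n] - w[n - 1] * y[n - 1]) / w[n + 1]
    return y

# One outward sweep for a trial energy E on a radial grid (toy parameters).
mu, E, J = 918.0, 0.01, 0           # reduced mass, trial energy, rotation (au)
r = np.linspace(1.0, 5.0, 199)      # cf. the GRID and RANGe keywords below
V = 0.5 * 0.1 * (r - 2.0) ** 2      # toy harmonic potential around r_e = 2.0
f = 2.0 * mu * (V - E) + J * (J + 1) / r**2
y = numerov(f, 0.0, 1e-6, r[1] - r[0])
print("sign changes:", int(np.sum(y[:-1] * y[1:] < 0)))
```

In a shooting procedure of this kind, the trial energy is adjusted (cf. the STEP keyword below) until the solution has the node count of the desired vibrational quantum number and decays at the boundaries.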
The program can also be fed with property functions, such as a dipole moment curve. Matrix elements over the ro-vib wave functions for the property in question are then computed. These results can be used to compute IR intensities and vibrational averages of different properties.
VIBROT can also be used to compute transition properties between different electronic states. The program is then run twice to produce two files of wave functions. These files are used as input in a third run, which will then compute transition matrices for input properties. The main use is to compute transition moments, oscillator strengths, and lifetimes for ro-vib levels of electronically excited states. The asymptotic energy difference between the two electronic states must be provided using the ASYMptotic keyword.
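Schematically, in terms of standard textbook formulas in atomic units (my summary, not program output), the quantities computed in such a third run enter as

\[
A_{v'J' \to vJ} = \frac{4\alpha^3}{3}\,\omega^3\,\big|\langle v'J'|\mu(r)|vJ\rangle\big|^2,
\qquad
\tau_{v'J'} = \Big(\sum_{vJ} A_{v'J' \to vJ}\Big)^{-1},
\]

where ω is the transition energy (shifted by the ASYMptotic input for transitions between different electronic states), μ(r) is the transition dipole function supplied as an Observable, and τ is the radiative lifetime of the upper ro-vibrational level.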
6.47.1 Dependencies
VIBROT is free-standing and does not depend on any other program.
6.47.2 Files

Input files

The calculation of vibrational wave functions and spectroscopic constants uses no input files (except for the standard input). The calculation of transition properties uses VIBWVS files from two preceding VIBROT runs, redefined as VIBWVS1 and VIBWVS2.

Output files

VIBROT generates the file VIBWVS with vibrational wave functions for each v and J quantum number, when run in the wave function mode. If requested, VIBROT can also produce files VIBPLT with the fitted potential and property functions for later plotting.
6.47.3 Input
This section describes the input to the VIBROT program in the MOLCAS program system. The program name is
&VIBROT

Keywords
The first keyword to VIBROT is an indicator for the type of calculation that is to be performed. Two possibilities exist:
ROVIbrational spectrum: VIBROT will perform a vib-rot analysis and compute spectroscopic constants.

TRANsition moments: VIBROT will compute transition moment integrals using results from two previous calculations of the vib-rot wave functions. In this case the keyword Observable should be included, and it will be interpreted as the transition dipole moment.
Note that only one of the above keywords can be used in a single calculation. If none is given the program will only process the input section.
After this first keyword follows a set of keywords, which are used to specify the run. Most of them are optional.
The compulsory keywords are:
ATOMs: Gives the masses of the two atoms. Write the mass number (an integer) and the chemical symbol Xx, in this order, for each of the two atoms in free format. If the mass number is zero for an atom, the mass of the most abundant isotope will be used. All isotope masses are stored in the program. You may introduce your own masses by giving a negative integer value to the mass number (for one of the atoms or for both). The masses (in unified atomic mass units, or Da) are then read from the next entry (or the next two entries). The isotopes of hydrogen can be given as H, D, or T.

POTEntial: Gives the potential as an arbitrary number of lines. Each line contains a bond distance (in au) and an energy value (in au). A plot file of the potential is generated if the keyword Plot is added after the last energy input. One more entry should then follow, with three numbers specifying the start and end values for the internuclear distance and the spacing between adjacent plot points. This input must only be given together with the keyword RoVibrational spectrum.
In addition you may want to specify some of the following optional input:
TITLe: One single title line.

GRID: The next entry gives the number of grid points used in the numerical solution of the radial Schrödinger equation. The default value is 199. The maximum value that can be used is 4999.

RANGe: The next entry contains two distances, Rmin and Rmax (in au), specifying the range in which the vibrational wave functions will be computed. The default values are 1.0 and 5.0 au. Note that these values most often have to be given as input, since they vary considerably from one case to another. If the range specified is too small, the program will give a message informing the user that the vibrational wave function is large outside the integration range.

VIBRational: The next entry specifies the number of vibrational quanta for which the wave functions and energies are computed. The default value is 3.

ROTAtional: The next entry specifies the range of rotational quantum numbers. The default values are 0 to 5. If the orbital angular momentum quantum number (mℓ) is non-zero, the lower value will be adjusted to mℓ if the start value given in the input is smaller than mℓ.

ORBItal: The next entry specifies the value of the orbital angular momentum (0, 1, 2, etc.). The default value is zero.

SCALe: This keyword is used to scale the potential such that the binding energy is 0.1 au. This leads to better precision in the numerical procedure and is strongly advised for weakly bound potentials.

NOSPectroscopic: Only the wave function analysis will be carried out, but not the calculation of spectroscopic constants.

OBSErvable: This keyword indicates the start of input for radial functions of observables other than the energy, for example the dipole moment function. The next line gives a title for this observable. An arbitrary number of input lines follows; each line contains a distance and the corresponding value of the observable. As for the potential, this input can also end with the keyword Plot, to indicate that a file of the function for later plotting is to be constructed. The next line then contains the minimum and maximum R-values and the spacing between adjacent points. When this input is given with the top keyword RoVibrational spectrum, the program will compute matrix elements over the vibrational wave functions of the current electronic state. Transition moment integrals are instead obtained when the top keyword is Transition moments. In the latter case the calculation becomes rather meaningless if this input is not provided; the program will then only compute the overlap integrals between the vibrational wave functions of the two states. The keyword Observable can be repeated up to ten times in a single run. All observables should be given in atomic units.

TEMPerature: The next entry gives the temperature (in K) at which the vibrational averaging of observables will be computed. The default is 300 K.

STEP: The next entry gives the starting value for the energy step used in the bracketing of the eigenvalues. The default value is 0.004 au (88 cm-1). This value must be smaller than the zero-point vibrational energy of the molecule.

ASYMptotic: The next entry specifies the asymptotic energy difference between the two potential curves in a calculation of transition matrix elements. The default value is zero atomic units.

ALLRotational: By default, when the Transition moments keyword is given, only the transitions between the lowest rotational level in each vibrational state are computed. The keyword AllRotational specifies that the transitions between all rotational levels are to be included. Note that this may result in a very large output file.

PRWF: Requests the vibrational wave functions to be printed in the output file.

Input example
&VIBROT
RoVibrational spectrum
Title = Vib-Rot spectrum for FeNi
Atoms = 0 Fe 0 Ni
Potential
1.0 -0.516768
1.1 -0.554562
Plot = 1.0 10.0 0.1
Grid = 150
Range = 1.0 10.0
Vibrations = 10
Rotations = 2 10
Orbital = 2
Observable
Dipole Moment
1.0 0.102354
1.1 0.112898
Plot = 1.0 10.0 0.1
Comments: The vibration-rotation spectrum for FeNi will be computed using the potential curve given in the input. The 10 lowest vibrational levels will be obtained, and for each level the rotational states in the range J=2 to 10. The vib-rot matrix elements of the dipole function will also be computed. A plot file of the potential and the dipole function will be generated. The masses of the most abundant isotopes of Fe and Ni will be used.
Entanglement (physics)
From Citizendium, the Citizens' Compendium
[Image caption: Photonics is widely used when creating entanglement. (CC) Photo: Mike Seyfang]
There are three interrelated meanings of the word entanglement in physics. They are listed below and then discussed, both separately and in relation to each other.
• A combination of empirical facts, observed or only hypothetical, incompatible with the conjunction of three fundamental assumptions about nature, called "counterfactual definiteness", "relativistic local causality" and "no-conspiracy" (see below), but compatible with the conjunction of the last two of them ("relativistic local causality" and "no-conspiracy"). Such a combination will be called "empirical entanglement" (which is not a standard terminology[1]).
• A prediction of the quantum theory stating that the empirical entanglement must occur in appropriate physical experiments (called "quantum entanglement").
• In quantum theory there is a technical notion of "entangled state".
Entanglement cannot be reduced to shared randomness, and does not imply faster-than-light communication.
Due to quantum entanglement, quantum information is different from classical information, which leads to quantum communication, quantum games, quantum cryptography and quantum computation.
Empirical entanglement
Some people understand it easily, others find it difficult and confusing.
It is easy, since no physical or mathematical prerequisites are needed. Nothing like Newton's laws, the Schrödinger equation, or conservation laws, nor even particles or waves. Nothing like differentiation or integration, nor even linear equations.
It is difficult and confusing for the very same reason! It is highly abstract. Many people feel uncomfortable in such a vacuum of concepts and rush to return to the particles and waves.
The framework, and local causality
The following concepts are essential here.
• A physical apparatus that has a switch and several lights. The switch can be set to one of several possible positions. A little after that the apparatus flashes one of its lights.
• "Local causality": widely separated apparata are incapable of signaling to each other.
Otherwise the apparata are not restricted; they may use all kinds of physical phenomena. In particular, they may receive any kind of information that reaches them. We treat each apparatus as a black box: the switch position is its input, the light flashed is its output; we need not ask about its internal structure.
However, not knowing what is inside the black boxes, can we know that they do not signal to each other? There are two approaches, non-relativistic ("loose") and relativistic ("strict").
The loose approach: we open the black boxes, look, see nothing like mobile phones and rely on our knowledge and intuition.
The strict approach: we do not open the black boxes. Rather, we place them, say, 360,000 km apart (the least Earth-Moon distance) and restrict the experiment to a time interval of, say, 1 sec. Relativity theory states that they cannot signal to each other, for a good reason: a faster-than-light communication in one inertial reference frame would be a backwards-in-time communication in another inertial reference frame!
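The arithmetic behind these numbers is simply that in one second light covers

\[
c\,\Delta t \approx 3.0\times 10^{5}\ \mathrm{km} < 3.6\times 10^{5}\ \mathrm{km},
\]

so during the experiment not even a light-speed signal could travel from one apparatus to the other.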
Below, the strict approach is used (unless explicitly stated otherwise). Thus, the apparata are not restricted. They may contain mobile phones or whatever. They may interact with any external equipment, be it cell sites or whatever.
Falsifiability, and the no-conspiracy assumption
A claim is called falsifiable (or refutable) if it has observable implications. If some of these implications contradict some observed facts then the claim is falsified (refuted). Otherwise it is corroborated.
The relativistic local causality was never falsified; that is, a faster-than-light signaling was never observed. Does it mean that local causality is corroborated? This question is more intricate than it may seem.
Let A, B be two widely separated apparata, xA the input (the switch position) of A, and yB the output (the light flashed) of B. (For now we do not need yA and xB.) Local causality claims that xA has no influence on yB.
An experiment consisting of n trials is described by xA(i), yB(i) for i = 1,2,...,n. Imagine that n = 4 and
xA(1) = 1, xA(2) = 2, xA(3) = 1, xA(4) = 2,
yB(1) = 1, yB(2) = 2, yB(3) = 1, yB(4) = 2.
The data suggest that xA influences yB, but do not prove it. Two alternative explanations are possible:
• the apparatus B chooses yB at random (say, tossing a coin); the four observed equalities yB(i) = xA(i) are just a coincidence (of probability 1/16);
• the apparatus B alternates 1 and 2, that is, yB(i) = 1 for all odd i but yB(i) = 2 for all even i.
Consider a more thorough experiment: n = 1000, and xA(i) are chosen at random, say, tossing a coin. Imagine that yB(i) = xA(i) for all i = 1,2,...,n. The influence of xA on yB is shown very convincingly! But still, an alternative explanation is possible.
For choosing xA, the coin must be tossed within the time interval scheduled for the trial, since otherwise a slower-than-light signal can transmit the result to the apparatus B before the end of the trial. However, is the result really unpredictable in principle (not just in practice)? Not necessarily so. Moreover, according to classical mechanics, the future is uniquely determined by the past! In particular, the result of the coin tossing exists in the past as a complicated function of a huge number of coordinates and momenta of micro particles.
It is logically possible, but quite unbelievable that the future result of coin tossing is somehow spontaneously singled out in the microscopic chaos and transmitted to the apparatus B in order to influence yB. The no-conspiracy assumption claims that such exotic scenarios may be safely neglected.
The conjunction of the two assumptions, relativistic local causality and no-conspiracy, is falsifiable, but was never falsified; thus, both assumptions are corroborated.
Below, the no-conspiracy is always assumed (unless explicitly stated otherwise).
Counterfactual definiteness
In this section a single apparatus is considered.
A trial is described by a pair (x,y) where x is the input (the switch position) and y is the output (the light flashed). Is y a function of x? We may repeat the trial with the same x and get a different y (especially if the apparatus tosses a coin). We can set the switch to x again, but we cannot set all molecules to the same microstate. Still, we may try to imagine the past changed, asking a counterfactual question:[2]
• Which outcome would the experimenter have received (in the same trial) if he/she had set the switch to another position?
It is meant that only the input x is changed in the past, nothing else. The question may seem futile, since an answer cannot be verified empirically. Strangely enough, the question will appear to be very useful in the next section.
Classical physics can interpret the question as a change of external forces acting on a mechanical system of a large number of microscopic particles. It is unfeasible to calculate the answer, but anyway, the question makes sense, and the answer exists in principle:

y = f(x)

for some function f : X → Y, where X is the finite set of all possible inputs, and Y is the finite set of all possible outputs. Existence of this function f is called "counterfactual definiteness".
Repeating the experiment we get
y(i) = fi(x(i))
for i = 1,2,... Each time a new function fi appears; thus x(i)=x(j) does not imply y(i)=y(j). In the case of a single apparatus, counterfactual definiteness is not falsifiable, that is, has no observable implications. Surprisingly, for two (and more) apparata the situation changes dramatically.
Local causality and counterfactual definiteness
For two apparata, A and B, an experiment is described by two pairs, (xA,yA) and (xB,yB) or, equivalently, by a combined pair ((xA,xB), (yA,yB)). Counterfactual definiteness alone (without local causality) takes the form

(yA,yB) = f(xA,xB)

or, equivalently,

yA = fA(xA,xB), yB = fB(xA,xB).

Assume in addition that A and B are widely separated and local causality applies. Then xA cannot influence yB, and xB cannot influence yA, therefore

yA = fA(xA), yB = fB(xB).

These fA, fB are one-time functions; another trial may involve different functions.
An alternative language is logically equivalent, but makes the presentation more vivid. Imagine an experimenter, Alice, near the apparatus A, and another experimenter, Bob, near the apparatus B. Alice is given some input xA and must provide an output yA. The same holds for Bob, xB and yB. Once the inputs are received, no communication is permitted between Alice and Bob until the outputs are provided. The input xA is an element of a prescribed finite set XA (not necessarily a number); the same holds for yA and YA, xB and XB, yB and YB.
It may seem that the apparata A, B are of no use for Alice and Bob. Surprisingly, this is an illusion.
The simplest example of empirical entanglement is presented here. First, its idea is explained informally.
Alice and Bob pretend that they know a 2×2 matrix

(a b)
(c d)

consisting of numbers 0 and 1 only, satisfying four conditions:

a = b, c = d, a = c, but b ≠ d.
Surely they lie; these four conditions are evidently incompatible. Nevertheless Alice commits herself to show on request any row of the matrix, and Bob commits himself to show on request any column. We expect the lie to manifest itself on the intersection of the row and the column (not always but sometimes). However, Alice and Bob promise to always agree on the intersection!
More formally, xA=1 requests from Alice the first row, xA=2 the second; in every case yA must be either (0,0) or (1,1). From Bob, xB=1 requests the first column, in which case yB must be (0,0) or (1,1); and xB=2 requests the second column, in which case yB must be (0,1) or (1,0). The agreement on the intersection means that, for example, if xA=2 and xB=1 then the first element of the row yA must be equal to the second element of the column yB.
Without special apparata (A and B), Alice and Bob surely cannot fulfill their promise. Can the apparata help? This crucial question is postponed to the section "Quantum entanglement". Here we consider a different question: is it logically possible, under given assumptions, that Alice and Bob fulfill their promise?
Under all the three assumptions (counterfactual definiteness, local causality and no-conspiracy) we have yA = fA(xA) and yB = fB(xB) for some functions fA, fB. (These functions may change from one trial to another.) Specifically, fA(1) and fA(2), being two rows, form a 2×2 matrix satisfying the conditions a=b, c=d. Also fB(1) and fB(2), being two columns, form a 2×2 matrix satisfying the conditions a=c, b≠d. These two matrices necessarily differ in at least one of the four elements (since the four conditions are incompatible). Therefore it can happen that Alice and Bob disagree on the intersection, and moreover, it happens with probability at least 0.25 (whenever the inputs select an element where the two matrices differ). In the long run, Alice and Bob cannot fulfill their promise.
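This counting argument is easy to verify by brute force. The sketch below (my own illustration, not from the original text) enumerates every deterministic pair of one-time functions (fA, fB) and confirms that none of them agrees on the intersection for more than 3 of the 4 input pairs; since randomized strategies are probabilistic mixtures of deterministic ones, the 3/4 bound applies to them as well.

```python
from itertools import product

# Rows Alice may show: xA=1 -> (a,b) with a=b; xA=2 -> (c,d) with c=d.
# Columns Bob may show: xB=1 -> (a,c) with a=c; xB=2 -> (b,d) with b != d.
ROWS = [(0, 0), (1, 1)]            # allowed answers for either row request
COLS = {1: [(0, 0), (1, 1)],       # first column: a = c
        2: [(0, 1), (1, 0)]}       # second column: b != d

def agree(xA, yA, xB, yB):
    """Row yA and column yB agree on their intersection element."""
    return yA[xB - 1] == yB[xA - 1]

best = 0.0
# A deterministic strategy is a pair of one-time functions fA, fB.
for fA in product(ROWS, repeat=2):          # fA[xA-1] = row shown for input xA
    for fB in product(COLS[1], COLS[2]):    # fB[xB-1] = column shown for input xB
        wins = sum(agree(xA, fA[xA - 1], xB, fB[xB - 1])
                   for xA in (1, 2) for xB in (1, 2))
        best = max(best, wins / 4)

print(best)  # 0.75: no deterministic strategy wins more than 3 of 4 cases
```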
Waiving the counterfactual definiteness (but retaining local causality and no-conspiracy) we get the opposite result: Alice and Bob can fulfill their promise. Here is how.
Given xA and xB, there are two allowed yA and two allowed yB, thus, 4 allowed combinations (yA, yB). Two of them agree on the intersection of the row and the column; the other two disagree. Imagine that the apparata A, B choose at random (with equal probabilities 0.5, 0.5) one of the two combinations (yA, yB) that agree on the intersection. For example, given xA=2 and xB=1, we get either yA = (0,0) and yB = (0,0), or yA = (1,1) and yB = (1,1).
This situation is compatible with local causality, since yB gives no information about xA; also yA gives no information about xB. For example, given xA=2 and xB=1, we get either yB = (0,0) or yB = (1,1), with probabilities 0.5, 0.5; exactly the same holds given xA=1 and xB=1.
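A small simulation (again my own sketch; the function names are arbitrary) makes the last point tangible: sampling the agreeing combinations and histogramming Bob's outputs shows the same marginal distribution for both of Alice's inputs.

```python
import random
from collections import Counter

ROWS = [(0, 0), (1, 1)]
COLS = {1: [(0, 0), (1, 1)], 2: [(0, 1), (1, 0)]}

def apparata(xA, xB):
    """Sample one of the two (yA, yB) combinations agreeing on the intersection."""
    pairs = [(yA, yB) for yA in ROWS for yB in COLS[xB]
             if yA[xB - 1] == yB[xA - 1]]
    return random.choice(pairs)        # exactly two such pairs; pick uniformly

# Bob's marginal distribution of yB is the same whatever Alice's input is:
for xA in (1, 2):
    counts = Counter(apparata(xA, 1)[1] for _ in range(100000))
    print(xA, counts)   # roughly 50/50 between (0,0) and (1,1) in both cases
```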
Thus, empirical entanglement is logically possible. The question of its existence in the nature is addressed in the section "Quantum entanglement".
Entanglement is not just shared randomness
Widely separated apparata, unable to signal to each other, can be correlated. Correlations are established routinely by communication. For example, Alice and Bob, reading their copies of a newspaper, learn the result of yesterday's lottery drawing. This is called shared randomness. Likewise, the apparata A, B can share randomness by receiving signals from some external common source. However, shared randomness obeys the three assumptions (counterfactual definiteness, local causality and no-conspiracy) and therefore cannot produce entanglement. In other words, entanglement as a resource is substantially stronger than shared randomness.
Quantum entanglement
Classical bounds and quantum bounds
Classical physics obeys the counterfactual definiteness and therefore negates entanglement. Classical apparata A, B cannot help Alice and Bob to always win (that is, agree on the intersection). What about quantum apparata? The answer is quite unexpected.
First, quantum apparata cannot ensure that Alice and Bob win always. Moreover, the winning probability does not exceed

cos²(π/8) = (2+√2)/4 ≈ 0.854,

no matter which quantum apparata are used.
Second, there exist quantum apparata that ensure a winning probability higher than 3/4 = 0.75. This is a manifestation of entanglement, since under the three classical assumptions (counterfactual definiteness, local causality and no-conspiracy) the winning probability cannot exceed 3/4 (the classical bound). But moreover, ideal quantum apparata can reach the winning probability cos²(π/8) ≈ 0.854 (the quantum bound), and non-ideal quantum apparata can get arbitrarily close to this bound.
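The quantum bound can be reproduced numerically. The row/column game above is equivalent to the standard CHSH game (inputs x, y ∈ {0,1}, win iff a⊕b = x·y), and the following sketch — the textbook CHSH strategy, not something taken from this article — computes the winning probability for a shared maximally entangled pair measured at the usual angles:

```python
import numpy as np

# Shared state: the maximally entangled pair |Phi+> = (|00> + |11>)/sqrt(2).
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)

def basis(theta):
    """Orthonormal measurement basis in the x-z plane, rotated by theta."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

# Standard CHSH angles: Alice {0, pi/4}, Bob {pi/8, -pi/8}.
alice = {0: basis(0), 1: basis(np.pi / 4)}
bob   = {0: basis(np.pi / 8), 1: basis(-np.pi / 8)}

p_win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                amp = np.kron(alice[x][a], bob[y][b]) @ phi
                if (a ^ b) == (x & y):        # CHSH winning condition
                    p_win += 0.25 * abs(amp) ** 2

print(p_win, np.cos(np.pi / 8) ** 2)  # both ~0.8536, the quantum bound
```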
Third, a modification of the game, called "magic square game", makes it possible to win always. To this end we replace 2×2 matrices with 3×3 matrices, still of numbers 0 and 1 only, with the following conditions:
• the parity of each row is even,
• the parity of each column is odd.
The classical bound is equal to 8/9; the quantum bound is equal to 1.
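The classical bound 8/9 can be checked by the same brute-force method as before. A sketch (my own), assuming the winning condition is agreement of the requested row and column on their intersection:

```python
from itertools import product

EVEN = [r for r in product((0, 1), repeat=3) if sum(r) % 2 == 0]  # allowed rows
ODD  = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 1]  # allowed columns

best = 0
for rows in product(EVEN, repeat=3):       # Alice's answer to each row request
    for cols in product(ODD, repeat=3):    # Bob's answer to each column request
        wins = sum(rows[i][j] == cols[j][i] for i in range(3) for j in range(3))
        best = max(best, wins)

print(best / 9)   # 8/9: the classical bound for the magic square game
```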
Experimental status
Many amazing entanglement-related predictions of the quantum theory were tested in ingenious experiments using high-tech equipment. All tested predictions are confirmed. Still, each one of these experiments has a "loophole", that is, admits alternative, entanglement-free explanations. Such explanations are highly contrived. They would be rejected as unbelievable in a routine development of science. However, the entanglement problem is exceptional: fundamental properties of nature are at stake! Entanglement is also unbelievable for many people. Thus, the problem is still open; finer experiments will follow, until an unambiguous result is achieved.
Communication channels
According to the quantum theory, quantum objects manifest themselves via their influence on classical objects (more exactly, on classically described degrees of freedom). Every object admits a quantum description, but some objects may be described classically for all practical purposes, since their thermal fluctuations hide their quantal properties. These are called classical objects. Macroscopic bodies (more exactly, their coordinates) under usual conditions are classical. Digital information in computers is also classical.
A communication channel may be thought of as a chain of physical objects and physical interactions between adjacent objects. If all objects in the chain are quantal, the channel is called quantal. If at least one object in the chain is classical, the channel is called classical.
For example, newspapers, television, mobile phones and the Internet implement only classical channels. Quantum channels are usually implemented by sending a particle (photon, electron) or another microscopic object (ion) from a nonclassical source to a nonclassical detector through a low-noise medium.
Classical communication (that is, communication through a classical channel) can create shared randomness, but cannot create entanglement. Moreover, entanglement creation is impossible when Alice's apparatus A is connected to a source S by a quantum channel but Bob's apparatus B is connected to S by a classical channel. Here is an explanation.
The classical channel S-B is a chain containing a classical object C. By assumption, no chain of interactions connects A and B (via S, or otherwise) bypassing C. Therefore A and B are conditionally independent given a possible state c of C. The response yA of A to xA given c need not be a function gA(c,xA) of c and xA (uniqueness is not guaranteed), but still, we may choose one of possible responses yA and let gA(c,xA) = yA (so-called uniformization). Similarly, gB(c,xB) = yB. Now, given c, the two one-time functions fA(xA) = gA(c,xA) and fB(xB) = gB(c,xB) lead to a possible disagreement of Alice and Bob (on the intersection of the row and the column) by the argument used before (in the section "Example"). A more thorough analysis shows that the classical bound on the winning probability, deduced before from the counterfactual definiteness, holds also in the case treated here.
Entangled quantum states
A bipartite or multipartite quantum state, pure or mixed, is called entangled, if it cannot be prepared by means of shared randomness and local quantum operations. A quantum state that can be used for violating classical bounds, that is, for producing empirical entanglement, is necessarily entangled. It is unclear whether the converse implication holds, or not. Some entangled mixed states, so-called Werner states, obey classical bounds for all one-stage experiments. But multi-stage experiments in general are still far from being well understood.
Nonlocality and entanglement
In general
The words "nonlocal" and "nonlocality" occur frequently in the literature on entanglement, which creates a lot of confusion: it seems that entanglement means nonlocality! This situation has two causes, pragmatical and philosophical.
Here is the pragmatical cause. The word "nonlocal" sounds good. The phrase "non-CFD" (where CFD denotes counterfactual definiteness) sounds much worse, but is also incorrect; the correct phrase, involving both CFD and locality (and no-conspiracy, see the lead) is prohibitively cumbersome. Thus, "nonlocal" is often used as a conventional substitute for "able to produce empirical entanglement".[3]
The philosophical cause. Many people feel that CFD is more trustworthy than RLC (relativistic local causality), and NC (no-conspiracy) is even more trustworthy. Being forced to abandon one of them, these people are inclined to retain NC and CFD at the expense of abandoning RLC.
However, the quantum theory is compatible with RLC+NC. A violation of RLC+NC is called faster-than-light communication (rather than entanglement); it was never observed, and never predicted by the quantum theory. Thus RLC and NC are corroborated, while CFD is not. In this sense CFD is less trustworthy than RLC and NC.
For quantum states
Quantitative measures for entanglement are scantily explored in general. However, for pure bipartite quantum states the amount of entanglement is usually measured by the so-called entropy of entanglement. On the other hand, several natural measures of nonlocality are invented (see above about the meaning of "nonlocality"). Strangely enough, non-maximally entangled states appear to be more nonlocal than maximally entangled states, which is known as "anomaly of nonlocality"; nonlocality and entanglement are not only different concepts, but are really quantitatively different resources.[4] According to the asymptotic theory of Bell inequalities, even though entanglement is necessary to obtain violation of Bell inequalities, the entropy of entanglement is essentially irrelevant in obtaining large violation.[5]
1. Experts often call it "nonlocality", thus confusing non-experts; see Sect. 4.1.
2. "Die Geschichte kennt kein Wenn" (Karl Hampe). Whether physics has subjunctive mood or not, this is the question of counterfactual definiteness.
3. Physical terminology can mislead non-experts. Some examples: "quantum telepathy"; "quantum teleportation"; "Schrödinger cat state"; "charmed particle".
4. A.A. Methot and V. Scarani, "An anomaly of non-locality" (2007), Quantum Information and Computation, 7:1/2, 157-170; also arXiv:quant-ph/0601210.
5. M. Junge and C. Palazuelos, "Large violation of Bell inequalities with low entanglement" (2010), arXiv:1007.3043. |
Wave packet in a two-dimensional hexagonal crystal
Duan, Wen-shan and Parkes, John and Lin, Mai-mai (2005) Wave packet in a two-dimensional hexagonal crystal. Physics of Plasmas, 12 (2). 022106-1. ISSN 1070-664X
The propagation of a nonlinear wave packet of dust lattice waves (DLW) in a two-dimensional hexagonal crystal is investigated. The dispersion relation and the group velocity for DLW are found for longitudinal m and transverse n propagation directions. The reductive perturbation method is used to derive a (2 + 1)-dimensional nonlinear Schrödinger equation (NLSE) that governs the weakly nonlinear propagation of the wave packet. This NLSE is used to investigate the modulational instability of the packet of DLW. It is found that the instability region is different for different propagation directions. |
Airy function
In the physical sciences, the Airy function (or Airy function of the first kind) Ai(x) is a special function named after the British astronomer George Biddell Airy (1801–1892). The function Ai(x) and the related function Bi(x) are linearly independent solutions to the differential equation y′′ − xy = 0, known as the Airy equation.
Plot of Ai(x) in red and Bi(x) in blue
For real values of x, the Airy function of the first kind can be defined by the improper Riemann integral:

Ai(x) = (1/π) ∫₀^∞ cos(t³/3 + xt) dt,

which converges by Dirichlet's test. For any real number x there is a positive real number M such that the function t³/3 + xt is increasing, unbounded and convex with continuous and unbounded derivative on the interval [M, ∞). The convergence of the integral on this interval can be proven by Dirichlet's test after the substitution u = t³/3 + xt.
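As a quick numerical sanity check (a sketch of mine using SciPy, not part of the original article), a naive truncation of the defining integral already reproduces scipy.special.airy to about three digits; an accurate evaluation would instead rotate the integration contour into the complex plane.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def ai_integral(x, upper=25.0):
    """Naive truncation of (1/pi) * integral_0^infinity cos(t^3/3 + x*t) dt."""
    val, _ = quad(lambda t: np.cos(t**3 / 3 + x * t), 0, upper, limit=1000)
    return val / np.pi

for x in (-2.0, 0.0, 1.0):
    ai_ref = airy(x)[0]                # airy() returns the tuple (Ai, Ai', Bi, Bi')
    print(x, ai_integral(x), ai_ref)   # the values agree to roughly three digits
```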
y = Ai(x) satisfies the Airy equation y′′ − xy = 0.
This equation has two linearly independent solutions. Up to scalar multiplication, Ai(x) is the solution subject to the condition y → 0 as x → ∞. The standard choice for the other solution is the Airy function of the second kind, denoted Bi(x). It is defined as the solution with the same amplitude of oscillation as Ai(x) as x → −∞ which differs in phase by π/2:

Bi(x) = (1/π) ∫₀^∞ [exp(−t³/3 + xt) + sin(t³/3 + xt)] dt.
The values of Ai(x) and Bi(x) and their derivatives at x = 0 are given by

Ai(0) = 1/(3^(2/3) Γ(2/3)), Ai′(0) = −1/(3^(1/3) Γ(1/3)),
Bi(0) = 1/(3^(1/6) Γ(2/3)), Bi′(0) = 3^(1/6)/Γ(1/3).
Here, Γ denotes the Gamma function. It follows that the Wronskian of Ai(x) and Bi(x) is 1/π.
When x is positive, Ai(x) is positive, convex, and decreasing exponentially to zero, while Bi(x) is positive, convex, and increasing exponentially. When x is negative, Ai(x) and Bi(x) oscillate around zero with ever-increasing frequency and ever-decreasing amplitude. This is supported by the asymptotic formulae below for the Airy functions.
The Airy functions are orthogonal[1] in the sense that

∫_{−∞}^{∞} Ai(t + x) Ai(t + y) dt = δ(x − y),
again using an improper Riemann integral.
Asymptotic formulae
Ai(blue) and sinusoidal/exponential asymptotic form of Ai(magenta)
Bi(blue) and sinusoidal/exponential asymptotic form of Bi(magenta)
As explained below, the Airy functions can be extended to the complex plane, giving entire functions. The asymptotic behaviour of the Airy functions as |z| goes to infinity at a constant value of arg(z) depends on arg(z): this is called the Stokes phenomenon. For |arg(z)| < π we have the following asymptotic formula for Ai(z):[2]

Ai(z) ~ e^(−ζ) / (2√π z^(1/4)), where ζ = (2/3) z^(3/2),
and a similar one for Bi(z), but only applicable when |arg(z)| < π/3:

Bi(z) ~ e^(ζ) / (√π z^(1/4)).
A more accurate formula for Ai(z) and a formula for Bi(z) when π/3 < |arg(z)| < π or, equivalently, for Ai(−z) and Bi(−z) when |arg(z)| < 2π/3 but not zero, are:[2][3]

Ai(−z) ~ sin(ζ + π/4) / (√π z^(1/4)), Bi(−z) ~ cos(ζ + π/4) / (√π z^(1/4)).
When |arg(z)| = 0 these are good approximations but are not asymptotic because the ratio between Ai(−z) or Bi(−z) and the above approximation goes to infinity whenever the sine or cosine goes to zero. Asymptotic expansions for these limits are also available. These are listed in (Abramowitz and Stegun, 1983) and (Olver, 1974).
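These leading-order formulae are easy to test; a short sketch (mine, using SciPy) compares scipy.special.airy against the leading asymptotic terms for positive real arguments, where the ratios should tend to 1:

```python
import numpy as np
from scipy.special import airy

def ai_asymptotic(x):
    """Leading asymptotic term for Ai(x) as x -> +infinity."""
    zeta = (2.0 / 3.0) * x ** 1.5
    return np.exp(-zeta) / (2 * np.sqrt(np.pi) * x ** 0.25)

def bi_asymptotic(x):
    """Leading asymptotic term for Bi(x) as x -> +infinity."""
    zeta = (2.0 / 3.0) * x ** 1.5
    return np.exp(zeta) / (np.sqrt(np.pi) * x ** 0.25)

for x in (2.0, 5.0, 10.0):
    ai, _, bi, _ = airy(x)
    print(x, ai / ai_asymptotic(x), bi / bi_asymptotic(x))  # ratios -> 1
```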
One is also able to obtain asymptotic expressions for the derivatives Ai′(z) and Bi′(z). Similarly to before, when |arg(z)| < π:[3]

Ai′(z) ~ −z^(1/4) e^(−ζ) / (2√π).
When |arg(z)| < π/3 we have:[3]

Bi′(z) ~ z^(1/4) e^(ζ) / √π.
Similarly, expressions for Ai′(−z) and Bi′(−z) when |arg(z)| < 2π/3 but not zero, are:[3]

Ai′(−z) ~ −z^(1/4) cos(ζ + π/4) / √π, Bi′(−z) ~ z^(1/4) sin(ζ + π/4) / √π.
Complex arguments
We can extend the definition of the Airy function to the complex plane by

Ai(z) = (1/2πi) ∫_C exp(t³/3 − zt) dt,

where the integral is over a path C starting at the point at infinity with argument −π/3 and ending at the point at infinity with argument π/3. Alternatively, we can use the differential equation y′′ − xy = 0 to extend Ai(x) and Bi(x) to entire functions on the complex plane.
The asymptotic formula for Ai(x) is still valid in the complex plane if the principal value of x^(3/2) is taken and x is bounded away from the negative real axis. The formula for Bi(x) is valid provided x is in the sector {x ∈ ℂ : |arg(x)| < (π/3) − δ} for some positive δ. Finally, the formulae for Ai(−x) and Bi(−x) are valid if x is in the sector {x ∈ ℂ : |arg(x)| < (2π/3) − δ}.
It follows from the asymptotic behaviour of the Airy functions that both Ai(x) and Bi(x) have an infinity of zeros on the negative real axis. The function Ai(x) has no other zeros in the complex plane, while the function Bi(x) also has infinitely many zeros in the sector {z ∈ ℂ : π/3 < |arg(z)| < π/2}.
Surface and contour plots of the real part, imaginary part, modulus and argument of Ai(z) and Bi(z) over the complex plane.
Relation to other special functions
For positive arguments, the Airy functions are related to the modified Bessel functions:

Ai(x) = (1/π) √(x/3) K_(1/3)(ζ), Bi(x) = √(x/3) [I_(1/3)(ζ) + I_(−1/3)(ζ)], where ζ = (2/3) x^(3/2).

Here, I_(±1/3) and K_(1/3) are solutions of

x² y′′ + x y′ − (x² + 1/9) y = 0.

The first derivative of the Airy function is

Ai′(x) = −(x / (π√3)) K_(2/3)(ζ).
The functions K_(1/3) and K_(2/3) can be represented in terms of rapidly convergent integrals[4] (see also modified Bessel functions).
For negative arguments, the Airy functions are related to the Bessel functions:

Ai(−x) = √(x/9) [J_(1/3)(ζ) + J_(−1/3)(ζ)], Bi(−x) = √(x/3) [J_(−1/3)(ζ) − J_(1/3)(ζ)], where ζ = (2/3) x^(3/2).

Here, J_(±1/3) are solutions of

x² y′′ + x y′ + (x² − 1/9) y = 0.
Scorer's functions Hi(x) and −Gi(x) solve the equation y′′ − xy = 1/π. They can also be expressed in terms of the Airy functions:

Gi(x) = Bi(x) ∫_x^∞ Ai(t) dt + Ai(x) ∫_0^x Bi(t) dt,
Hi(x) = Bi(x) ∫_(−∞)^x Ai(t) dt − Ai(x) ∫_(−∞)^x Bi(t) dt.
Fourier transform
Using the definition of the Airy function Ai(x), it is straightforward to show that its Fourier transform is given by

F[Ai](k) := ∫_(−∞)^(∞) Ai(x) e^(−2πixk) dx = e^(i(2πk)³/3).
Quantum mechanics
The Airy function is the solution to the time-independent Schrödinger equation for a particle confined within a triangular potential well and for a particle in a one-dimensional constant force field. For the same reason, it also serves to provide uniform semiclassical approximations near a turning point in the WKB approximation, when the potential may be locally approximated by a linear function of position. The triangular potential well solution is directly relevant for the understanding of electrons trapped in semiconductor heterojunctions.
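A concrete instance of the triangular-well solution is the "quantum bouncer": a particle above a hard floor in a uniform gravitational field, whose wavefunction must vanish at the floor, so the energy levels are fixed by the (negative) zeros of Ai. The sketch below is my own illustration using SciPy's ai_zeros; the neutron is chosen because such gravitational quantum states have been observed, with a ground state near 1.4 peV.

```python
import numpy as np
from scipy.special import ai_zeros

# Quantum bouncer: linear potential V(z) = m*g*z for z > 0, hard wall at z = 0.
# Energies: E_n = -a_n * (m * g**2 * hbar**2 / 2)**(1/3), a_n = zeros of Ai.
hbar = 1.054571817e-34          # J s
m = 1.67492749804e-27           # neutron mass, kg
g = 9.81                        # m / s^2

a_n = ai_zeros(4)[0]            # first four zeros of Ai (all negative)
E_n = -a_n * (m * g**2 * hbar**2 / 2) ** (1 / 3)
print(E_n / 1.602176634e-19 * 1e12)   # energies in peV: ~1.41, 2.46, 3.32, 4.08
```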
A transversally asymmetric optical beam, whose electric field profile is given by an Airy function, has the interesting property that its region of maximum intensity accelerates towards one side instead of propagating in a straight line, as is the case for symmetric beams. This is at the expense of the low-intensity tail being spread in the opposite direction, so the overall momentum of the beam is of course conserved.
The Airy function is named after the British astronomer and physicist George Biddell Airy (1801–1892), who encountered it in his early study of optics in physics (Airy 1838). The notation Ai(x) was introduced by Harold Jeffreys. Airy had become the British Astronomer Royal in 1835, and he held that post until his retirement in 1881.
1. ^ David E. Aspnes, Physical Review, 147, 554 (1966)
2. ^ a b Abramowitz & Stegun (1983, p. 448), Eqns 10.4.59, 10.4.61
3. ^ a b c d Abramowitz & Stegun (1983, p. 448), Eqns 10.4.60 and 10.4.64
4. ^ M. Kh. Khokonov, "Cascade Processes of Energy Loss by Emission of Hard Photons", JETP 99 (4), 690–707 (2004).
• Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 10". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 448. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
• Airy (1838), "On the intensity of light in the neighbourhood of a caustic", Transactions of the Cambridge Philosophical Society, University Press, 6: 379–402, Bibcode:1838TCaPS...6..379A
• Frank William John Olver (1974). Asymptotics and Special Functions, Chapter 11. Academic Press, New York.
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 6.6.3. Airy Functions", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
• Vallée, Olivier; Soares, Manuel (2004), Airy functions and applications to physics, London: Imperial College Press, ISBN 978-1-86094-478-9, MR 2114198, archived from the original on 2010-01-13, retrieved 2010-05-14
Atom lasers
06 Aug 1999
The exotic quantum phenomenon of Bose–Einstein condensation is the key ingredient in a new type of laser that emits atoms rather than photons, and that promises to revolutionize atom optics
Soon after the invention of the laser in the late 1950s many dubbed the discovery “a solution in search of a problem”. Nowadays lasers are used in an enormous range of scientific and technological applications, thanks to their high intensity and their ability to emit light in an extremely narrow range of wavelengths. Indeed, the laser has revolutionized the whole field of optics and now plays a central role in the world’s communications networks. The market for lasers, optical amplifiers and other optoelectronic components is worth billions of dollars every year.
Atomic physicists are hoping that the invention of the “atom laser” will spark a similar revolution in the field of atom optics, or matter-wave optics as it is also known. Researchers in this field have relied heavily on the analogy between light and the wave-like nature of atoms. Lenses, mirrors and beam splitters have all been developed to control atomic beams just as their optical counterparts manipulate light. But what has been lacking in atom optics, until recently, is a source capable of producing an intense, highly directional, coherent beam in which all the atoms have the same wavelength just like the photons in a laser beam. This would be an atom laser.
Bose-Einstein basics
The idea for an atom laser predates the demonstration of the exotic quantum phenomenon of Bose-Einstein condensation in dilute atomic gases. But it was only after the first such condensate was produced in 1995 that the pursuit to create a laser-like source of atomic de Broglie waves became intense.
Figure 1
In a Bose condensate all the atoms occupy the same quantum state and can be described by the same wavefunction. The condensate therefore has many unusual properties not found in other states of matter. So why can we think of a Bose condensate as a coherent source of matter waves? To address this crucial point we have to remind ourselves of some of the physics behind the properties of laser light.
In a laser all the photons share the same wavefunction. This is possible because photons have an intrinsic angular momentum, or “spin”, of the Planck constant h divided by 2π. Particles that have a spin that is an integer multiple of h/2π obey Bose-Einstein statistics. This means that more than one so-called boson can occupy the same quantum state. Particles with half-integer spin – such as electrons, neutrons and protons, which all have spin h/4π – obey Fermi-Dirac statistics. Only one fermion can occupy a given quantum state.
A composite particle, such as an atom, is a boson if the sum of its protons, neutrons and electrons is an even number; the composite particle is a fermion if this sum is an odd number. Sodium-23 atoms, for example, are bosons, so a large number of them can be forced to occupy the same quantum state and therefore have the same wavefunction. To achieve this, a large number of atoms must be confined within a tiny trap and cooled to submillikelvin temperatures using a combination of optical and magnetic techniques (see “Bose condensates make quantum leaps and bounds” and Townsend, Ketterle and Stringari in further reading).
In this article we will concentrate on the properties of the condensates rather than on their creation. An important property of laser light is that it is monochromatic. By analogy, in a Bose-Einstein condensate all the atoms have the same energy and hence the same de Broglie wavelength. If this property can be maintained when the atoms are released from the condensate, we will have a highly monochromatic source of matter waves.
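To get a feeling for the numbers, the following sketch (my own, not from the article) evaluates the thermal de Broglie wavelength λ = h/√(2πm·kB·T) for sodium atoms; at the submillikelvin temperatures mentioned above it grows to the micrometre scale, comparable to optical wavelengths.

```python
import numpy as np

h = 6.62607015e-34             # Planck constant, J s
kB = 1.380649e-23              # Boltzmann constant, J / K
m_Na = 23 * 1.66053907e-27     # mass of a sodium-23 atom, kg

def thermal_de_broglie(T):
    """Thermal de Broglie wavelength h / sqrt(2 pi m kB T), in metres."""
    return h / np.sqrt(2 * np.pi * m_Na * kB * T)

for T in (300.0, 1e-6, 100e-9):          # room temperature, 1 uK, 100 nK
    print(T, thermal_de_broglie(T))
# At 100 nK the wavelength is ~1 micrometre -- comparable to optical light,
# which is why condensed atoms behave as macroscopic coherent matter waves.
```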
Another important property of laser light is that its intensity remains very stable. This property (called second-order coherence, which is discussed below) is rarely exploited in practical applications, but it is a feature that distinguishes a laser from a thermal light source, such as a filament lamp.
Output coupling
Bose-Einstein condensates are produced in confining potentials such as magnetic or optical traps by exploiting either the atoms’ magnetic moment or an electric dipole moment induced by lasers. In a magnetic trap, for instance, once the atoms have been cooled and trapped by lasers, the light is switched off and an inhomogeneous magnetic field provides a confining potential around the atoms. The trap is analogous to the optical cavity formed by the mirrors in a conventional laser (see figure 1).
But to make a laser we need to extract the coherent field from the optical cavity in a controlled way. This technique is known as “output coupling”. In the case of a conventional laser the output coupler is a partially transmitting mirror. Output coupling for atoms can be achieved by transferring them from states that are confined to ones that are not, typically by changing an internal degree of freedom, such as the magnetic states of the atoms.
Figure 2
The first demonstration of atomic output coupling from a Bose-Einstein condensate was performed with sodium atoms in a magnetic trap by Chris Townsend, Wolfgang Ketterle and co-workers at the Massachusetts Institute of Technology (MIT) in 1997. Only the atoms that had their magnetic moments pointing in the opposite direction to the magnetic field were trapped. The MIT researchers applied short radio-frequency pulses to “flip” the spins of some of the atoms and therefore release them from the trap (see figure 2a). The extracted atoms then accelerated away from the trap under the force of gravity. The output from this rudimentary atom laser was a series of pulses that expanded as they fell due to repulsive interactions between the ejected atoms and those inside the trap (see figure 3a).
In the MIT experiment, fluctuations in the confining magnetic field caused variations in the frequency needed to flip the spins of the atoms in the trap. To get round this problem, the radio-frequency field was pulsed on a timescale that was short compared with the period over which the magnetic field changed. The short, pulsed nature of the output coupling meant that the coherence length (i.e. the distance over which the matter waves remained “in step”) was limited to the length of the condensate itself. However, the coherence length can be increased by essentially “dribbling” the condensate out of the trap using a low rate of continuous output coupling.
Recently Theodor Hänsch and colleagues at the Max Planck Institute for Quantum Optics in Munich extracted a continuous atom beam that lasted for 0.1 s. The Munich team used radio-frequency output coupling in an experimental set-up that was similar to the one at MIT but used more stable magnetic fields (see figure 3b).
Figure 3
In 1998 Brian Anderson and Mark Kasevich at Yale University in the US demonstrated how to extract atoms from a one-dimensional, periodic optical trap, known as an optical lattice. Two laser beams pointing in opposite directions interacted to form a standing wave that trapped the atoms in the vertical direction. The optical potential was sufficiently shallow that the atoms were able to both tunnel out of the traps and accelerate away via gravity (see figure 2b). The Yale team loaded the optical lattice with atoms from a Bose-Einstein condensate. This meant that the atoms that escaped from each trap were “phase coherent” i.e. the phases of the matter waves were correlated. The series of downward-falling output pulses was akin to the output of a mode-locked, pulsed laser (see figure 3c).
A new output-coupling technique was recently demonstrated by two of the authors (William Phillips and Kristian Helmerson) and colleagues at the US National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. The atoms were extracted from a magnetic trap, using an optical technique known as stimulated Raman scattering, to change the quantum state of the condensate atoms from one that is confined to one that is not. The condensate atoms absorb photons at one wavelength and emit photons at a slightly different wavelength (figure 2c). This gives the atoms a “momentum kick” away from the remaining trapped atoms. Both the magnitude and direction of the output beam can be selected by varying the orientation of the laser beams. Unlike the other output-coupling schemes, the NIST technique produces an atom laser that does not rely on gravity.
This output-coupling scheme has an additional advantage: the atoms in the beam have a transverse-momentum spread that is greatly reduced. In the other atom lasers repulsive interactions between the atoms cause a large momentum spread. In the NIST experiment, however, the atoms receive a large kick in the selected direction and spend little time in the presence of the other atoms, thereby reducing the problem.
The laser light used to release the atoms from the trap can be pulsed or continuous, just like the radio-frequency radiation in the other experiments. In the NIST experiments the magnetic field used to trap the atoms had a rotating component. The repetition rate of the output-coupling pulses was essentially synchronized to this rotating field. The rate was fast enough so that the clouds of atoms leaving the trap overlapped in a horizontal and essentially continuous beam (see figure 3d).
The coherence length of the various atom-laser beams has not been measured directly. Nevertheless, the NIST team has shown that two successive pulses are fully coherent over their pulse length, which is the size of the condensate. Because the output-coupling methods are coherent, it is likely that the coherence length of the lasers in figure 3 is in fact much larger than the size of the condensate.
Atom-laser theory
Theoretical studies of atom lasers have been stimulated by the increasing availability of experimental data. The challenge is to develop a quantum field theory of output coupling and then use it to optimize the design of practical systems. In 1998 Bernhard Kneer and colleagues at the University of Ulm in Germany developed a theoretical model that combined various aspects of optical-laser theory and the theory that describes static Bose-Einstein condensates. These treatments, however, do not fully address the question of the coherence of the output beam.
In reality, light and matter waves are not completely coherent – a fact that greatly complicates any theoretical description. The coherence of a condensate or output beam can be characterized using the theory of “partially” coherent matter-wave fields. The theory incorporates various so-called coherence functions that provide quantitative information about the coherence. The simplest of these, first-order coherence, tells us whether we can see interference fringes formed by two overlapping fields. Higher-order coherence functions can represent intensity correlations between fields, for example.
According to the theory, the matter waves output by an atom laser are coherent in two respects: they contain a narrow range of wavelengths and they are much more stable in intensity than thermal beams. In 1997 Robert Dodd and co-workers at NIST calculated the intensity stability for a trapped condensate within the framework of quantum field theory. We now need to do this for the flowing output from a condensate. Researchers at Oxford University are aiming to calculate the coherence functions for the output-coupling schemes mentioned earlier using so-called finite-temperature field theory. The aim of the work is to tune the output-coupler parameters to maximize the coherence of the atom-laser output.
An important challenge in building a continuous atom laser is to design an output-coupling scheme that keeps the phase of the matter waves “in step” over a long period of time. Suppose that we describe the matter wave by I(x,t) = A(x,t) cos(ωt + φ(x,t)) and we can keep the intensity, A, constant. The overall coherence of the output is then limited by the stability of the phase, φ, over time and in space. If the phase wanders over time then it will also limit the line width of the atom laser via the uncertainty principle. Several theoretical studies to date have addressed the question of phase stability, while the stability of the condensate in real systems remains an area of active research.
Atom-laser applications
The possibility of producing a coherent beam of atoms that could be collimated to travel long distances, or brought to a tiny focus like an optical laser, opens up a whole host of applications. Although it would be imprudent to try to predict all of the applications that will arise, there are reasons to believe that the atom laser will be a significant scientific tool in the future. Atom lasers may have a major impact on the fields of atom optics, atom lithography, precision atomic clocks and other measurements of fundamental standards.
One application for which the coherence of an atom laser is critical is atom holography. Just as conventional holography uses the diffraction of a photon beam to reconstruct a 3-D image, atom holography uses the diffraction of atoms. As the de Broglie wavelength of the atoms is much smaller than the wavelength of light, an atom laser could create much higher resolution holographic images. Atom holography might be used to project complex integrated-circuit patterns, just a few nanometres in scale, onto semiconductors.
The first atom holograms were demonstrated in 1996 by Fujio Shimizu and colleagues at the University of Tokyo, in collaboration with Jun-ichi Fujita at NEC Research, using laser-cooled atoms. However, the matter waves in these experiments were only partially coherent because the atoms were not all in the same quantum state.
In the case of laser-cooled gases, the level of coherence needed to create a hologram is achieved by selecting a small portion of the atoms. Although this increases the spatial coherence of the matter waves, it is at the expense of the flux or number of atoms available for the duration of the experiment (which is often determined by the overstretched patience of the graduate student). The problem would be alleviated by having a source in which most of the atoms are in the same quantum state. An atom laser is such a source and could provide a much more intense and fully coherent beam of atoms.
Holography is a two-step process. First, a hologram – a sort of diffraction grating containing information about the object – is produced. Then a beam of light (or atoms) is diffracted by the hologram to form the image. In optical holography, the hologram is often made by interfering a laser beam with light that has been reflected from an object. The resulting diffraction pattern is recorded on photographic film. However, the diffraction pattern can also be generated by computer, so that an image can be formed without ever actually using an object.
In the atom-holography experiments, an image has been created by diffracting a coherent beam of atoms through a grating that was manufactured using electron-beam lithography. The image was recorded on a “microchannel plate” – a detector that is sensitive to atoms. So far, atom holography has been able to produce 2-D images, and not the familiar 3-D ones of optical holography.
A related application, which might also benefit from a source of coherent matter waves, is atom interferometry. In an atom interferometer an atomic wave packet is coherently split into two wave packets that follow different paths before recombining. The interference pattern created when the two wave packets recombine tells us something about the phase difference between the two paths. Atom interferometers that are more sensitive than optical interferometers could be used to test quantum theory, and may even be able to detect changes in space-time (see Physics World March 1997 pp43-48). This is because the de Broglie wavelength of the atoms is smaller than the wavelength of light, the atoms have mass, and because the internal structure of the atom can also be exploited.
Until now, all atom-interferometry experiments have used thermal atomic beams, analogous to the lamps that were used in the early days of optics experiments. Filtering is typically used to reduce the energy spread of the beam and achieve the degree of coherence needed to see the interference effects. The coherence length, however, is short, limiting the use of atom beams to interferometers that have “arms” of equal length – otherwise the interference pattern would be washed out.
Atom lasers would allow the use of devices with unequal path lengths, such as Michelson interferometers. Such devices may provide a way to measure lengths over large distances with unprecedented precision.
Nonlinear atom optics
Just as the invention of the laser enabled the field of nonlinear optics to flourish, intense sources of coherent matter waves have opened up a similar field in atom optics. Until recently, most atom-optics experiments could be thought of as single-particle phenomena where the interactions between particles could be neglected. In 1993 Pierre Meystre and colleagues at the University of Arizona in the US considered a new kind of atom-optics experiment where interactions between the atoms were crucial. They called this new field “nonlinear atom optics”.
Figure 4
In conventional nonlinear optics, photons interact with each other through some mediating material, such as a transparent crystal. A common nonlinear optical phenomenon is “four-wave mixing”. Here, three waves are sent into a nonlinear medium and the exchange of energy and momentum between the waves results in the production of a fourth wave (see figure 4). A quantum mechanical description of this process involves two photons from separate beams annihilating while two other photons are created. One of these photons adds to the third beam and amplifies it, while the other represents a new, fourth beam.
In 1998 Paul Julienne from NIST, and Marek Trippenbach and Yehuda Band from Ben-Gurion University of the Negev in Israel predicted that the nonlinear mean-field interactions between atoms in a Bose-Einstein condensate, like the one at NIST, could lead to four-wave mixing of matter waves. They predicted that if three condensates of appropriate momenta collided, the term in the nonlinear Schrödinger equation that describes the interactions between the atoms would give rise to a fourth wave. At the atomic level, this process can be described as a collision between two atoms from separate matter-wave beams. One of the atoms is stimulated so that it scatters in the direction of the third incident matter-wave beam. By the conservation of momentum, the other atom goes off to make a fourth, separate beam.
The actual experiment performed at NIST did not use three separate condensates. Instead, lasers were used to divide a single condensate into three different momentum states via a process called Bragg diffraction. This is similar to the stimulated Raman process described earlier, except that the atoms return to the same magnetic sublevel. However, the absorption and emission of photons gives the atom a momentum kick in a well defined direction.
Starting with a condensate at rest, two separate pulses of interfering laser beams were applied to create the Bragg “diffraction grating” that divided the atoms roughly equally into three different momentum states, including the state of the initial condensate. When these pulses were applied fast enough – that is before the different momentum states had a chance to separate – atoms in a fourth momentum state were produced (see figure 4c).
The future of atom lasers
Although we have touched on some of the future applications of atom lasers, most of them (and even those that we cannot predict) are likely to depend on future developments in atom lasers themselves. Again, using developments in optical laser technology as a guide, we speculate on some of the possible future directions of atom-laser research.
Most continuous-wave optical lasers are truly continuous in the sense that they are continually fed energy or “pumped” so that they can supply photons indefinitely. A truly continuous source of coherent matter waves could be similarly achieved only if the condensate could be replenished continually. Schemes for steady-state condensate formation have been devised but so far none has been demonstrated in the laboratory.
The cavity in an optical laser is typically many wavelengths long and as a result can support several different frequencies or modes. In a Bose-Einstein condensate, however, the atoms typically occupy the lowest energy state of the trap. It remains to be investigated whether condensates can be produced in higher-energy states and then extracted from the trap to give a higher-energy beam of atoms.
In recent years laser cooling and neutral-atom manipulation have developed into mature areas of atomic physics and have boosted the development of matter-wave optics. More recently the discovery of Bose-Einstein condensation has heralded the new field of atom lasers, one that promises more exciting developments as the range of systems and degree of control over them increases.
LOG#249. Basic twistor theory.
This is my last normal post. Welcome to those who read me. TSOR is ending, and from its ashes another project will arise. That is inevitable. I want to use new TeX packages (which is not easy here), to simplify things, and to write better the things I wish to tell the world.
1. Twistor: the introduction
Roger Penrose formulated twistor theory in the hope of making complex geometry, and not real geometry, the fundamental arena of geometric theoretical physics, and a better way to understand quantum mechanics. Quantum Mechanics (QM) is based on the complex structure of the Hilbert space of physical states (of either finite or infinite dimension!). Probability amplitudes are complex numbers. That is, complex numbers describe oscillations interpreted as probability waves! On the other hand, relativity implies that spacetime points are real four-dimensional (in general D-dimensional) vectors. But this is again a restricted option determined by experiments. That the coordinates of spacetime are real numbers is just a hypothesis of our mathematical models, despite the fact that it is well supported by experiments! The main difficulty is a consistent formulation of special relativistic quantum theory. Even when possible, as it is in the form of quantum field theory, many questions arise: entanglement, the measurement problem, the collapse of wave functions/state vector reduction, and the quantum gravity issue.
Twistor theory was created with the idea of treating the real coordinates of spacetime points as composite quantities built from more general complex objects called twistors. Therefore, in twistor theory, the most fundamental objects are twistors instead of spacetime points. Twistor theory is pointless (in the real sense) geometry.
Mathematically, a first approximation to what a twistor is comes from conformal O(4,2) spinors: twistors are complex 4-vectors in the fundamental representation of SU(2,2), the covering group of the conformal group, with \overline{O(4,2)}\cong SU(2,2). A correspondence between twistors and spacetime points is given by the so-called incidence equation or Penrose relation.
The twistor formalism originally introduced by Penrose for 4D spacetime can be extended in several ways:
• Extending the Penrose-relation in a supersymmetric way one obtains a correspondence between the supertwistors and the points of D=4 superspaces.
• Replacing the complex numbers by quaternions (or octonions, Clifford numbers) in the Penrose relation, one can bring the quaternionic (octonionic, cliffordonic) twistors into connection with the points of the D=6 (D=10, D=2^n) spacetime (superspace, C-(super)space). Even more, one can extend this quaternionic twistor formalism in a SUSY fashion by introducing quaternionic fermionic degrees of freedom.
• Introducing hypertwistors, and likely hypersuperspace or C-hypersuperspace.
2. 4D spacetime as twistor composite
This section gives a very pedagogical introduction to 4D twistor theory and the fundamental incidence equation, the Penrose relation. It is well known that ANY spacetime point can be described by the 4D (ND) vector X^\mu=(X^0,X^1,X^2,X^3), or X^\mu=(X^0,\ldots,X^{D-1}). This can be linked to a 2\times 2 hermitean matrix, using the Pauli matrices, as follows:
(1) \begin{equation*}X\rightarrow X=\begin{pmatrix} X^0+X^3 & X^1-iX^2\\ X^1+iX^2 & X^0-X^3\end{pmatrix}=X^\mu \sigma_\mu\end{equation*}
This map is one-to-one. We can also consider the complex 4d vector
(2) \begin{equation*}Z=(Z^0,Z^1,Z^2,Z^3)\end{equation*}
instead of real 4d vectors. The complex 4-vector Z describes a point of the complexified Minkowski spacetime C\mathbb{R}^{4}. A relation similar to the previous equation gives us the correspondence between the points in this complexified Minkowski spacetime and the 2d complex matrices via Z=Z^\mu \sigma_\mu. You can get the real Minkowski spacetime R\mathbb{R}^{4} by putting the reality condition onto the complex matrix Z as Z=Z^+. A key point in the twistor construction is the isomorphism between complex 2d matrices Z and Z-planes in the 4d complex vector space \mathbb{C}^4: this is the twistor space \mathbb{T}=\mathbb{C}^4. This isomorphism is given by the following correspondence, called the Penrose relation:
(3) \begin{equation*}Z:\mbox{Subspace spanned by columns of $4\times 2$ matrix} \begin{bmatrix}iZ\\ I_2\end{bmatrix}\end{equation*}
More explicitly, the 4\times 2 matrix is identified with a bitwistor, i.e. a couple of twistors (T_1,T_2)\in\mathbb{T}:
(4) \begin{equation*}\begin{bmatrix}iZ\\ I_2\end{bmatrix}=\begin{bmatrix} iZ^0+iZ^3 & Z^2+iZ^1\\ iZ^1-Z^2 & iZ^0-iZ^3\\ 1 & 0\\ 0 & 1\end{bmatrix}\end{equation*}
From a mathematical viewpoint, this gives us an affine system of coordinates for the Z-plane in the twistor space \mathbb{T}. The set of all such Z-planes is the complex Grassmann manifold G_{2,4}(\mathbb{C}). In other words, the Z-plane is given by the two linearly independent twistors (T_1,T_2), the bitwistor, in twistor space! This is also a correspondence between the complexified spacetime point Z\in \mathbb{C}^4 and a complex Z-plane in the twistor space. On the other hand, there is NOT a unique relation between the pair of twistors (T_1,T_2) and the Z-plane generated by this pair: every pair of twistors (T_1',T_2')=(T_1,T_2)M, with M a nonsingular 2\times 2 matrix, gives the same Z-plane in the twistor space \mathbb{T}.
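Both the hermitean-matrix correspondence and the recovery of Z from the bitwistor blocks can be checked symbolically. The following SymPy sketch is my own (the particular nonsingular \Pi is an arbitrary choice); it verifies that \det(X^\mu\sigma_\mu) reproduces the Minkowski interval and that iZ=\Omega\Pi^{-1}, the matrix form of the Penrose relation derived just below:

```python
import sympy as sp

# Coordinates of a real spacetime point
X0, X1, X2, X3 = sp.symbols('X0 X1 X2 X3', real=True)

# The 2x2 hermitean matrix X = X^mu sigma_mu of equation (1)
X = sp.Matrix([[X0 + X3, X1 - sp.I * X2],
               [X1 + sp.I * X2, X0 - X3]])

# Its determinant reproduces the Minkowski interval:
print(sp.expand(X.det()))   # X0**2 - X1**2 - X2**2 - X3**2

# A generic complex point Z and an arbitrary nonsingular Pi give bitwistor
# blocks Omega = i*Z*Pi, Pi spanning the same Z-plane as the reference pair
# (iZ, I_2); Z is recovered via iZ = Omega * Pi**(-1).
Z = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f'Z{i}{j}'))
Pi = sp.Matrix([[1, 2], [0, 1]])          # any invertible 2x2 matrix works
Omega = sp.I * Z * Pi
print(sp.simplify(Omega * Pi.inv() - sp.I * Z))   # the zero 2x2 matrix
```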
Let the pair (T_1,T_2) be written in block form with 2\times 2 blocks \Omega, \Pi; then any pair equivalent to the normalized form above satisfies

(5) \begin{equation*}\begin{bmatrix}iZ\\ I_2\end{bmatrix}=(T_1,T_2)M=\begin{bmatrix}\Omega M\\ \Pi M\end{bmatrix}\leftrightarrow iZ=\Omega M, \;\; I_2=\Pi M\end{equation*}
and where the 2\times 2 complex matrices \Omega, \Pi are built from the coordinates of the twistors (T_1,T_2), i.e. (T_1,T_2)=\begin{bmatrix}\Omega\\ \Pi\end{bmatrix}. Thus, taking M=\Pi^{-1}, we have
\[\tcboxmath{iZ=\Omega\Pi^{-1}\leftrightarrow \Omega=iZ\Pi}\]
This is the Penrose relation in matrix form! If we write
(6) \begin{equation*}(T_1,T_2)=\begin{pmatrix}\omega^{\dot{1}1} &\omega^{\dot{1}2}\\ \omega^{\dot{2}1} & \omega^{\dot{2}2}\\ \pi_{11} &\pi_{12}\\ \pi_{21} &\pi_{22}\end{pmatrix}\end{equation*}
now we get
(7) \begin{align*}\omega^{\dot{\alpha}1}=iZ^{\dot{\alpha}\beta}\pi_{\beta 1}\\ \omega^{\dot{\alpha}2}=iZ^{\dot{\alpha}\beta}\pi_{\beta 2}\end{align*}
and where \dot{\alpha}, \beta=1,2. In shorthand notation, we get
\[\tcboxmath{\omega^{\dot{\alpha}}=iZ^{\dot{\alpha}\beta}\pi_{\beta},\;\; T=\begin{pmatrix}\omega^{\dot{\alpha}}\\ \pi_\beta\end{pmatrix}}\]
This is the celebrated incidence equation, first postulated by R. Penrose, and also named the Penrose relation in his honor. It has a simple physical (geometrical) meaning: the point Z\in \mathbb{C}^4 corresponds to the twistor
\[T\leftrightarrow \omega^{\dot{\alpha}}=iZ^{\dot{\alpha}\beta}\pi_\beta\]
It is evident that all the twistors lying on the Z-plane given by the same Penrose relation correspond to a given point Z\in\mathbb{C}^4, and, for a given twistor T satisfying the incidence equation, only one complex spacetime point Z is assigned! If one needs to describe a real spacetime point X\in \mathbb{R}^4, one should require the matrix Z to be hermitian, i.e., to satisfy
(8) \begin{equation*} Z=Z^+\leftrightarrow Z=-i\Omega\Pi^{-1}=i(\Pi^{-1})^+\Omega^+\end{equation*}
and then
(9) \begin{equation*}\Pi^+\Omega+\Omega^+\Pi=0\end{equation*}
Using the notation we introduced above, equivalently
(10) \begin{align*}\overline{\pi}_{\dot{\alpha}1}\omega^{\dot{\alpha}1}+\overline{\omega}^{\alpha 1}\pi_{\alpha 1}=0\\ \overline{\pi}_{\dot{\alpha}2}\omega^{\dot{\alpha}2}+\overline{\omega}^{\alpha 2}\pi_{\alpha 2}=0\\ \overline{\pi}_{\dot{\alpha}1}\omega^{\dot{\alpha} 2}+\overline{\omega}^{\alpha 2}\pi_{\alpha 1}=0\end{align*}
and where the overbar denotes complex conjugation. In the twistor framework, these equations say that the twistors (T_1,T_2), the bitwistor, are null twistors with respect to the U(2,2) norm
\[\langle T_1,T_2\rangle=\langle T_1,T_1\rangle=\langle T_2,T_2\rangle=0\]
(11) \begin{equation*}\langle T, T\rangle=T^+GT=\begin{pmatrix} \overline{\omega}^\alpha & \overline{\pi}_{\dot{\beta}}\end{pmatrix}\begin{pmatrix}0 & I_2\\ I_2 & 0\end{pmatrix}\begin{pmatrix} \omega^{\dot{\alpha}}\\ \pi_\beta\end{pmatrix}\end{equation*}
Therefore, the reality condition is equivalent to the null condition for twistors, i.e., to the vanishing of the U(2,2) norms of the bitwistor and its couple of twistors. The Z-planes generated by null twistors are called totally null planes (or a congruence). In this way, we obtain a set of correspondences:
• Complex planes in twistor space are related one-to-one to points in complexified Minkovski spacetime.
• Bitwistors (pairs of twistors) are related to complex planes in twistor space, not one-to-one.
• Totally null planes in twistor space are related one-to-one to points in real Minkovski spacetime.
• Points of complexified spacetime are related, not one-to-one, to points of real spacetime.
Remark: from the viewpoint of twistor theory, it is more natural to use twistors (couples of twistors, indeed, via a bitwistor) for the description of the complexified Minkowski spacetime, or null twistors for the description of the real spacetime!
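As a concrete check of the reality condition, the following SymPy sketch (variable names are mine) builds a twistor from a hermitean Z via the incidence equation and verifies that its U(2,2) norm vanishes, i.e. that incident twistors of a real spacetime point are null:

```python
import sympy as sp

# A hermitean Z (a real Minkowski point) and a generic spinor pi
X0, X1, X2, X3 = sp.symbols('X0 X1 X2 X3', real=True)
Z = sp.Matrix([[X0 + X3, X1 - sp.I * X2],
               [X1 + sp.I * X2, X0 - X3]])          # Z = Z^+
pi = sp.Matrix(sp.symbols('p1 p2'))                  # 2-component spinor

# Incidence equation: omega = i Z pi, twistor T = (omega, pi)
omega = sp.I * Z * pi
T = sp.Matrix.vstack(omega, pi)

# U(2,2) norm <T,T> = T^+ G T with G = [[0, I_2], [I_2, 0]]
G = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1],
               [1, 0, 0, 0], [0, 1, 0, 0]])
norm = (T.H * G * T)[0, 0]
print(sp.simplify(norm))   # 0: the twistor of a real point is null
```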
3. SUSY and Penrose relation
The plan of SUSY is to give a unified mathematical description of bosonic and fermionic fields. Therefore, one can consider bosons and fermions using the same theoretical background. SUSY, or supersymmetry, allows us to transform the descriptions of bosonic fields into fermionic fields and vice versa. In order to have a possible description of bosonic and fermionic fields using twistor theory, we have to extend it using SUSY. What is SUSY? Surprise…
SUSY replaces the notion of a space-time point X=X^\mu=(X^0,X^1,X^2,X^3) by an appropriate superpoint
(12) \begin{equation*}Y=(X,\Theta)=(X^0,\ldots,X^3;\theta_1,\ldots,\theta_N)\end{equation*}
Here, the superspace point extends spacetime with a new class of numbers, \theta_i, with \theta_i^2=0 and \theta_i\theta_j=-\theta_j\theta_i for all i,j=1,\ldots,N. These numbers are called Grassmann numbers. They allow us to handle fermions, since they anticommute among themselves. We can define a supervector representing the D=4 N-extended superspace as follows:
(13) \begin{equation*} Y=(X,\Theta)\end{equation*}
(14) \begin{equation*} Y=(X^0,\ldots,X^3;\theta_1,\ldots,\theta_N)=(X^\mu;\Theta_A)\end{equation*}
in such a way
(15) \begin{equation*}\left[X^\mu,X^\nu\right]=X^\mu X^\nu-X^\nu X^\mu=0\end{equation*}
(16) \begin{equation*}\{\theta_A,\theta_B\}=\theta_A\theta_B+\theta_B\theta_A=0\end{equation*}
(17) \begin{equation*}\left[X^\mu,\theta_A\right]=X^\mu\theta_A-\theta_A X^\mu=0\end{equation*}
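These (anti)commutation rules are easy to model on a computer. Below is a toy Python implementation of a finite Grassmann algebra — my own sketch, not part of the original notes — verifying \theta_i^2=0 and the anticommutation of distinct generators.

```python
from itertools import combinations

class Grassmann:
    """Element of the Grassmann algebra over generators theta_1..theta_n.

    Stored as {frozenset of generator indices: coefficient}; a product picks
    up the sign of sorting the combined index list, and any repeated index
    kills the term (theta_i^2 = 0).
    """
    def __init__(self, terms):
        self.terms = {k: v for k, v in terms.items() if v != 0}

    def __mul__(self, other):
        out = {}
        for ka, va in self.terms.items():
            for kb, vb in other.terms.items():
                if ka & kb:                      # repeated generator: term dies
                    continue
                seq = sorted(ka) + sorted(kb)
                # number of inversions = parity of the sorting permutation
                swaps = sum(1 for i, j in combinations(range(len(seq)), 2)
                            if seq[i] > seq[j])
                key = frozenset(seq)
                out[key] = out.get(key, 0) + (-1) ** swaps * va * vb
        return Grassmann(out)

    def __add__(self, other):
        out = dict(self.terms)
        for k, v in other.terms.items():
            out[k] = out.get(k, 0) + v
        return Grassmann(out)

t1 = Grassmann({frozenset({1}): 1})
t2 = Grassmann({frozenset({2}): 1})
print((t1 * t1).terms)             # {}  -> theta_1^2 = 0
print((t1 * t2 + t2 * t1).terms)   # {}  -> anticommutation
```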
Commuting coordinates of any supervector are called bosonic coordinates; anticommuting coordinates (a-numbers, i.e. Grassmann numbers) are called fermionic coordinates. In the same spirit, we can generalize twistor theory and the twistor approach by introducing N-extended bosonic supertwistors
(18) \begin{equation*} T^{(n)}=\left(\omega^{\dot{\alpha}},\pi_\beta;\xi_1,\cdots,\xi_n\right)\in \mathbb{C}^{4\vert N}\end{equation*}
and the fermionic N-extended supertwistors
(19) \begin{equation*}\tilde{T}^{(n)}=\left(\eta_1,\ldots,\eta_4;u_1,\ldots,u_N\right)\in \mathbb{C}^{N\vert 4}\end{equation*}
where the \eta_i quantities are fermionic coordinates and the u_A quantities are the bosonic degrees of freedom. We will discuss the N=1 case, simple supersymmetry, for simplicity. Firstly, two linearly independent supertwistors (T_1^{(1)},T_2^{(1)}) span a (2,0)-superplane in the superspace \mathbb{C}^{4\vert 1}. In analogy with the non-SUSY case, we define and get
(20) \begin{equation*}\left(T_1^{(1)},T_2^{(1)}\right)=\begin{bmatrix} \omega^{\dot{1}1} & \omega^{\dot{1}2}\\ \omega^{\dot{2}1} &\omega^{\dot{2}2}\\ \xi_1 & \xi_2 \\ \pi_{11} & \pi_{12}\\ \pi_{21} & \pi_{22}\end{bmatrix}=\begin{bmatrix} iZ\\ \theta^1 \quad \theta^2\\ I_2\end{bmatrix}\Pi\end{equation*}
Here, Z and \Pi are complex 2\times 2 matrices made up of bosonic elements, and I_2 is the 2\times 2 identity. This can also be expressed using the equations
(21) \begin{align*}\omega^{\dot{\alpha}1}=iZ^{\dot{\alpha}\beta}\pi_{\beta 1}\\ \omega^{\dot{\alpha} 2}=iZ^{\dot{\alpha}\beta}\pi_{\beta 2}\\ \xi_1=\theta^1\pi_{11}+\theta^2\pi_{21}\\ \xi_2=\theta^1\pi_{12}+\theta^2\pi_{22}\end{align*}
Then, the supersymmetric extension of the Penrose relation reads
(22) \begin{align*}\omega^{\dot{\alpha}}=iZ^{\dot{\alpha}\beta}\pi_{\beta}\\ \xi=\theta^\alpha\pi_\alpha\end{align*}
These equations mean that every supertwistor T^{(1)}=(\omega^{\dot{\alpha}},\pi_\beta,\xi) corresponds to a superspace point, or superpoint, Y=(Z,\Theta)=(z^\mu, \theta^\alpha). However, note that this is not the only option to generalize the Penrose relation!
Now apply three linearly independent supertwistors T^{(1)}_1, T^{(1)}_2, \tilde{T}^{(1)}, where the latter is a fermionic twistor, such that \mathbb{C}^{4\vert 1} is our superspace:
(23) \begin{equation*}\left(T_1^{(1)},T_2^{(1)},\tilde{T}^{(1)}\right)=\begin{bmatrix}\omega^{\dot{1}1} & \omega^{\dot{1}2} & \rho^{\dot{1}}\\ \omega^{\dot{2}1} & \omega^{\dot{2}2} & \rho^{\dot{2}}\\ \pi_{11} & \pi_{12} & \eta_1\\ \pi_{21} & \pi_{22} & \eta_2\\ \xi^1 & \xi^2 & u\end{bmatrix}=\begin{bmatrix}iZ^{\dot{1}1} & iZ^{\dot{1}2} & \theta^1\\ iZ^{\dot{2}1} & iZ^{\dot{2}2} & \theta^2\\ 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\pi_{11} & \pi_{12} & \eta_1\\ \pi_{21} & \pi_{22} & \eta_2\\ \xi^1 & \xi^2 & u\end{bmatrix}\end{equation*}
where the fermionic supertwistor includes the four fermionic components and also the bosonic u. The (2;1)-superplane is parametrized by a (Z,\Theta) matrix of 2\times 3 type, with elements satisfying the following generalized incidence relations:
(24) \begin{align*}\omega^{\dot{\alpha}}=iZ^{\dot{\alpha}\beta}\pi_\beta+\theta^{\dot{\alpha}}\xi\\ \rho^{\dot{\alpha}}=iZ^{\dot{\alpha}\beta}\eta_\beta+\theta^{\dot{\alpha}}u\end{align*}
and where the first equation is the bosonic incidence relation and the second one is the fermionic incidence relation. Thus, this is a different generalization of the Penrose relation. For N=1 supersymmetry, there are 2 possible extensions of Penrose's relation/incidence equations. In the case of N-extended SUSY, one can generalize those equations in N+1 different ways!
4. Quaternionic extension of Penrose incidence in D=6 spacetime
Taking into account the previous section, there are 2 possible approaches to 6D twistors:
• Extend the Penrose relation from D=4 to D=6, as has been done in the bibliography or following these lines.
• Replace the complex 2\times 2 matrices Z by quaternionic ones. Quaternionic 2\times 2 matrices naturally describe a 6d real Minkowski spacetime. The previous approach is equivalent to this one if one is careful in the description of 6d spacetime.
Consider the first case for the moment. 6d twistors are objects
(25) \begin{equation*} T=\left(\omega^\alpha,\pi_\alpha\right)\in \mathbb{C}^8\end{equation*}
whose structure is determined by the norm of the spinors of the 8d complex orthogonal group O(8;\mathbb{C}), given by:
(26) \begin{equation*}\langle t, t'\rangle=\omega^\alpha\pi'_{\alpha}+\pi_\alpha\omega'^\alpha=0\end{equation*}
Points in 6d complex Minkowski spacetime are represented by a 4\times 4 antisymmetric matrix Z^{\alpha\beta}=-Z^{\beta\alpha}. The Penrose relation becomes
(27) \begin{equation*}\omega^\alpha= Z^{\alpha\beta}\pi_\beta\end{equation*}
with \alpha,\beta=1,2,3,4. This equation has a nontrivial solution if the twistors T are pure, i.e., if
(28) \begin{equation*}\langle T,T\rangle=2\omega^\alpha\pi_\alpha=0\end{equation*}
that is, if they have vanishing O(8,\mathbb{C}) norm. The points of the real 6d spacetime are represented by 4\times 4 complex, antisymmetric matrices Z satisfying a reality condition in the form of
\[\overline{Z}=B^{-1}Z^+B,\;\; B=\begin{bmatrix} 0 & 1& 0 & 0\\ -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & -1 & 0\end{bmatrix}\]
where Z^+ denotes the hermitian conjugate matrix. This reality condition for Z is equivalent to the following equations
(29) \begin{align*}\overline{\omega}^\alpha\pi_\alpha+\overline{\pi}_\alpha\omega^\alpha=0\\ \overline{\omega}^\alpha=\overline{\omega}^\beta(B^{-1})_\beta^{\;\;\alpha}\\ \overline{\pi}_\alpha=\overline{\pi}_\beta(B)^\beta_{\;\;\alpha}\end{align*}
The overline means complex conjugation. Indeed, the first equation is in fact the condition of vanishing U(4,4) norm. Thus, 6d twistors describe the points of the real Minkowski spacetime RM^6 if the following two norms are zero:
(30) \begin{align*}\omega^\alpha\pi_\alpha=0\\ \overline{\omega}^\alpha\pi_\alpha+\overline{\pi}_\alpha\omega^\alpha=0\end{align*}
The first equation above is the O(8,\mathbb{C}) norm, and the second equation is the U(4,4) norm. It means that 6d twistors describing points in RM^6 are, indeed, invariant under the quaternionic orthogonal group O(4,\mathbb{H}), covering the six-dimensional conformal group O(6,2). The chain:
\[O(4,H)\equiv U_\alpha (4,H)=O(8,C)\cap U(4,4)=\overline{O(6,2)}\]
holds as a group isomorphism. Therefore, one can look for the quaternionic extension of the 4D twistor formalism which can describe the RM^6 Minkowski spacetime. Quaternions are algebraic objects
\[Q=q_0+q_1e_1+q_2e_2+q_3e_3,\;\;\; e_ie_j=-\delta_{ij}+\varepsilon_{ijk}e_k\]
with i,j,k=1,2,3. Real numbers are naturally embedded in quaternions. We can also define quaternionic conjugation
(31) \begin{equation*}\overline{Q}=q_0-q_1e_1-q_2e_2-q_3e_3\end{equation*}
and the norm
(32) \begin{equation*}N(q)^2=\vert Q\vert^2=\overline{Q}Q=q_0^2+q_1^2+q_2^2+q_3^2\end{equation*}
The quaternion algebra has a natural structure of euclidean 4d space. Complex numbers can be seen, too, as a certain subset of the quaternion algebra. Identifying the complex numbers inside the quaternions is easy, if you take the couple
(33) \begin{equation*}Q=z_1+e_2z_2=(q_0+q_3e_3)+e_2(q_2+q_1e_3)\end{equation*}
In analogy to previous arguments, we can associate to Z a plane in the 4D quaternionic twistor space \mathbb{H}^4 as follows: take Z and associate to it the subspace spanned by the columns of the 4\times 2 quaternionic matrix
(34) \begin{equation*}\begin{bmatrix} e_2Z\\ I_2\end{bmatrix}\end{equation*}
By a similar procedure, we get quaternionic Penrose relations
(35) \begin{equation*}\omega^{\dot{\alpha}}=e_2Z^{\dot{\alpha}\beta}\pi_\beta\end{equation*}
and \alpha,\beta=1,2. The quaternionic twistor will now be t=(\omega^{\dot{\alpha}},\pi_\beta). A real 6d Minkowski spacetime point is described by a 6d vector
\[X^\mu=(X^0,\ldots,X^5)\in RM^6\]
which can be mapped on a quaternionic hermitian 2\times 2 matrix
\[\mathbb{X}=\begin{pmatrix}X^0+X^5 & X^4+X^ke_k\\ X^4-X^ke_k & X^0-X^5\end{pmatrix}\]
with k=1,2,3 and e_k the imaginary quaternion units. The reality condition \mathbb{X}=\mathbb{X}^+, with the plus meaning quaternionic conjugation and transposition, is equivalent to the following condition for quaternionic twistors t:
(36) \begin{equation*}\langle t,t\rangle=\overline{\omega}^\alpha e_2\pi_\alpha+\overline{\pi}_\alpha e_2\omega^\alpha=0\end{equation*}
Thus, twistors t describe a point of RM^6 if their norm, invariant under O(4,\mathbb{H})=U_\alpha(4,\mathbb{H}), vanishes. Using the decomposition of the quaternionic coordinates of the twistor into complex numbers, one can show that the quaternionic Penrose relations are equivalent to the incidence relations, so the descriptions of RM^6 by 6d complex twistors and by D=6 quaternions are equivalent.
5. Conclusions
We can summarize some simple possible definitions of twistors related to the material above:
• A twistor is a solution of the twistor equation \nabla_{A'}^{\;\; (A}\omega^{B)}=0; the space of such solutions is called twistor space.
• A spinor of the conformal group (made of two Weyl spinors!) is a twistor.
• Point in twistor space = null line in Minkowski spacetime.
• Point in Minkowski spacetime = line in twistor space.
Twistors are generally available in arbitrary complex dimension for conformal groups, BUT there is a nice emergent relation between commuting spinors in SL(2,\mathbb{K}), where \mathbb{K} is a division algebra, special relativity in D spacetime dimensions, supersymmetry and twistors: D=3,4,6,10 for the Green-Schwarz action of the superstring, and D=4,5,7,11 for the supermembrane, matching the Lorentz vector of those dimensions. Superspace versions of these arguments are available. There is a match between the number of spacetime dimensions of the (super)p-brane embedding, the number of supersymmetries and the dimension of the p-brane.
Twistors are a powerful tool for spacetime geometry in complex manifolds or even real manifolds. We note that the 2 approaches seen here are equivalent only for real spacetime, though. 6D spacetime is special: it can be extended in two nonequivalent ways, by complexification or by quaternionization! The quaternionic formulation of twistor theory leads to serious issues in general, and that is why it is not popular. The main issue is the quantization of twistors, because of the non-commutativity of quaternions. However, the description of 6d spacetime with quaternionic procedures allows us to use the same geometry as in the case of the complex description of 4d spacetime! In fact, it is natural to extend this to octonions and 10d spacetime. The problem there is that octonions are non-associative and matrix multiplications become nasty due to that: octonionic matrices are hardly associative!
There can only be one!!!!!!
LOG#248. Basic string theory.
3 posts to finish the TSOR adventure!
This blog post will introduce you to basic string theory from my own biased viewpoint. My blog, my rules. I think you concede that!
What is the Universe made of? This really ancient question (both philosophically and scientifically addressed differently from time to time) has no ultimate answer. Today, we believe there are atoms (elements) that make up the chemistry of life we need. Atoms are not fundamental! Since the 19th century and through the 20th century we discovered lots of particles: electrons, protons, neutrons… And even worse, protons and neutrons are now believed not to be fundamental, but made up from quarks. There are 6 quarks (6 flavors or types: up, down, charm, strange, top, bottom). There are 6 leptons (electron, muon, tau, electron neutrino, muon neutrino, tau neutrino). Moreover, there are gauge fields: the photon, gluons, W bosons and Z bosons, plus the Higgs boson found in 2012, 8 years ago. BUT particles are not fundamental either! They are excitations of quantum fields. Quantum fields are fluid-like stuff permeating the whole Universe. Missing something? Of course: gravity. The Standard Model does NOT contain gravitational fields as gauge fields. Gravity is described not by a Yang-Mills theory but by General Relativity, a minimal theory treating the spacetime metric as the field potential (more precisely, the metric is the gravitational potential function). Using a torsion-less theory of gravity, you find out that the so-called affine connection (the Christoffel symbols) is the equivalent of the classical gravitational field, \Gamma\sim \partial g. However, the nature of the gravitational field is not the affine connection, but the spacetime curvature. The field equations
(1) \begin{equation*} G_{\mu\nu}+\Lambda g_{\mu\nu}=0\end{equation*}
describe the source-free theory of relativistic gravity known as general relativity. Gravitons are not included in principle in the theory; gravitational waves are derived in the weak-field limit of perturbations of flat spacetime, g_{\mu\nu}=\eta_{\mu\nu}+\varepsilon h_{\mu\nu}. Thus, gravitons are the hypothetical transmitters of gravity for quantum gravity. Indeed, if you believe in quantum mechanics, gravitons are the inevitable quanta behind gravitational waves. In summary, we get bosons and fermions. We have reduced the ancient periodic table to a new set of fundamental ingredients. Not counting colors, or helicities or antiparticles, we have 6 quarks, 6 leptons, the photon, the gluon, the W, the Z and the H. That's all, folks. 17 particles. 17 fields. Why stop there? 17 are too many. What if some of them are again NOT fundamental and made of other stuff? Are they truly point-like?
Purely point-like particles raise some known issues. The first issue is the divergence of the electric (or gravitational or alike) potential energy. Suppose the electron were a uniform sphere with radius r_e (density \rho_e). The electric energy of such a sphere is
(2) \begin{equation*} U_e=m_ec^2=\dfrac{3K_Ce^2}{5r_e}\end{equation*}
If purely point-like, you see there is an infinite electric energy! Giving up the 3/5 factor arising from spherical symmetry, the classical electron radius is
(3) \begin{equation*} r_e=\dfrac{K_Ce^2}{m_ec^2}\sim 10^{-15}m\end{equation*}
Electrons are known to be fundamental much below this scale. In conclusion: a pure point-particle is meaningless in the end, because of the divergence of the electromagnetic energy in classical electrodynamics. But, as you know since you read me here, electromagnetism is not classical at the current state of the art of the theory. Quantum electrodynamics (QED) is the theory we should use to answer the above questions. Well, even when it is a disaster as a standalone theory (it produces a so-called Landau pole at very, very high energies), we know QED is an approximation to the electroweak theory. However, the final destiny of the divergence above, even in the quantum regime, remains a mystery! Yes, for all practical purposes, QED is an effective theory that, in the end, neglects the infinities due to refined versions of the above ultraviolet (short scale, high energy) catastrophe. The fact that calculations work fine even neglecting those infinities is puzzling. It is an unsolved theoretical problem to understand why those infinities can be ignored without (excepting the vacuum energy problem) leaving us a completely nonsensical theory. The problem of quantum radiative self-corrections to the energy-mass of particles is even worse with gravity. Techniques similar to those used in the Standard Model do NOT work. Then, what is a particle or a graviton? Two main alternative paths are possible:
• 1) Save gravity and point-like particles, but change quantization. This path is the one followed in Loop Quantum Gravity (formerly non-perturbative canonical quantum gravity), or LQG. This route is also followed by the approach called Asymptotic Safety, and some minor approaches.
• 2) Save the usual quantisation, but give up a purely point-like nature of particles. That's string theory, or now p-brane theory (even when a first quantized theory of p-branes is not yet available for p>1).
Adopting the second approach: if not a point, how do we describe strings or similar stuff? After all, the simplest objects beyond structureless points are strings. We need some kinematical and dynamical basics for doing the math. We should describe strings in a natural invariant way, sticking to special relativity and quantum rules, even trying to extend that to general relativity. It is a surprising historical remark that strings describe not only strong forces, but also have spin-two excitations, aka gravitons! The miracle of string theory, beyond point particle theories, is that it allows us to describe a consistent theory of gauge interactions including the gravitational field. Plus a bonus: string theory (modulo some details concerning the critical dimension and the number of quantum fields) is free of UV divergences in perturbation theory. It is a finite theory of quantum gravity from the beginning. There are no free dimensionless parameters, though. There is a link between the string coupling g_s, the string slope \alpha' (inverse tension) and the string length L_s=\sqrt{\alpha'}, dependent on the type of string theory, the spacetime dimensions and the nature of the fundamental objects (not only string-like!) in the spectrum of the theory.
String theory's fundamental object is a single tiny string, generally speaking L_s\sim L_p in old string theory, but massive states can change that. For instance, if L_s=g_s^2L_p, you could get objects and strings greater than the Planck length in a non-perturbative fashion with g_s>>1. Or you can have things below the Planck scale, if soft enough. Particles or fields are really excitation modes of fundamental strings (in critical string theory). Different modes correspond to different fields or particles.
1. Main mathematics
Classical point particles are given a worldline in spacetime, i.e., x^\mu (\tau). Strings should be described by a worldsheet area in spacetime, i.e., X^\mu(\tau,\sigma). Here, 0\leq \sigma\leq L_s, and strings can be open or closed. Open strings have boundary conditions at the endpoints X^\mu(\tau,0) and X^\mu(\tau,L_s). Closed strings have periodicity in the space-like worldsheet coordinate, such that X^\mu(\tau,\sigma)=X^\mu(\tau,\sigma+L_s). You see that X^\mu(\sigma,\tau) depends on the target spacetime: any string in D-dimensional spacetime (D=d+1 Minkowskian spacetime is the usual selection) consists really of a set of D (generally scalar) fields. The next question is: what is the equation of motion of a FREE string? Well, without giving more advanced details, and noting the similarity between strings and waves, recall that free point particles have the equation of motion (in newtonian mechanics, but also in the special relativistic case):
(4) \begin{equation*} \dfrac{d^2x^i}{dt^2}=\ddot{x}^i=0\rightarrow \dfrac{d^2x^\mu}{d\tau^2}=\ddot{x}^\mu=0\end{equation*}
(5) \begin{equation*} \dfrac{\partial^2}{\partial \tau^2}X^\mu(\tau)=\partial_{\tau\tau}X^\mu(\tau)=\partial^2 X=0\end{equation*}
Then, it would be natural for free strings to have the following equation of motion
(6) \begin{equation*}\left[\dfrac{\partial^2}{\partial \tau^2}-\dfrac{\partial^2}{\partial\sigma^2}\right]X^\mu(\tau,\sigma)=\left(\partial_{\tau\tau}-\partial_{\sigma\sigma}\right)X^\mu(\tau,\sigma)=\overleftrightarrow{\partial}X=0\end{equation*}
You can see that D-dimensional free strings are only a set of D 2d wave equations. Strings carry energy and momentum, but also spin degrees of freedom. Generally speaking, we have that the string tension and the string length are related via L_s=2\pi\sqrt{\alpha'} (some people usually prefer L_s=\sqrt{\alpha'} as the normalized string length \overline{L_s}=L_s/2\pi). The string also owns a typical energy scale:
(7) \begin{equation*}M_s=L_s^{-1}\end{equation*}
Experimentally, circa 2020, we know that the string mass scale is in a range
\[3.5TeV\leq M_s\leq M_P\leq 10^{15}TeV\]
You could try to generalize the above for classical p-branes (p is the number of space-like dimensions) in D=p+1 spacetime, and classical (p,q)-branes (p is the number of space-like dimensions, q is the number of time-like dimensions) in D=p+q spacetime as follows:
(8) \begin{equation*}\displaystyle{\left[\sum_{i=1}^q\dfrac{\partial^2}{\partial \tau^2_i}-\sum_{j=1}^p\dfrac{\partial^2}{\partial\sigma^2_j}\right]X^\mu(\vec{\tau},\vec{\sigma})=\left(\vec{\partial}_{\tau\tau}-\vec{\partial}_{\sigma\sigma}\right)X^\mu(\vec{\tau},\vec{\sigma})=0}\end{equation*}
and where
(9) \begin{equation*}X^\mu(\vec{\tau},\vec{\sigma})=X^\mu(\tau^a,\sigma^b)=X^\mu(\tau^1,\cdots,\tau^q,\sigma^1,\cdots,\sigma^p)\end{equation*}
Thus, general p-branes/(p,q)-branes or extended objects are described by hyperbolic/ultrahyperbolic wave equations in the D-dimensional target spacetime. Only the first quantized version of strings is known at the current time.
2. Dynamics of strings
The general solution of the 2d wave equation is available:
(10) \begin{equation*}X^\mu(\tau,\sigma)=X^\mu_R(\tau-\sigma)+X^\mu_L(\tau+\sigma)\end{equation*}
This describes a traveling string wave as the sum of a stringy right-mover/oscillation plus a stringy left-mover/oscillation. Assuming periodic boundary conditions, the most general solution is a Fourier expansion
(11) \begin{equation*}X^\mu(\tau,\sigma)_{R,L}=\dfrac{x^\mu}{2}+\dfrac{\pi \alpha' p^\mu_{R,L}(\tau\pm\sigma)}{L_s}+i\sqrt{\dfrac{\alpha'}{2}}\displaystyle{\sum_{k\in\mathbb{Z},\,k\neq 0}\dfrac{\alpha^\mu_{k(L,R)}}{k}e^{-i\frac{2\pi k}{L_s}(\tau\pm\sigma)}}\end{equation*}
Here, x^\mu and p^\mu are the center of mass position and momentum of the string. The first quantization of strings is simple and well understood (unlike general p-branes!). Quantization of every wave oscillation in a single string is a mode. Every mode is a quantum harmonic oscillator. A string is secretly a field, or an infinite number of harmonic oscillators.
Every excitation mode \alpha^\mu_{k}(L), \alpha^\mu_{k}(R) represents a harmonic oscillator. Vacuum states are labeled by the center of mass momentum, \vert 0,p_i\rangle. An excitation of L/R type carries a frequency 2\pi k/L_s. And finally, the quantum string state is a ket:
(12) \begin{equation*}\vert Q_s\rangle=\displaystyle{\prod_{k>0,\mu}\left(\alpha^\mu_{-k}(L)\right)^{n_{k,\mu}(L)}\prod_{k>0,\mu}\left(\alpha^\mu_{-k}(R)\right)^{n_{k,\mu}(R)}\vert 0,p\rangle}\end{equation*}
Remark: consider an equal number of left/right moving stringy quanta. Then, there is a tower of string excitations characterized by the oscillation numbers N_L=N_R. The first levels of this quantization provide:
(13) \begin{align*}N_L=N_R=0,\mbox{vacuum state},\;\;\vert Q_s\rangle=\vert 0,p\rangle\\ N_L=N_R=1,\mbox{first excited level},\;\;\vert Q_s\rangle=\varepsilon_{\mu\nu}\alpha^\mu_{-1}(L)\alpha^\nu_{-1}(R)\vert 0,p\rangle\\ \vdots\end{align*}
For bosonic strings, this represents a spin-dependent spectrum
(14) \begin{equation*}M^2=4M_s^2\cdot (N-a)\end{equation*}
with a=1, N_L=N_R=N. The tachyon mode N=0=N_L=N_R is erased with supersymmetry (SUSY) in superstring theory (in 10d spacetime!). M_s sets the string scale, and N_L=N_R=1 represents massless tensorial states (gravitons!). In the higher energy regime, we expect resonances depending on mass and spin, roughly M^2\simeq NM_s^2, with N\simeq J. Just a further comment: closed strings, by the above argument, give sense to graviton-like excitations; where do photon-like degrees of freedom come from? From open strings! Copy the same program of classical solutions and quantisation with the right suitable boundary conditions. The result is that free endpoints can move freely along an object called a Dp-brane, a (p+1)-dimensional hypersurface of spacetime (cf. Polchinski 1996). With those boundary conditions assumed, the massless gauge U(1)-like fields will be
(15) \begin{equation*}\vert Q_s\rangle=\varepsilon_{\mu}\alpha^\mu_{-1}(L,R)\vert 0,p\rangle\end{equation*}
String excitations along 1 Dp-brane give U(1) gauge fields A^i, i = 0,\cdots,p; with N coincident Dp-branes, this is promoted to a U(N) gauge symmetry with N\times N gauge bosons! Moreover, Dp-branes at intersections provide matter fields (chiral fermions) in bifundamental representations (\overline{N_a}, N_b). Thus, we have a stringy/Dp-brane machine to generate \prod_i U(N_i) gauge field theories and derive the Standard Model. The problem is: there are too many ways to do it! Even if, with string theory, gauge theory implies gravity, or gravity and gauge theory are included in the same set-up, we do not know how to generate uniquely the Standard Model and the vacuum we call our Universe… However:
• Strings interact by joining and splitting. Open string endpoints can join to form a stable closed string. (The converse is not always true).
• Behaviour consistent with the universality of gravity: photons provide gravity, and somehow, gravity is the square of a gauge theory. That is the motto gravity=YM^2, popular these days.
• In string theory, gauge interactions and gravity are not independent. They are linked by the internal consistency of the theory. String theory is the only known theory with this property. Even more, consistency implies critical dimensions: 26d the original bosonic string theory, 10d the superstring, and 11d M-theory (12d F-theory, 13d S-theory,\ldots).
• UV finiteness and the end of divergences. The general picture is that string theory has an intrinsic UV regulator (the string length). High energy scattering probes that length, and non-local behaviour is obtained. Point-like interaction vertices are smoothed out/erased. Quantitatively, loop diagrams in perturbative string theory can be checked to be FINITE. No more UV divergences.
• Are strings special? Can a particle have an even higher-dimensional substructure? Model a particle as a membrane (Dirac pioneered this with the electron-membrane model) with 2 spatial dimensions. Tubes of length L and radius R have spatial volume V\sim LR. Quantum fluctuations of p-branes: long, thin tubes can form without energy cost, and that is an issue. Membranes automatically describe multi-particle states. No first quantisation of higher branes à la strings is possible. Quantum membranes have a continuous spectrum that no one knows how to discretize like strings.
However, string theory puzzles further. It implies:
• Internal consistency conditions make further predictions: spacetime is not 4-dimensional, but 10-dimensional (26d in bosonic string theory with no supersymmetry).
• In 10 dimensions there is only one unique type of string theory. It has many equivalent formulations which are dual to each other.
• Witten (1995) showed that there is a single theory, dubbed M-theory, with six duality-related limits: 11d SUGRA, Heterotic SO(32), Heterotic E_8\times E_8, Type IIA, Type IIB, and Type I. These 6 theories are related by a web of T-dualities and S-dualities, and a much more general U-duality group.
• The 10-dim. theory/11d maximal SUGRA/11d M-theory is supersymmetric, and every boson has a fermionic superpartner. This does NOT imply that supersymmetry must be found at the LHC. The SUSY energy scale can sit at any point between tested energies and the Planck energy.
• Superstring theory is well-defined and unique (up to dualities) in 10d/11d, and in lower/higher dimensions (higher dimensions are usually neglected due to higher spins or extra time-like dimensions).
• The low energy regime E<<M_s of superstrings predicts indeed Einstein general relativity plus stringy corrections, with several gauge fields.
• Within the full 10d bulk a graviton propagates, and along lower dimensional D-branes a gauge boson propagates.
• Within the high energy regime E\geq M_s, a characteristic tower of massive string excitations provides (in principle measurable) resonances (Kaluza-Klein states)! The energy dependence of interactions differs from field theory.
• The scattering amplitudes are ultraviolet finite without the need for renormalisation. It is believed that string theory interactions represent the fundamental (as opposed to effective) theory, but heavy Dp-brane states also arose in the second string revolution.
3. Issues with contemporary string theory
Our world/Universe is apparently 4d (3+1, Minkowskian metric). We need to compactify the extra unobserved string theory dimensions, with or without brane world metrics, to derive our Universe (SM plus gravity in the form of General Relativity). The problem is, as I told you before, that there is no unique way to do it. Thus, the model building to get the Universe as a single solution is doomed in current string theory! The set of every possible string theory compactification providing a Universe like ours has a name: the string landscape.
Superstring theory is well-defined only if spacetime is 10d/11d as M-theory. It is thus an example of a theory of extra dimensions. You can build up string theories having point particles (0-branes), strings (1-branes), 2-branes (M-theory, 11d SUGRA), and so on. Extra dimensions are compact and very small. For instance, pick a 5d world with coordinates X^M=(x^\mu, x^4)=(x^0,x^1,x^2,x^3;x^4). The extra invisible dimension is folded/wrapped into a tiny circle S^1 with radius R_4. If this radius becomes very tiny, the world will appear to be 4d, but, with enough energy, you could reach that dimension. To arrive at 4 large dimensions we need to compactify 6 dimensions (or 7 in M-theory). In the simplest solution (of course, not the only solution), every dimension is a circle, i.e., the internal space is a six-dimensional torus:
(16) \begin{equation*}T^6=S^1\times \cdots\times S^1\end{equation*}
More general 6-dimensional/7-dimensional spaces are allowed (Calabi-Yau manifolds were popular in the past). Every consistent compactification yields a solution to the string equations of motion with specific physics in 4D. This gives you a landscape of theories. Configurations of multiple branes are related to gauge groups. The intersection pattern is related to charged matter, and the specifics of the geometry are related to interactions (computable!). The field of model building, or string phenomenology, is to explore the interplay of string geometry and physics in 4 dimensions.
The landscape of string vacua is the biggest puzzle: every consistent compactification is a solution to the string equations of motion. Every 4d solution is called a 4d string vacuum. In 10d, all interactions are uniquely determined. In 4d, a plethora of consistent solutions exists: the landscape of string vacua. The existence of many solutions is typical in physics: Einstein gravity is one theory with many solutions! Pressing question: what are the consequences for physics in 4D? A solution to fine-tuning problems (Higgs, cosmological constant)? That is harder in the landscape!
Even worse are the more recent ideas of Swampland versus Landscape: which EFTs (effective field theories) can be coupled to a fundamental theory of QG? There is also a Swampland of inconsistent EFTs related to the Landscape of consistent quantum gravitational theories. Swampland conjectures are of general scope, but not sharply proven. The Weak Gravity Conjecture is also analyzed from the viewpoint of the string theory landscape these days.
Does string theory, as a framework for QG, allow us to test explicit conjectures? More ideas (for quantitative checks of swampland conjectures and sharper formulations):
• Study manifestations of swampland conjectures in string geometry.
• String geometry: the geometry of the compactification space involves the physics in 4d (or higher). The holographic principle is tested as well.
• Strings as extended objects probe geometry differently than points. It opens the door for a fascinating interplay between mathematics and physics: new physical ways to think about geometry by translating into physics. For instance: the classification of singularities in geometry; singularities occur when submanifolds shrink to zero size, and branes can wrap these vanishing cycles and give rise to massless particles in the effective theory.
• String theories and the Landscape/Swampland give interpretation for classification of singularities in mathematics and guidelines for new situations unknown to mathematicians.
• String theory is a maximally economic quantum theory of gravity, gauge interactions and matter.
• The assumption of the stringlike nature of particles leads to a calculable theory without UV divergences.
• Challenge for string phenomenology: understanding the vacuum of this theory.
• String theory as modern mathematical physics: a deep interplay with sophisticated mathematics (e.g., mirror symmetry, D-brane categories, …).
• String Theory as a tool: Holographic principle like the AdS/CFT, or Kerr dS/dS correspondences. String Theory is a framework for modern physics.
4. Moduli and fluxes issues
String theories have extra field-theoretic degrees of freedom. Consider first the following four-dimensional action:
(17) \begin{equation*}S=\int d^4x\sqrt{-g}\left(\dfrac{1}{2\kappa^2}R-\Lambda_{bare}-\dfrac{Z}{48}F_4^2\right)\end{equation*}
where F_4 is a four-form with solutions to the EOM (equations of motion)
(18) \begin{equation*}F^{\mu\nu\rho\sigma}=c\epsilon^{\mu\nu\rho\sigma}\end{equation*}
It is easily proved that it gives a contribution to the cosmological constant/vacuum energy
(19) \begin{equation*}\Lambda=\Lambda_{bare}+\dfrac{Zc^2}{2}\end{equation*}
This gives rise to the moduli (flux compactification) problem in string theory. In string theory, c is quantized, but you are provided with many such four-form (and even other-grade form!) contributions.
If you have MANY such y_i and N_{flux} is arbitrary, \Lambda can be tuned to a very small value under VERY special conditions, but it is not all clear! You can count how many values these vacua can take, N_{V}=N_{values}^{N_{flux}}, and see how many string theory solutions give you the SM plus Gravity Universe we live in! Terrible result: usual string theory gives you 10^{500} possible Universes in 10d/11d, or even worse, using F-theory technology, you guess an upper bound of about 10^{272000} (Vafa)…
4.1. The string spectrum and M-atrix models
As strings need extra dimensions, they also have different quantum numbers in addition to the common particle quantum numbers. The dimensional compactification provides the level n of the Kaluza-Klein resonance. BUT, the winding number w around the extra dimension/s is also a purely stringy quantum number. For the KK-modes:
(20) \begin{equation*} m_{KK}c^2=E_{KK}=\dfrac{\hbar c}{R} \end{equation*}
and for the winding w-modes
(21) \begin{equation*} m_{W}c^2=\dfrac{\hbar c w R}{L_s^2} \end{equation*}
so finally, including the R/L excitation modes and the continuous part of the quantized string, we get, in c=\hbar=1 units:
(22) \begin{equation*} E^2=m_{s,0}^2+p^2+\dfrac{n^2}{R^2}+\dfrac{w^2R^2}{L_s^4}+\dfrac{2}{L_s^2}\left(N_L+N_R-2\right) \end{equation*}
Note the symmetry under n\leftrightarrow w and R\leftrightarrow L_s^2/R, known as T-duality. In non-perturbative settings, we also get a symmetry between strong and weak coupling, g_s\leftrightarrow 1/g_s (S-duality). It was anticipated, before the Dp-brane revolution, with monopoles, in the famous Montonen-Olive conjecture.
Even when a general formulation of what M-theory is remains unavailable, one proposal was made, called M(atrix) theory. This M(atrix) theory reveals that M-theory is an emergent model from the dynamics of a matrix model of D0-branes. Pick up a very large set of N\times N matrices X^a. These matrices (one for each space dimension, a=1,2,\ldots,D-1) represent the positions of N point-like D0-branes. The energy is a hamiltonian object, formally
(23) \begin{equation*}\displaystyle{H=\sum_{a=1}^{D-1}\sum_{i,j=1}^N\left(P^a_{ij}\right)^2+\sum_{a,b=1}^{D-1}\sum_{i,j=1}^{N}\left(\left[X^a,X^b\right]_{ij}\right)^2+\cdots}\end{equation*}
At low energies, these matrices all commute, and their eigenvalues behave like normal spatial coordinates. Thus, ordinary spacetime is emergent from the M(atrix). But, in the regime where quantum fluctuations become large or strong, the full M(atrix) structure, non-commutative (and sometimes non-associative in strings!) and highly non-linear, must be considered. M-theory is a highly non-local theory seen in this way. Whether it can be simulated with quantum computing is something to be tested in the future! Maybe M-theory tools will even be more powerful than the usual quantum computational tools.
5. Epilogue: a Multiverse of Madness and Nightmare?
Even when string theory or SUSY are very powerful, the duality revolution has touched two cornerstone questions they have left unanswered:
• The selection of our vacuum or Universe. There are too many possible solutions, and that leaves us with the option of the Multiverse or that our vacuum could be not stable but metastable.
• What are the fundamental theory/degrees of freedom of superstring/M-theory? Duality maps change and challenge what the fundamental entity of a dual theory is. Holographic maps included, you can take a field theory with no gravity and change it into a higher-dimensional gravitational theory, and vice versa. You can calculate with magnetic branes instead of electric branes. What is string theory? After all, we have no first quantized theory of the membranes like those in M-theory yet.
Parallel to all this, Nima Arkani-Hamed discovered the amplituhedron: a new tool to simplify Feynman diagram computations based on higher-dimensional entities of polytopal class. They are intrinsically non-local. He has envisioned a future in which locality, SR, GR, and QFT are derived from a new set of structures. Long ago, when the 26d four-point function for the scattering of four tachyons, the Shapiro-Virasoro amplitude, was derived to be:
\[A_4 \propto (2\pi)^{26} \delta^{26}(k) \dfrac{\Gamma(-1-s/2) \Gamma(-1-t/2) \Gamma(-1-u/2)}{\Gamma(2+s/2) \Gamma(2+t/2) \Gamma(2+u/2)}\]
people wondered what was behind that. It turned out to be the string… What is the p-brane generalization of this amplitude, if it exists?
In the end, the problem with string theory is that we are not sure of what the symmetry of the whole theory is. We lack an invariant notion/relativity+equivalence principle for strings/p-branes! There are only a few ideas circulating about what this new relativity principle/new equivalence principle could be! But that is the subject of my final TSOR blog post, after a twistor interlude!
See you in my next blog post!
LOG#247. Seesawlogy.
One of the big issues of the Standard Model (SM) is the origin of mass (OM). Usually, the electroweak
sector implements mass in the gauge and matter sectors through the well-known Higgs mechanism. However, the Higgs mechanism is not free of its own problems. It is quite hard to assume that the same mechanism can provide the precise mass and couplings to every quark and lepton. Neutrinos, originally massless in the old-fashioned SM, have been proved to be massive. The phenomenon of neutrino mixing, a hint of beyond the SM physics (BSM), has been confirmed and established through the design and performance of different nice neutrino oscillation experiments in the last 20 years (firstly from solar neutrinos). The nature of the tiny neutrino masses, in comparison with the remaining SM particles, is obscure. Never has a small piece of matter been so puzzling, important and surprising, even mysterious. The little hierarchy problem in the SM is simply why neutrinos are lighter than the rest of the subatomic particles. The SM can not answer that in a self-consistent way. If one applies to neutrinos the same Higgs mechanism that is applied to quarks and massive gauge bosons, one obtains that their Yukawa couplings would be surprisingly small, many orders of magnitude smaller than the others. Thus, the SM with massive neutrinos is unnatural (in the sense of 't Hooft's naturalness, i.e., at any energy scale \mu, a set of parameters \alpha _i(\mu) describing a system can be small (natural), iff, in the limit \alpha _i(\mu)\rightarrow 0 for each of these parameters, the system exhibits an enhanced symmetry).
The common, somewhat minimal, solution is to postulate that the origin of neutrino mass is different and some new mechanism has to be added to complete the global view. This new mechanism is usually argued to come from new physics (NP). This paper is devoted to the review of the most popular (and somewhat natural) neutrino mass generation mechanism, the seesaw, and the physics behind it, seesawlogy [1] (SEE). It is organized as follows: first, we review the main concepts and formulae of basic seesaws; next, we study other kinds of not-so-simple seesaws, usually with a more complex structure; then, we discuss some generalized seesaws called multiple seesaws; afterwards, we study how some kinds of seesaw arise in theories with extra dimensions; and finally, we summarize and comment on some important key points relative to the seesaws and their associated phenomenology in the conclusion.
Basic seesawlogy
The elementary idea behind the seesaw technology (seesawlogy) is to generate Weinberg's dimension-5 operator \mathcal{O}_5=gL \Phi L\Phi, where L represents a lepton doublet, using some tree-level heavy-state exchange particle that varies in the particular kind of seesaw gadget implementation. Generally, then:
• Seesaw generates some Weinberg’s dimension-5 operator \mathcal{O}_5, like the one above.
• The strength g is usually small. This is due to lepton number violation at a certain high energy scale.
• The high energy scale, say \Lambda _s, can be lowered, though, assuming Dirac Yukawa couplings are small.
• The most general seesaw gadget is through a set of n left-handed (LH) neutrinos \nu _L plus any number m of right-handed (RH) neutrinos \nu _R written as Majorana particles in such a way that \nu _R=\nu _L^c.
• Using a basis (\nu _L,\nu_L^c) we obtain what we call the general (n+m)\times(n+m) SEE matrix (SEX):
(1) \begin{equation*} M_\nu =\begin{pmatrix} M_L & M_D \\ M_D^T & M_R \end{pmatrix} \end{equation*}
Here, M_L is an SU(2) triplet, M_D an SU(2) doublet and M_R an SU(2) singlet. Every basic seesaw has a realization in terms of some kind of seesawlogy matrix.
We have now several important particular cases to study, depending on the values of the block matrices we select.
Type I Seesaw
This realization corresponds to the following matrix pieces:
• M_L=0.
• M_D is a (n\times m) Dirac mass matrix.
• M_N is a (m\times m) Majorana mass matrix.
• The type I SEE lagrangian is given by (up to numerical prefactors)
(2) \begin{equation*} \mathcal{L}_S^{I}=\mathcal{Y}_{ij}^{Dirac}\bar{l}_{L_{i}}\tilde{\phi} \nu _{R_{j}}+M_{N_{ij}}\bar{\nu} _{R_{i}}\nu _{R_{j}}^{c} \end{equation*}
with \phi=(\phi ^+,\phi ^0)^T being the SM scalar doublet, and \tilde{\phi}=i\sigma _2 \phi^* its conjugate. Moreover, \left\langle \phi ^0 \right\rangle =v_2 is the vacuum expectation value (vev) and we write M_D=\mathcal{Y}_Dv_2.
Now, the SEX M_\nu is, generally, symmetric and complex. It can be diagonalized by an (n+m)\times (n+m) unitary transformation matrix U, so that U^TM_\nu U=\mbox{diag}(m_i,M_j), providing us n light mass eigenstates (eigenvalues m_i, i=1,...,n) and m heavy eigenstates (eigenvalues M_j, j=1,...,m). The effective light n \times n neutrino mass submatrix will be, after diagonalization:
(3) \begin{equation*} m_\nu = -M_DM_N^{-1}M_D^T \end{equation*}
This is the basic matrix structure relationship for the type I seesaw. Commonly, if M_D\sim 100\,\mbox{GeV} and M_N=M_R\sim10^{16}\,\mbox{GeV}\sim M_{GUT}, i.e., plugging these values in the previous formula, one obtains a typically small LH neutrino mass about m_\nu\sim \mbox{meV}. The main lesson is that in order to get a small neutrino mass, we need either a very small Yukawa coupling or a very large isosinglet RH neutrino mass.
The general phenomenology of this seesaw can substantially vary. In order to get, for instance, a TeV-scale RH neutrino, one is forced to tune the Yukawa coupling to an astonishingly tiny value, typically \mathcal{Y}_D\sim 10^{-5}-10^{-6}.
The result is that the neutrino cross-sections would be unobservable (at least at the LHC or similar colliders). However, some more elaborate type I models prevent this from happening by including new particles, mainly through extra intermediate gauge bosons W', Z'. These type I modified models are usually common in left-right (LR) symmetric models or some Grand Unified Theories (GUT) with SO(10) or E_6 gauge symmetries, motivated by the fact that we can not identify the seesaw fundamental scale with the Planck scale. Supposing the SM holds up to the Planck scale with this kind of seesaw would mean a microelectronvolt neutrino mass, but we do know from neutrino oscillation experiments that the mass squared differences are well above the microelectronvolt scale. Therefore, with additional gauge bosons, RH neutrinos would be created by reactions q\bar{q}'\rightarrow W'^{\pm}\rightarrow l^{\pm}N or q\bar{q}\rightarrow Z'^{0}\rightarrow NN (or\;\nu N). Thus, searching for heavy neutrino decay modes is the usual technique that has to be accomplished at the collider. Note that the phenomenology of the model depends on the concrete form in which the gauge symmetry is implemented. In summary, we can say that in order to observe the type I seesaw at a collider we need the RH neutrino mass scale to be around the TeV scale or below, and a strong enough Yukawa coupling. Some heavy neutrino signals would hint in a clean way, e.g., in double W' production and lepton number violating processes like pp\rightarrow W'^{\pm}W'^{\pm}\rightarrow l^{\pm}l^{\pm}jj or the resonant channel pp\rightarrow W'^{\pm}\rightarrow l^{\pm}N^*\rightarrow l^{\pm}l^{\pm}jj.
Type II Seesaw
The model building of this alternative seesaw is different. One invokes the following elements:
• A complex SU(2) triplet of (heavy) Higgs scalar bosons, usually represented as \Delta =(H^{++},H^+,H^0).
• Effective lagrangian SEE type II
(4) \begin{equation*} \mathcal{L}_S^{II}=\mathcal{Y}_{L_{ij}}l_i^ T\Delta C^{-1}l_j \end{equation*}
where C stands for the charge conjugation operator and the SU(2) structure has been omitted. Indeed, the mass terms for this seesaw can be read from the full lagrangian terms with the flavor SU(2) structure present:
(5) \begin{equation*} \mathcal{L}_S^{II}=-Y_\nu l^T_LCi\sigma _2\Delta l_L+\mu _DH^Ti\sigma _2\Delta ^+ H+ h.c. \end{equation*}
Moreover, we have also the minimal type II seesawlogy matrix made of a scalar triplet:
(6) \begin{equation*} \Delta =\begin{pmatrix} \Delta ^+ /\sqrt{2} & \Delta ^{++} \\ \Delta ^0 & -\Delta ^+/\sqrt{2} \end{pmatrix} \end{equation*}
• M_L=\mathcal{Y}_Lv_3, with v_3=\left\langle H^0 \right\rangle the vev of the neutral component of the triplet. Remarkably, one should remember that a non-zero vev of an SU(2) scalar triplet has an effect on the \rho parameter in the SM, so we get a bound v_3 \lesssim 1\,\mbox{GeV}.
• In this class of seesaw, the role of the seesawlogy matrix is played by the Yukawa matrix \mathcal{Y}_\nu, a 3\times3 complex and symmetric matrix; we also get the total leptonic number broken by two units (\Delta L=2), like in the previous seesaw, and we have an interesting coupling constant \mu _D in the effective scalar potential. Minimization produces the vev value for \Delta, v_3=\mu_Dv_2^2/(\sqrt{2}M^2_\Delta), and v_2 is given as before.
Then, diagonalization of Yukawa coupling produces:
(7) \begin{equation*} M_\nu = \sqrt{2}\mathcal{Y}_\nu v_3=\dfrac{\mathcal{Y}_\nu \mu _D v_2^2}{M_\Delta ^2} \end{equation*}
This seesawlogy matrix scenario is induced, then, by electroweak symmetry breaking, and its small scale is associated with a large mass M_\Delta. Again, a judicious choice of the Yukawa matrix elements can accommodate the present neutrino mass phenomenology. From the experimental viewpoint, the most promising signature of this kind of seesawlogy matrix is, therefore, the doubly charged Higgs. This is interesting, since this kind of models naturally gives rise to M_\Delta=M_{H^{++}}, and, with a suitable mass, reactions like H^{\pm\pm}\rightarrow l^{\pm}l^{\pm}, H^{\pm\pm}\rightarrow W^{\pm}W^{\pm}, H^{\pm}\rightarrow W^{\pm}Z or H^{+}\rightarrow l^{+}\bar{\nu} could be observed.
Type III Seesaw
This last basic seesaw tool is similar to type I. The type III model building seesaw is given by the following recipe:
• We replace the RH neutrinos in the type I seesaw by the neutral component of an SU(2)_L fermionic triplet called \Sigma, with zero hypercharge (Y_\Sigma=0), given by the matrix
(8) \begin{equation*} \Sigma = \begin{pmatrix} \Sigma ^0 /\sqrt{2} & \Sigma ^{+} \\ \Sigma ^- & -\Sigma ^0/\sqrt{2} \end{pmatrix} \end{equation*}
• Picking out m different fermion triplets, the minimal elements of seesaw type III are coded into an effective lagrangian:
(9) \begin{equation*} \mathcal{L}_S^{III}=\mathcal{Y}_{ij}^{Dirac}\phi ^T\bar{\Sigma}_i^cL_j-\dfrac{1}{2}M_{\Sigma_{ij}}\mbox{Tr}(\bar{\Sigma}_{i}\Sigma _j^c)+h.c. \end{equation*}
• Effective seesawlogy matrix, size (n+m)\times (n+m), for type III seesaw is given by:
(10) \begin{equation*} M_\nu =\begin{pmatrix} 0 & M_D\\ M_D ^T & M_\Sigma \end{pmatrix} \end{equation*}
Diagonalization of seesawlogy matrix gives
(11) \begin{equation*} m_\nu =-M_D^TM_\Sigma ^{-1}M_D \end{equation*}
As before, we also get M_D=\mathcal{Y}_Dv_2 and similar estimates for the small neutrino masses, changing the RH neutrino by the fermion triplet. Neutrino masses are explained, thus, by either a large isotriplet fermion mass M_\Sigma or a tiny Yukawa \mathcal{Y}_D. The phenomenology of this seesawlogy matrix scheme is based on the observation of the fermion triplet, generically referred to as E^{\pm}\equiv\Sigma^\pm, N\equiv\Sigma^0, and their couplings to the SM fields. Some GUT arguments can make this observation plausible at the TeV scale (especially some coming from SU(5) or larger groups whose symmetry is broken into it). Interesting searches can use the reactions q\bar{q}\rightarrow Z^*/\gamma ^* \rightarrow E^+E^-, q\bar{q}'\rightarrow W^* \rightarrow E^\pm N. The kinematics and branching ratios are very different from type II.
[Figure] The 3 basic seesaw mechanisms. a) Type I (left): heavy Majorana neutrino exchange. b) Type II (center): heavy SU(2) scalar triplet exchange. c) Type III (right): heavy SU(2) fermion triplet exchange.
Combined seesaws
Different seesaws can be combined, or the concept extended. This section explains how to get bigger SEE schemes.
a) Type I+II Seesaw
The lagrangian for this seesaw reads:
(12) \begin{equation*} -\mathcal{L}_m=\dfrac{1}{2}\overline{\left( \nu _L\; N_R^c\right) } \begin{pmatrix} M_L & M_D \\ M_D^T & M_R \end{pmatrix} \begin{pmatrix} \nu _L ^c\\ N_R \end{pmatrix}+h.c. \end{equation*}
where M_D=\mathcal{Y}_\nu v/\sqrt{2}, M_L=\mathcal{Y}_\Delta v_\Delta and \langle H\rangle=v/\sqrt{2}. The standard diagonalization procedure gives:
(13) \begin{equation*} M_\nu =\begin{pmatrix} \hat{M}_\nu & 0 \\ 0 & \hat{M}_N \end{pmatrix} \end{equation*}
If we consider a general 3+3 flavor example, \hat{M}_\nu=diag(m_1,m_2,m_3) and also \hat{M}_N=diag(M_1,M_2,M_3).
In the so-called leading order approximation, the seesaw mass formula for the type I+II seesawlogy matrix is:
(14) \begin{equation*} m_\nu = M_L - M_DM_R^{-1}M^T_D \end{equation*}
Type I and type II seesaw matrix formulae can be obtained as limit cases of this combined case. Some further remarks:
• Both terms in the I+II formulae can be comparable in magnitude.
• If both terms are small, their contributions to the seesawlogy matrix may experience significant interference effects, making it impossible to distinguish between a type II and a type I+II seesaw.
• If both terms are large, the interference can be destructive. This is unnatural, since we obtain a small quantity from two big numbers. However, from the phenomenological viewpoint this is interesting, since it could provide some observable signatures for the heavy Majorana neutrinos.
b) Double Seesaw
A somewhat different seesaw structure, in order to understand the small neutrino masses, is obtained by adding additional fermionic singlets to the SM. This is also interesting in the context of GUT or left-right models. Consider the simple case with one extra singlet (a singlet even under the left-right gauge group, unlike the RH neutrino!). Then we obtain a 9\times9 seesaw matrix structure as follows:
(15) \begin{equation*} M_\nu =\begin{pmatrix} 0 & M_D & 0 \\ M_D^T & 0 & M_S \\ 0 & M_S^T & \mu \end{pmatrix} \end{equation*}
The lagrangian, after adding 3 RH neutrinos, 3 singlets S_R and one Higgs singlet \Phi follows:
(16) \begin{equation*} \mathcal{L}_{double}=\bar{l}_L\mathcal{Y}_lHE_R+\bar{l}_L \mathcal{Y}_\nu \bar{H}N_R+ \bar{N}^c_R\mathcal{Y}_S\Phi S_R+\dfrac{1}{2}\bar{S}_L^c M_\mu S_R+h.c. \end{equation*}
The mass matrix term can be read from
(17) \begin{equation*} -\mathcal{L}_m=\dfrac{1}{2}\overline{\left( \nu _L \; N_R^c \; S^c_R\right) }\begin{pmatrix} 0 & M_D & 0 \\ M_D^T & 0 & M_S \\ 0 & M_S^T & \mu \end{pmatrix} \begin{pmatrix} \nu ^c_L \\ N_R \\ S_R \end{pmatrix} \end{equation*}
and where M_D=\mathcal{Y}_\nu\langle H\rangle and M_S=\mathcal{Y}_S\langle\Phi\rangle. The zero/null entries can be justified in some models (like strings or GUTs) and, taking M_S>>M_D, the effective mass, after diagonalization, provides a light spectrum
(18) \begin{equation*} m_\nu = M_D\left(M_S^T\right)^{-1}\mu M_S^{-1}M_D^T \end{equation*}
When \mu >>M_S, the extra singlet decouples and shows a mass structure m_S=M_S\mu ^{-1}M^T_S, and it can be seen as an effective RH neutrino mass ruling a type I seesaw in the \nu_L -\nu ^c_L sector. Then, this singlet can be used as a “phenomenological bridge” between the GUT scale and the usual B-L scale (3 orders below the GUT scale in general). This double structure of the spectrum, in the sense that it is doubly suppressed by singlet masses, and its two interesting limits, justify the name “double” seesaw.
Inverse type I is a usual name for the double seesaw too, for some special parameter values. Setting \mu=0, the global lepton number U(1)_L is conserved and the neutrinos are massless. Neutrino masses go to zero, reflecting the restoration of global lepton number conservation. The heavy sector would be 3 pairs of pseudo-Dirac neutrinos, with CP-conjugated Majorana components and tiny mass splittings around the \mu scale. This particular model is very interesting, since it satisfies naturalness in the sense of 't Hooft.
c) Inverse type III Seesaw
It is an inverse plus type III seesawlogy matrix combination. We use a (\nu _L, \Sigma, S) basis, and we find the matrix
(19) \begin{equation*} M_\nu =\begin{pmatrix} 0 & M_D & 0 \\ M_D^T & M_\Sigma & M_S \\ 0 & M_S^T & \mu \end{pmatrix} \end{equation*}
Like the previous inverse seesaw, in the limit \mu \rightarrow 0, the neutrino mass is small and suppressed. The Dirac Yukawa coupling strength may be adjusted to order one, in contrast to the normal type III seesawlogy matrix. This mechanism has some curious additional properties:
• The charged lepton mass read off from the lagrangian is:
(20) \begin{equation*} M_{lep}=\begin{pmatrix} M_l & M_D \\ 0 & M_\Sigma \end{pmatrix}\end{equation*}
• After diagonalization of M_{lep}, of size (n+m)\times (n+m), the n\times n coupling matrix provides a neutral current (NC) lagrangian, and since this matrix turns out to be nonunitary, the Glashow-Iliopoulos-Maiani (GIM) mechanism is violated and sizeable tree-level flavor-changing neutral currents appear in the charged lepton sector.
d) Linear Seesaw
Another well-known low-scale SEE variant is the so-called linear seesaw. It usually arises from SO(10) GUTs and similar models. In the common (\nu , \nu ^c, S) basis, the seesawlogy matrix can be written as follows:
(21) \begin{equation*} M_\nu =\begin{pmatrix} 0 & M_D & M_L\\ M_D^T & 0 & M_S \\ M_L^T & M_S^T & 0 \end{pmatrix} \end{equation*}
The lepton number conservation is broken by the term M_L\nu S, and the effective light neutrino mass, after diagonalization, can be read from the next expression
(22) \begin{equation*} M_\nu= M_D(M_LM_S^{-1})^T+(M_LM_S^{-1})M_D^T \end{equation*}
This model also suffers from the same effect as the inverse seesaw. That is, in the limit M_L\rightarrow 0, the neutrino mass goes to zero and the theory exhibits naturalness. The name linear is due to the fact that the mass dependence on M_D is linear, and not quadratic like in other seesaws.
Multiple seesaws
In the literature (see the 2011 book and references therein), a big class of multiple seesaw models was introduced. Here we review the basic concepts and facts, before introducing the general formulae for multiple seesaws (MUSE):
• Main motivation: MUSEs try to satisfy both naturalness and testability at the TeV scale, in contrast with the other basic seesaws. Usually, a terrible fine-tuning is required to implement a seesaw, so that the ratio M_D/M_R and the Yukawa couplings can all be suitable for experimental observation, such as new particles or symmetries. This fine-tuning between M_D and M_R is aimed to be solved with MUSEs.
• Assuming a naive electroweak seesaw so that m \sim (\lambda \Lambda_{EW})^{n+1}/\Lambda ^n _S, where \lambda is a Yukawa coupling and n is an arbitrary integer larger than one, without any fine-tuning, one easily guesses:
(23) \begin{equation*} \Lambda _S\sim \lambda ^{\frac{n+1}{n}}\left[ \dfrac{\Lambda _{EW}}{100\,\mbox{GeV}}\right] ^{\frac{n+1}{n}}\left[ \dfrac{0.1\,\mbox{eV}}{m_\nu}\right] ^{1/n}10^{\frac{2(n+6)}{n}}\,\mbox{GeV} \end{equation*}
Thus, MUSEs provide a broad class of parameter ranges in which a TeV scale seesaw could be natural and testable (see the numerical sketch after this list).
• The simplest MUSE model at the TeV scale is to introduce some singlet fermions S^i_{nR} and scalars \Phi_n, with i=1,2,3 and n=1,2,\cdots. This field content can be realized with the implementation of a global U(1)\times Z_{2N} symmetry, which leads to two large classes of MUSEs with a nearest-neighbour interaction matrix pattern. The first class has an even number of S^i_{nR} and \Phi_n and corresponds to a straightforward extension of the basic seesaw. The second class has an odd number of S^i_{nR} and \Phi_n, and it is indeed a natural extension of the inverse seesaw.
• The phenomenological lagrangian giving rise to MUSEs is:
(24) \begin{eqnarray*} -\mathcal{L}_\nu =\bar{l}_L\mathcal{Y}_\nu \tilde{H}N_R+ \bar{N}^c_R\mathcal{Y}_{S_1}S_{1R}\Phi _1+ \sum _{i=2}^{n}\overline{S^c_{(i-1)R}}\mathcal{Y}_{S_i}S_{iR}\Phi _i+\nonumber \\ +\dfrac{1}{2}\overline{S^c_{nR}}M_\mu S_{nR}+h.c. \end{eqnarray*}
Here \mathcal{Y}_\nu and \mathcal{Y}_{S_i} are 3×3 Yukawa coupling matrices, and M_\mu is a symmetric Majorana mass matrix. After spontaneous symmetry breaking (SSB), we get a 3(n+2)\times 3(n+2) neutrino mass matrix \mathcal{M} in the flavor basis (\nu _L,N_R^c,S_{1R}^c,...,S_{nR}^c) and the respective charge-conjugated states, being
(25) \begin{equation*} \mathcal{M}=\begin{pmatrix} 0 & M_D & 0 & 0 & 0 & \cdots & 0 \\ M_D^T & 0 & M_{S_1} & 0 & 0 & \cdots & 0 \\ 0 & M_{S_1}^T & 0 & M_{S_2} & 0 & \cdots & 0 \\ 0 & 0 & M_{S_2}^T & 0 & \cdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & \cdots & M_{S_{n-1}} & 0 \\ \vdots & \vdots & \vdots & \cdots & M_{S_{n-1}}^T & 0 & M_{S_n} \\ 0 & 0 & 0 & \cdots & 0 & M_{S_{n}}^T & M_\mu \end{pmatrix} \end{equation*}
where we have defined M_D=\mathcal{Y}_\nu\langle H\rangle and M_{S_i}=\mathcal{Y}_{S_i}\langle\Phi _i\rangle, \forall i=1,...,n, each of them a 3\times 3 matrix. Note that Yukawa terms exist only if \vert i - j\vert =1, \forall i,j=0,1,...,n, and that \mathcal{M} can be written in block form before diagonalization as
(26) \begin{equation*} \mathcal{M}=\begin{pmatrix} 0 & \tilde{M}_D \\ \tilde{M}_D^T & \tilde{M}_\mu \end{pmatrix} \end{equation*}
with \tilde{M}_D=(M_D \; 0) a 3\times 3(n+1) matrix and \tilde{M}_\mu a symmetric 3(n+1)\times 3(n+1) mass matrix.
• \textbf{General phenomenological features}: \textit{non-unitary neutrino mixing} (in the submatrix blocks) and CP violation (novel effects due to non-unitarity or enhanced CP phases); \textit{collider signatures of heavy Majorana neutrinos} (for class A MUSEs the preferred channel is pp\rightarrow l_\alpha^\pm l_\beta^\pm X, i.e., the dilepton mode; for class B MUSEs, with M_\mu \ll M_{EW}, the favourite channel is pp\rightarrow l_\alpha^\pm l_\beta^\pm l_\gamma ^\pm X, i.e., the trilepton mode, and the mass spectrum of the heavy Majorana neutrinos would show a pairing phenomenon, with nearly degenerate masses that can be combined into the so-called pseudo-Dirac particles).
• \textbf{Dark matter particles}. One or more of the heavy Majorana neutrinos and gauge-singlet scalars in our MUSE could last almost forever, that is, they could have a very long lifetime and become good DM candidates. They could fit some kind of weakly interacting massive particle (WIMP) building the cold DM we observe.
Class A Seesaws
This MUSE is a generalization of the canonical SEE. MUSE A composition:
• Even number of gauge singlet fermion fields S^i_{nR}, n=2k, \;\; k=1,2,\ldots
• Even number of scalar fields \Phi_n, n=2k, \;\; k=1,2,\ldots
• Effective mass matrix of the 3 light Majorana neutrinos in the leading approximation:
(27) \begin{equation*} M_\nu =-M_D\left[ \prod_{i=1}^{k}\left( M^T_{S_{2i-1}}\right)^{-1}M_{S_{2i}} \right] M_\mu ^{-1} \left[ \prod_{i=1}^{k}\left( M^T_{S_{2i-1}}\right)^{-1} M_{S_{2i}}\right] ^T M_D^T \end{equation*}
When k=0, we obviously recover the traditional SEE M_\nu=-M_D M^{-1}_R M_D^T if we set S_{0R}=N_R and M_\mu = M_R. Note that plugging in M_{S_{2i}}\sim M_D \sim \mathcal{O}(\Lambda _{EW}) and M_{S_{2i-1}}\sim M_\mu \sim \mathcal{O}(\Lambda _{SEE}) gives M_\nu \sim \Lambda_{EW}^{2(k+1)}/\Lambda _{SEE}^{2k+1}, and hence we can effectively lower the usual SEE scale to the TeV range without losing testability.
Class B Seesaws
This MUSE is a generalization of inverse seesaw. MUSE B composition:
• Odd number of gauge singlet fermion fields S^i_{nR}, n=2k+1, \;\; k=1,2,\ldots
• Odd number of scalar fields \Phi_n, n=2k+1, \;\; k=1,2,\ldots
(28) \begin{eqnarray*} M_\nu =M_D\left[ \prod_{i=1}^{k}\left( M^T_{S_{2i-1}}\right)^{-1}M_{S_{2i}} \right] \left( M^T_{S_{2k+1}}\right)^{-1} \nonumber \\ \times M_\mu \left( M^T_{S_{2k+1}}\right)^{-1} \left[ \prod_{i=1}^{k}\left( M^T_{S_{2i-1}}\right)^{-1} M_{S_{2i}}\right] ^T M_D^T \end{eqnarray*}
When k=0, we evidently recover the traditional inverse SEE but with a low mass scale M_\mu:
M_\nu =M_D (M^T_{S_1})^{-1}M_\mu (M^T_{S_1})^{-1}M_D^T. Remarkably, if M_{S_{2i}}\sim M_D \sim \mathcal{O}(\Lambda _{EW}) and M_{S_{2i-1}}\sim \mathcal{O}(\Lambda _{SEE}) hold \forall i=1,2,...,k, the mass scale M_\mu does not need to be anywhere near as small as in the inverse SEE. Taking, for instance, n=3, the doubly suppressed M_\nu provides the ratios M_D/M_{S_1}\sim\Lambda _{EW}/\Lambda_{SEE} and M_{S_2}/M_{S_3}\sim \Lambda _{EW}/\Lambda_{SEE}, i.e., M_\nu \sim 0.1\,\text{eV} results from Y_{\nu}\sim Y_{S_1}\sim Y_{S_2}\sim Y_{S_3}\sim \mathcal{O}(1) and M_\mu \sim 1\,\text{keV} at \Lambda_{SEE}\sim 1\,\text{TeV}. A quick numerical cross-check of this structure is sketched below.
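Here is a minimal toy sketch (one generation, n=1, with all values assumed purely for illustration): diagonalizing the full mass matrix of eq. (25) numerically reproduces the leading-order class-B light mass of eq. (28) at k=0.

```python
import numpy as np

# Toy one-generation class-B (inverse-seesaw-like) check; values assumed.
MD, MS1, mu = 100.0, 1.0e3, 1.0e-6   # GeV

# Flavor basis (nu_L, N_R^c, S_1R^c), cf. eq. (25) with n = 1:
M = np.array([[0.0, MD,  0.0],
              [MD,  0.0, MS1],
              [0.0, MS1, mu ]])

exact  = np.linalg.eigvalsh(M)
approx = MD**2 * mu / MS1**2         # eq. (28) at k = 0, leading order
print(min(abs(exact)), approx)       # both ~1e-8 GeV = 10 eV
```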
Extra dimensional relatives: higher dimensional Seesaws
Several authors have introduced and studied a higher-dimensional cousin of the seesaw and the seesaw matrix. We consider a brane world theory with a 5d bulk (volume), where the SM particles are confined to the brane. We also introduce 3 SM singlet fermions \Psi _i with i=1,2,3. Being singlets, they are not restricted to the brane and can escape into the extra spacetime dimensions (EDs). The action responsible for the neutrino masses is given by
(29) \begin{equation*} S=S_{bulk,5d}+S_{brane,4d} \end{equation*}
(30) \begin{equation*} S_{bulk,5d}=\int d^4xdy\left[ i\overline{\Psi}\slashed{D}\Psi - \dfrac{1}{2}\left(\overline{\Psi^c}M_R\Psi +h.c. \right) \right] \end{equation*}
(31) \begin{equation*} S_{brane,4d}=\int _{y=0}d^4x \left[-\dfrac{1}{\sqrt{M_S}}\overline{\nu _L} m^c\Psi -\dfrac{1}{\sqrt{M_S}} \overline{\nu _L^c} m^c\Psi +h.c. \right] \end{equation*}
After a Kaluza-Klein (KK) reduction on a circle of radius R, we get the mass matrix for the n-th KK level
(32) \begin{equation*} \mathcal{M}_n= \begin{pmatrix} M_R & n/R\\ n/R & M_R \end{pmatrix} \end{equation*}
and a Dirac mass term with m_D=m/\sqrt{(2\pi M_S R)}. The KK tower is truncated at the level N, and we write the mass matrix in the suitable KK basis, to obtain:
(33) \begin{equation*} \mathcal{M}=\begin{pmatrix} 0 & m_D & m_D & m_D & m_D & \cdots & m_D \\ m_D^T & M_R & 0 & 0 & 0 & \cdots & 0 \\ m_D^T & 0 & M_R-\dfrac{1}{R} & 0 & 0 & \cdots & 0 \\ m_D^T & 0 & 0 & M_R+\dfrac{1}{R} & \cdots & \vdots & \vdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ m_D^T & 0 & 0 & 0 & 0 & M_R-\dfrac{N}{R} & 0 \\ m_D^T & 0 & 0 & \cdots & 0 & 0 & M_R+\dfrac{N}{R} \end{pmatrix} \end{equation*}
Note that M_R is not assumed to be at the electroweak scale; its value is free. We diagonalize the above matrix to get the light neutrino mass matrix:
(34) \begin{equation*} m_\nu \simeq m_D\left( \sum _{n=-N}^{N}\dfrac{1}{M_R+n/R}\right) m_D^T= m_D\left( M_R^{-1}+\sum _{n=1}^N\dfrac{2M_R}{M_R^2-n^2/R^2}\right) m_D^T \end{equation*}
The limit N\rightarrow \infty, already considered in other references, produces the spectrum
(35) \begin{equation*} m_\nu \simeq m_D\dfrac{\pi R}{\tan (\pi R M_R)}m_D^T \end{equation*}
At the level of the highest KK state, say N, the light neutrino mass becomes, neglecting the influence of the lower states,
(36) \begin{equation*} m_\nu \simeq m_D\left( \sum _{n=-N}^{N}\dfrac{1}{M_R+N/R}\right) m_D^T \end{equation*}
Then, irrespective of the value of M_R, if M_R\ll N/R the spectrum gets masses suppressed by N/R, i.e., m_\nu \simeq m_Dm_D^TR/N. Some further variants of this model can be built in a similar fashion to get different mass dependences on m_D (here quadratic).
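Before moving on, a small numerical sketch (toy one-generation values, assumed and chosen away from the poles of the tangent) showing how the truncated KK sum of eq. (34) approaches the closed form of eq. (35) as N grows:

```python
import numpy as np

MR, R = 0.3, 1.0     # arbitrary units, assumed

def kk_sum(N):
    # Truncated tower sum of eq. (34)
    return sum(1.0/(MR + n/R) for n in range(-N, N + 1))

closed = np.pi*R/np.tan(np.pi*R*MR)   # eq. (35), the N -> infinity limit
for N in (10, 100, 10000):
    print(N, kk_sum(N), closed)       # slow convergence to pi*R/tan(pi*R*M_R)
```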
Conclusion and outlook
The seesaw has a very interesting structure and is a remarkable neutrino mass mechanism BSM. It gives a way to obtain small masses from a high-energy cut-off scale yet to be found or adjusted. Neutrino oscillation experiments hint that the seesaw fundamental scale is just a bit below the GUT scale, although, as this review has shown and recalled, the nature and value of that seesaw energy scale is highly model dependent: the seesawlogy matrix mirrors the GUT/higher gauge symmetry involved in the small neutrino masses, the EW SSB and the particle content of the theory. Moreover, although the seesaw is the most natural way to induce light masses on neutrinos (or even on every particle, using some \textit{universal} seesaw), its realization in Nature is yet to be proved. In order to test the way, if any, in which the seesaw is present, experimental hints at colliders along the lines of this article, DM searches and other neutrino experiments (like those in neutrino telescopes, neutrino superbeams or neutrino factories) will be pursued now and in the future. We live indeed in an exciting experimental era, and the discovery of a sterile neutrino would be, according to Mohapatra, a bigger boost and a more impactful event than a hypothetical Higgs particle discovery. Their time is running now.
Final note: this text has some LaTeX code errata due to WordPress. I will not correct them. I have a pdf version of this article that you will be able to buy cheap soon at my shop here. I am not expensive at all…
1. Please, do not confuse the term with Sexology!
LOG#246. GR attacks, GR effects!
Newtonian gravity is not coherent with special relativity. Einstein was well aware of it, and he had to invent General Relativity (GR). Armed with the equivalence principle, a mystery since the ancient times of Galileo, the equivalence between inertial and gravitational mass guided him towards a better theory of gravity. He could envision properties of space-time as geometric features. He deduced that gravity was caused by space-time curvature, an idea that was already anticipated in the XIX century by B. Riemann in his habilitation thesis and by W. K. Clifford with his geometric algebra and calculus. Finally, and rivaling D. Hilbert, he arrived at the field equations (already seen in this blog):
(1) \begin{equation*} \tcboxmath{G_{\mu\nu}+\Lambda g_{\mu\nu}=\dfrac{8\pi G_N}{c^4}T_{\mu\nu}} \end{equation*}
where G_{\mu\nu}=R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}R is the Einstein tensor, G_N is the universal constant of gravity, and \Lambda is the cosmological constant. g_{\mu\nu} is the metric tensor (acting as gravitational potential in GR!), and the Ricci tensor, the Einstein tensor and the curvature scalar depend upon the metric and its derivatives up to second order in the derivatives. T_{\mu\nu} is the momentum-stress-energy tensor. Space-time (curvature!) tells matter and energy how to move; matter-energy tells space-time how to curve!
GR has a large number of tested phenomena! A list (non-exhaustive):
Tidal forces: F_M=\dfrac{2GMm\Delta r}{r^3}. Tidal forces are a consequence of space-time curvature, the source of gravity.
Gravitational time dilation. It also affects GPS systems (so GR is important for technology):
(2) \begin{equation*} \Delta t'=\dfrac{\Delta t}{\sqrt{1-\dfrac{2G_NM}{c^2r}}} \end{equation*}
and where R_S=2G_NM/c^2 is the Schwarzschild radius. A simpler way to see the metric effect (gravitational potential) is using the expression:
(3) \begin{equation*} \Delta t=\dfrac{gh}{c^2}t \end{equation*}
\Delta t is the time of the highest clock, at height h, with respect to the deep observer measuring t. The proof of this result can be done using a simple argument. A clock is some type of oscillator with frequency \nu. At two different points, the oscillator quanta must carry the same total energy, h\nu_1\left(1+\dfrac{U_1}{c^2}\right)=h\nu_2\left(1+\dfrac{U_2}{c^2}\right), with U the gravitational potential.
As frequency and period are inversely proportional, it gives
(4) \begin{equation*} \dfrac{\nu_1}{\nu_2}=\dfrac{1+\dfrac{U_2}{c^2}}{1+\dfrac{U_1}{c^2}}=\dfrac{\Delta t_2}{\Delta t_1} \end{equation*}
If \Delta t=\Delta t_2-\Delta t_1 and \Delta t_1=t, we recover the first formula \Delta t=ght/c^2 if we suppose that the potential is 0 at h_1 and gh at h_2. In the case where the height is not negligible with respect to the radius, the GR correction can be generalized to
(5) \begin{equation*} \delta_{GPS/GR}=\dfrac{\Delta t}{t}=\dfrac{U(r_s)-U(r_\oplus)}{c^2} \end{equation*}
This formula can be compared to the SR correction due to motion:
(6) \begin{equation*} \gamma=\dfrac{1}{\sqrt{1-\frac{v^2}{c^2}}}=1+\delta_{SR} \end{equation*}
Exercise: plug in the figures for r_s= 26\,561 km and r_\oplus=6370 km (a worked numerical sketch follows below). The net effect is that the orbital clock is faster than the deeper one by a factor 1+\delta_T, where \delta_T=\delta_{GR}-\delta_{SR}. We can see which effect dominates: it is the GR one. Estimate how many times more powerful the GR effect is than the SR effect. A more careful deduction, taking into account the ellipticity and the orbital parameters, can be derived too:
(7) \begin{equation*} \Delta t_r(GPS)=-2\dfrac{\sqrt{GM_\oplus a}}{c^2}e\sin E=-2\dfrac{\sqrt{GM_\oplus a}}{c^2}(E-M) \end{equation*}
and where G, M_\oplus, a, e, c, E, M are the universal gravitational constant, the Earth's mass, the semi-major axis, the eccentricity, the speed of light, the eccentric anomaly and the mean anomaly.
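Here is a worked sketch of the exercise above (standard Earth constants assumed, circular orbit approximation):

```python
G, M, c = 6.674e-11, 5.972e24, 2.998e8   # SI units, Earth values assumed
r_s, r_E = 26_561e3, 6_370e3             # m, radii from the exercise

delta_GR = G*M*(1/r_E - 1/r_s)/c**2      # eq. (5): orbital clock runs fast
v = (G*M/r_s)**0.5                       # circular orbital speed
delta_SR = v**2/(2*c**2)                 # eq. (6) expanded: clock runs slow

us_day = 86400e6                         # microseconds per day
print(f"GR: +{delta_GR*us_day:.1f} us/day, SR: -{delta_SR*us_day:.1f} us/day, "
      f"net: +{(delta_GR - delta_SR)*us_day:.1f} us/day")
```

The GR term (about +45.7 μs/day) beats the SR term (about −7.2 μs/day) by roughly a factor of 6, and the uncorrected net drift of ~38 μs/day corresponds to about 11 km of ranging error per day.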
Mercury precession and general orbital objects in GR. Before Einstein, even Le Verrier speculated about Vulcan, a further inner planet of the Solar System. Vulcan does not exist: the anomalous precession is a non-trivial GR effect:
(8) \begin{equation*} \Delta \phi=\dfrac{6\pi G_NM}{c^2R(1-e^2)}=\dfrac{24\pi^3 R^2}{c^2T^2(1-e^2)} \end{equation*}
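Plugging standard Mercury parameters into eq. (8) (with R the semi-major axis; all values assumed from common references) recovers the famous result:

```python
import math

GM_sun, c = 1.327e20, 2.998e8            # SI, standard values assumed
a, e, T_days = 5.79e10, 0.2056, 87.97    # Mercury's orbit

dphi = 6*math.pi*GM_sun/(c**2*a*(1 - e**2))        # rad per orbit, eq. (8)
arcsec = dphi*(36525/T_days)*(180/math.pi)*3600    # accumulated per century
print(f"{arcsec:.1f} arcsec/century")              # ~43.0
```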
Gravitational lensing (checked with success in the A. Eddington expedition in 1919):
(9) \begin{equation*} \Theta_{L}=\dfrac{4G_NM}{c^2r}=\dfrac{2R_S}{r}=\dfrac{D_S}{r} \end{equation*}
Prediction of gravitational waves and gravitational radiation (newtonian theory is eternal and does not shrink orbits). Gravitational waves move at the speed of light (or is light the wave that moves at the maximal speed allowed by space-time?), v_g=c. Every body loses energy via gravitational radiation. It was confirmed indirectly by the binary pulsar observations in the 20th century, and LIGO announced the first direct GW detection in 2016. Without sources, the wave equation for gravitational waves is
(10) \begin{equation*} \square \overline{h}_{\mu\nu}=0 \end{equation*}
Gravitomagnetic effect (tested by Gravity Probe B), or Lense-Thirring. The first formula reads:
(11) \begin{equation*} \dot{\Omega}=\dfrac{R_S ac}{r^3+a^2r+R_Sa^2}\left(\dfrac{360}{2\pi}\right),\;\;\; R_S=\dfrac{2G_NM}{c^2},\;\;\; a=\dfrac{2R_\star^2}{5c}\left(\dfrac{2\pi}{T}\right) \end{equation*}
and the second formula is
(12) \begin{equation*} \dot{\Omega}=\dfrac{2GJ}{c^2a^3(1-e^2)^{3/2}}\left(\dfrac{360}{2\pi}\right)=\dfrac{2G^2M^2\chi}{c^3a^3(1-e^2)^{3/2}}\left(\dfrac{360}{2\pi}\right) \end{equation*}
Existence of black holes, objects so dense that light is trapped. The idea was anticipated as “dark stars” by other scientists; a more precise definition of black holes is as vacuum solutions to the EFE (Einstein Field Equations).
Vacuum energy density, via \Lambda, which yields an energy density even for the vacuum:
(13) \begin{equation*} \rho_\Lambda=\dfrac{\Lambda c^4}{8\pi G_N} \end{equation*}
Cosmic expansion and Big Bang theory, as large scale consequences of the EFE. A simplification can be done using the cosmological principle: the Universe is homogeneous and isotropic at very large scales. It simplifies the solutions to the EFE, and a special class of metrics, the so-called Friedmann-Robertson-Walker metrics, allows us to study the expanding universe and its history.
Other GR effects: the de Sitter geodesic effect, strong and weak (or other) equivalence principles, the Shapiro delay (fourth classical test of GR), the no hair theorem, … The Shapiro delay formula reads
(14) \begin{equation*} \Delta t_S=-\dfrac{2GM}{c^3}\ln \left(1-\mathbf{R}\cdot \mathbf{r}\right) \end{equation*}
GR also gives the possibility of time machines, wormholes, and TARDIS-like space-times. Other solutions: regular black holes, cosmic strings, cosmic defects, black p-branes,… Recently, it was realized that the reduction of GR to SR in weak fields is non-trivial due to asymptotic symmetries. The BMS group provides the full set of symmetries of GR, plus a new set of symmetries. Supertranslations, superrotations and superboosts then enlarge the BMS group into the extended BMS group, a sort of conformal or superconformal symmetry. Gravitational memory effects have also been studied recently. GR is a simple gravitational theory, the simplest of a bigger set of theories.
Extensions of GR can be studied, either as alternatives to GR or as theories having GR as an approximation: multimetric theories, higher derivative extensions (Finsler geometry, Lanczos-Lovelock, torsion theories like Cartan's, non-metric theories, tensor-scalar theories, teleparallelism,…) and many others.
At the quantum level, gravitation is not completely understood. The best candidate for a TOE and GR extension is string theory, a.k.a. superstring theory or M-theory. Supergravity also remains an option. Other alternative theories are Loop Quantum Gravity, twistor theory and extended relativities. GR involves the existence of space-time singularities where the usual physical laws do not apply. No one knows how to treat the issue of the beginning of time and space-time.
See you in other blog post!
P.S.: Off-topic: there are only 3 posts left before I leave this type of blogging. The post 250 will be special, and I will try to do it with my new “face” and interface. WordPress has been doing badly and delaying my posting these weeks (beyond other stuff). I cannot lose time checking whether the LaTeX is encoded OK here, so I will change that: I will post in pure pdf format from the 250th post and beyond. Maybe I will move the site. I am not sure about that, but anyway I think I will keep this site as the previous (free) one, since it will be reconverted into a shop for my materials too. Consider making a donation to support my writing job here. Also, you will be able to buy my blog posts edited with LaTeX soon. Let me know if you would buy them all or only a few of them; it would help me to decide how to move forward. Furthermore, I will assist sci-fi and movie/TV-series/plot makers with Scientific Consultancy. I could even suggest what kind of equations or theories could support your stories (if any!).
LOG#245. What is fundamental?
Some fundamental mottos:
Fundamental spacetime: no more?
Fundamental spacetime falls: no more?
Fundamentalness vs emergence(ness) is an old fight in Physics. Another typical mantra is not Shamballa but the old endless debate between what theory is fundamental (or basic) and what theory is effective (or derived). Dualities in superstring/M-theory changed what we usually meant by fundamental and derived, just as the AdS/CFT correspondence or map changed what we knew about holography and dimensions/forces.
Generally speaking, physics is about observables, laws, principles and theories. These entities (objects) have invariances or symmetries related to dynamics and kinematics. Changes or motions of the entities, being the fundamental (or derived) degrees of freedom of different theories and models, provide relationships between them, with some units, magnitudes and systems of units being more suitable for calculations. Mathematics is similar (even when it is purer). Objects are related to theories, axioms and relations (functors and so on). Numbers are the key to mathematics, just as they measure changes in forms or functions that serve us to study geometry, calculus and analysis from different abstract-like viewpoints.
The cross-over between Physics and Mathematics is called Physmatics. The merger of physics and mathematics is necessary and maybe inevitable to understand the whole picture. Observers are related to each other through transformations (symmetries) that also hold for force fields. Different frameworks are allowed in such a way that the true ideal world becomes the real world. Different universes are possible in mathematics and physics, and thus in physmatics too. Interactions between Universes are generally avoided in physics, but are a main keypoint for mathematics and the duality revolution (yet unfinished). Is SR/GR relativity fundamental? Is QM/QFT fundamental? Are fields fundamental? Are the fundamental forces fundamental? Is there a unique fundamental force and force field? Is symplectic mechanics fundamental? What about Nambu mechanics? Is the spacetime fundamental? Is momenergy fundamental?
Newtonian physics is based on the law
(1) \begin{equation*} F^i=ma_i=\dfrac{dp_i}{dt} \end{equation*}
Relativistic mechanics generalizes the above equation into a 4d set-up:
(2) \begin{equation*} \mathcal{F}=\dot{\mathcal{P}}=\dfrac{d\mathcal{P}}{d\tau} \end{equation*}
where p_i=mv_i and \mathcal{P}=M\mathcal{V}. However, why not change the newtonian law into
(3) \begin{equation*}F_i=ma_0+ma_i+\varepsilon_{ijk}b^ja^k+\varepsilon_{ijk}c^jv^k+c_iB^{jk}a_jb_k+\cdots\end{equation*}
(4) \begin{equation*}\vec{F}=m\vec{a}_0+m\vec{a}+\vec{b}\times\vec{a}+\vec{c}\times\vec{v}+\vec{c}\left(\vec{a}\cdot\overrightarrow{\overrightarrow{B}} \cdot \vec{b}\right)+\cdots\end{equation*}
Quantum mechanics is yet a mystery after a century of success! The principle of correspondence
(5) \begin{equation*} p_\mu\rightarrow -i\hbar\partial_\mu \end{equation*}
allows us to arrive at commutation relations like
(6) \begin{align*} \left[x^j,p_k\right]=i\hbar\delta^j_{\;\; k}\\ \left[L^i,L^j\right]=i\hbar\varepsilon_{k}^{\;\; ij}L^k\\ \left[x_\mu,x_\nu\right]=\Theta_{\mu\nu}=iL_p^2\theta_{\mu\nu}\\ \left[p_\mu,p_\nu\right]=K_{\mu\nu}=iL_{\Lambda}K_{\mu\nu} \end{align*}
and where the last two lines are the controversial space-time uncertainty relationships if you consider space-time is fuzzy at the fundamental level. Many quantum gravity approaches suggest it.
Let me focus now on the case of emergence and effectiveness. Thermodynamics is a macroscopic part of physics, where the state variables internal energy, free energy or entropy (U,H,S,F,G) play a big role in the knowledge of the extrinsic behaviour of bodies and systems. BUT, statistical mechanics (pioneered by Boltzmann in the 19th century) showed us that those macroscopic quantities are derived from a microscopic formalism based on atoms and molecules. Therefore, black hole thermodynamics points out that there is a statistical physics of spacetime atoms and molecules that brings us the black hole entropy and, ultimately, the space-time as a fine-grained substance. The statistical physics of quanta (of action) provides the basis for field theory in the continuum. Fields are a fluid-like substance made of stuff (atoms and molecules). Dualities? Well, yet a mystery: they seem to say that the forces or fields you need to describe a system are dimension-dependent. Also, the fundamental degrees of freedom are entangled or mixed (perhaps we should say mapped) from one theory into another.
I will speak about some analogies:
1st. Special Relativity(SR) involves the invariance of objects under Lorentz (more generally speaking Poincaré) symmetry: X'=\Lambda X. Physical laws, electromagnetism and mechanics, should be invariant under Lorentz (Poincaré) transformations. That will be exported to strong forces and weak forces in QFT.
2nd. General Relativity (GR). Adding the equivalence principle to the picture, Einstein explained gravity as curvature of spacetime itself. His field equations for gravity can be stated in words as the motto Curvature equals Energy-Momentum, in some system of units. Thus, geometry is related to dislocations in matter and vice versa: changes in the matter-energy distribution are due to geometry or gravity. Changing our notion of geometry will change our notion of spacetime and the effect on matter-energy.
3rd. Quantum mechanics (non-relativistic). Based on the correspondence principle and the idea of matter waves, we can build up a theory in which particles and waves are related to each other. Commutation relations arise: \left[x,p\right]=i\hbar, p=h/\lambda, and the Schrödinger equation follows, H\Psi=E\Psi.
4th. Relativistic quantum mechanics, also called Quantum Field Theory(QFT). Under gauge transformations A\rightarrow A+d\varphi, wavefunctions are promoted to field operators, where particles and antiparticles are both created and destroyed, via
\[\Psi(x)=\sum a^+u+a\overline{u}\]
Fields satisfy wave equations, F(\phi)=f(\square)\Phi=0. Vacuum is the state with no particles and no antiparticles (really this is a bit more subtle, since you can have fluctuations), and the vacuum is better defined as the maximal symmetry state, \vert\emptyset\rangle=\sum F+F^+.
5th. Thermodynamics. The 4 or 5 thermodynamical laws follow up from state variables like U, H, G, S, F. The absolute zero can NOT be reached. Temperature is defined in the thermodynamical equilibrium. dU=\delta(Q+W), \dot{S}\geq 0. Beyond that, S=k_B\ln\Omega.
6th. Statistical mechanics. Temperature is a measure of the kinetic energy of atoms and molecules. Energy is proportional to frequency (Planck). Entropy is a measure of how many different microscopic configurations a system has.
7th. Kepler problem. The two-body problem can be reduced to a single one-body, one-centre problem. It has hidden symmetries that make it integrable. In D dimensions, the Kepler problem has a hidden O(D) (SO(D) after a simplification) symmetry. Beyond energy and angular momentum, you get a conserved vector called the Laplace-Runge-Lenz-Hamilton eccentricity vector.
8th. Simple Harmonic Oscillator. For a single HO, you also have a hidden symmetry U(D) in D dimensions. There is an additional symmetric tensor that is conserved.
9th. Superposition and entanglement. Quantum Mechanics taught us about the weird quantum reality: quantum entities CAN exist simultaneously in several space positions at the same time (thanks to quantum superposition). Separable states are not entangled. Entangled states are non-separable. Wave functions of composite systems can sometimes be entangled AND non-separable into two subsystems.
Information is related, as I said in my second log post, to the sum of signal and noise. The information flow follows from a pattern and a dissipative term in general. Classical geometry involves (real) numbers, which can be related to matrices (orthogonal transformations, galilean boosts or space translations). Finally, tensors are inevitable in gravity and in the riemannian geometry underlying GR. This realness can be compared to the complex geometry necessary in Quantum Mechanics and QFT. Wavefunctions are generally complex-valued functions, and they evolve unitarily in complex quantum mechanics. Quantum d-dimensional systems are qudits (quits, for short, being an equivalent name for a quantum field, an infinite-level quantum system):
(7) \begin{align*} \vert\Psi\rangle=\vert\emptyset\rangle=c\vert\emptyset\rangle=\mbox{Void/Vacuum}\\ \langle\Psi\vert\Psi\rangle=\vert c\vert^2=1 \end{align*}
(8) \begin{align*} \vert\Psi\rangle=c_0\vert 0\rangle+c_1\vert 1\rangle=\mbox{Qubit}\\ \langle\Psi\vert\Psi\rangle=\vert c_0\vert^2+\vert c_1\vert^2=1\\ \vert\Psi\rangle=c_0\vert 0\rangle+c_1\vert 1\rangle+\cdots+c_{d-1}\vert d-1\rangle=\mbox{Qudit}\\ \sum_{i=0}^{d-1}\vert c_i\vert^2=1 \end{align*}
(9) \begin{align*} \vert\Psi\rangle=\sum_{n=0}^\infty c_n\vert n\rangle=\mbox{Quits}\\ \langle\Psi\vert\Psi\rangle=\sum_{i=0}^\infty \vert c_i\vert^2=1:\mbox{Quantum fields/quits} \end{align*}
(10) \begin{align*} \vert\Psi\rangle=\int_{-\infty}^\infty dx\, f(x)\vert x\rangle:\mbox{conquits/continuum quits}\\ \mbox{Quantum fields}: \int_{-\infty}^\infty \vert f(x)\vert^2 dx = 1,\;\; f\in L^2(\mathcal{R}) \end{align*}
0.1. SUSY. The Minimal Supersymmetric Standard Model (MSSM) has a particle content doubling the SM spectrum with superpartners.
To go beyond the SM (BSM), and try to explain vacuum energy, the cosmological constant, the hierarchy problem, dark matter, dark energy, the unification of radiation with matter, and other phenomena, long ago we created the framework of supersymmetry (SUSY). Essentially, SUSY is a mixed symmetry between space-time symmetries and internal symmetries. SUSY generators are spinorial (anticommuting c-numbers or Grassmann numbers). Ultimately, SUSY generators are bivectors or, more generally speaking, multivectors. The square of a SUSY transformation is a space-time translation. Why SUSY anyway? There was another motivation, at least before the new cosmological constant problem (lambda is not zero but very close to zero): originally, unbroken SUSY could explain why lambda was exactly zero. Not anymore, and we do need to break SUSY somehow, since breaking SUSY introduces a vacuum energy into the theories. Any superalgebra (supersymmetric algebra) has generators P_\mu, M_{\mu\nu}, Q_\alpha. In vacuum, QFT says that fields are a set of harmonic oscillators. For spin j, the vacuum energy per mode becomes
(52) \begin{equation*} \varepsilon_0^{(j)}=\dfrac{\hbar \omega_j}{2} \end{equation*}
(53) \begin{equation*} \omega_j=\sqrt{k^2+m_j^2} \end{equation*}
Summing over all the oscillators, the vacuum energy associated with a spin-j field is
(54) \begin{equation*} E_0^{(j)}=\sum \varepsilon_0^{(j)}=\dfrac{1}{2}(-1)^{2j}(2j+1)\displaystyle{\sum_k}\hbar\sqrt{k^2+m_j^2} \end{equation*}
Taking the continuum limit, we have the vacuum master integral, the integral of cosmic energy:
(55) \begin{equation*} E_0(j)=\dfrac{1}{2}(-1)^{2j}(2j+1)\int_0^\Lambda d^3k\sqrt{k^2+m_j^2} \end{equation*}
Expand the square root in powers of m/k up to 4th order to get
(56) \begin{equation*} E_0(j)=\dfrac{1}{2}(-1)^{2j}(2j+1)\int_0^\Lambda d^3k k\left[1+\dfrac{m_j^2}{2k^2}-\dfrac{1}{8}\left(\dfrac{m_j^2}{k^2}\right)^2+\cdots\right] \end{equation*}
(57) \begin{equation*} E_0(j)=A(j)\left[a_4\Lambda^4+a_2\Lambda^2+a_{log}\log(\Lambda)+\cdots\right] \end{equation*}
If we want the absence of quartic divergences, associated with the cosmological constant and the UV cut-off, we require
(58) \begin{equation*} \tcboxmath{ \sum_j(-1)^{2j}(2j+1)=0} \end{equation*}
If we want absence of quadratic divergences, due to the masses of particles as quantum fields, we need
(59) \begin{equation*} \tcboxmath{\sum_j(-1)^{2j}(2j+1)m_j^2=0} \end{equation*}
Finally, if we require that there are no logarithmic divergences, associated with the long-distance behavior and renormalization, we impose that
(60) \begin{equation*} \tcboxmath{\sum_j(-1)^{2j}(2j+1)m_j^4=0} \end{equation*}
Those 3 sum rules are verified if, simultaneously, we have that
(61) \begin{equation*} N_B=N_F \end{equation*}
(62) \begin{equation*} M_B=M_F \end{equation*}
That is, an equal number of bosons and fermions, and the same masses for all the boson and fermion modes. These conditions are satisfied by SUSY, but the big issue is that the SM is NOT supersymmetric and the masses of the particles don't seem to verify all the above sum rules, at least in a trivial fashion. These 3 relations, in fact, do appear in supergravity and maximal SUGRA in eleven dimensions. We do know that 11D supergravity is the low energy limit of M-theory. SUSY must be broken at some energy scale we don't know where and why. In maximal SUGRA, at the level of 1-loop, we have indeed those 3 sum rules plus another one. In compact form, they read
(63) \begin{equation*} \tcboxmath{\sum_{J=0}^{2}(-1)^{2J}(2J+1)(M^{2}_J)^k=0,\;\;\; k=0,1,2,3} \end{equation*}
Furthermore, these sum rules imply, according to Scherk, that there is a non-zero cosmological constant in maximal SUGRA.
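A trivial numerical illustration (a toy degenerate multiplet, assumed): two real scalar degrees of freedom balanced against one Majorana fermion of the same mass saturate the sum rules of eqs. (58)-(60) at once.

```python
spectrum = [(0.0, 100.0), (0.0, 100.0), (0.5, 100.0)]   # (spin j, mass), assumed

for k in range(3):
    # sum_j (-1)^{2j} (2j+1) m_j^{2k}, cf. eqs. (58)-(60)
    s = sum((-1)**int(2*j)*(2*j + 1)*m**(2*k) for j, m in spectrum)
    print(f"k={k}: {s}")                                # 0.0 in all three cases
```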
\textbf{Exercise}. Prove that the photon, gluon or graviton energy density can be written in the following way
In addition to that, prove that the energy density of a massive fermionic field of mass m is given by
Compare the physical dimensions in both cases.
0.2. Extra dimensions. D-dimensional gravity in newtonian form reads:
(64) \begin{equation*} F_G=G_N(D)\dfrac{Mm}{r^{D-2}} \end{equation*}
Compactifying the D-4 extra dimensions on a volume L^{D-4}:
(65) \begin{equation*} F_G=G_N(D)\dfrac{Mm}{L^{D-4}r^2} \end{equation*}
and then
(66) \begin{equation*} \tcboxmath{ G_4=\dfrac{G_N(D)}{L^{D-4}}} \end{equation*}
or with M_P^2=\dfrac{\hbar c}{G_N},
(67) \begin{equation*} \tcboxmath{M_P^2=V(XD)M_\star^2} \end{equation*}
Thus, weakness of gravity is explained due to dimensional dilution.
Similarly, for gauge fields:
(68) \begin{equation*} \tcboxmath{ g^2(4d)=\dfrac{g^2(XD)}{V_X}} \end{equation*}
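As a rough numerical sketch of this dilution (an ADD-style reading of eq. (67) with a hypothetical scale M_\star=1 TeV and n flat extra dimensions of common size R, i.e., M_P^2\sim M_\star^{2+n}R^n; everything here is assumed for illustration):

```python
M_P, M_star = 1.22e19, 1.0e3   # GeV: Planck mass, hypothetical M_* = 1 TeV
hbar_c = 1.973e-16             # m*GeV, conversion factor

for n in (1, 2, 3):
    R = (M_P**2/M_star**(n + 2))**(1.0/n)     # compactification size in GeV^-1
    print(f"n={n}: R ~ {R*hbar_c:.1e} m")
# n=1 gives ~3e13 m (excluded!), n=2 ~2e-3 m, n=3 ~1e-8 m
```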
LOG#244. Cartan calculus.
In 2d, for an orthonormal coframe (\theta^1,\theta^2) with zero torsion, the Cartan structure equations read
(1) \begin{align*} d\theta^1+\omega^1_{\;\;2}\wedge \theta^2=0\\ d\theta^2+\omega^2_{\;\;1}\wedge \theta^1=0 \end{align*}
The connection form reads
(2) \begin{equation*} \omega=\begin{pmatrix} \omega^1_{\;\; 1} & \omega^1_{\;\; 2}\\ \omega^2_{\;\; 1} & \omega^2_{\;\; 2} \end{pmatrix} \end{equation*}
Now, we can introduce the so-called curvature k=\Omega^1_{\;\; 2}(e_1,e_2) and the curvature 2-form, since from d\omega^1_{\;\;2}=k\theta^1\wedge\theta^2, we will get
(3) \begin{equation*} \Omega=\begin{pmatrix} \Omega^1_{\;\; 1} & \Omega^1_{\;\; 2}\\ \Omega^2_{\;\; 1} & \Omega^2_{\;\; 2}\end{pmatrix} \end{equation*}
The generalization to n-dimensional manifolds is quite straightforward. Arrange the canonical 1-forms \theta and the torsion 2-forms \Theta as columns,
(4) \begin{equation*} \theta=\begin{pmatrix}\theta^1 \\ \vdots \\ \theta^n\end{pmatrix} \end{equation*}
and
(5) \begin{equation*} \Theta=\begin{pmatrix} \Theta^1 \\ \vdots \\ \Theta^n\end{pmatrix} \end{equation*}
With the matrices of forms \omega=\omega^i_{\;\; j} and \Omega=\Omega^i_{\;\; j}, antisymmetric and n\times n, we can derive the structure equations:
(6) \begin{equation*} \tcboxmath{\Theta=d\theta+\omega\wedge \theta} \end{equation*}
(7) \begin{equation*} \tcboxmath{\Omega=d\omega+\omega\wedge\omega} \end{equation*}
Note that
(8) \begin{equation*} \Theta^k=T^k_{ij}\theta^i\wedge\theta^j \end{equation*}
The connection forms satisfy
(9) \begin{align*} \nabla_X e=e\omega(X)\\ \nabla e=e\omega \end{align*}
The gauging of the connection and curvature forms provides
(10) \begin{align*} \overline{\omega}=a^{-1}\omega a+a^{-1}da\\ \overline{\Omega}=a^{-1}\Omega a \end{align*}
since \overline{e}=ea and e=\overline{e}a^{-1}, as matrices. Note, as well, that the characteristic class formula
(11) \begin{equation*} \int_M e(M)=\int_M \mbox{Pf}\left(\dfrac{\Omega}{2\pi}\right)=\chi(M) \end{equation*}
is satisfied, with
(12) \begin{equation*} \mbox{det}\left(I+\dfrac{i\Omega}{2\pi}\right)=1+c_1(E)+\cdots+c_k(E) \end{equation*}
Now, we also have the Bianchi identities
(13) \begin{equation*} \tcboxmath{d\Theta=\Omega\wedge\theta-\omega\wedge\Theta} \end{equation*}
(14) \begin{equation*} \tcboxmath{d\Omega=\Omega\wedge\omega-\omega\wedge\Omega} \end{equation*}
Check follows easily:
(15) \begin{align*} d\theta=\Theta-\omega\wedge\theta\\ d\omega=\Omega-\omega\wedge\omega\\ d\Theta=\Omega\wedge\theta-\omega\wedge\Theta\\ d\Omega=\Omega\wedge\omega-\omega\wedge\Omega\\ d(\Omega^k)=\Omega^k\wedge\omega-\omega\wedge\Omega^k \end{align*}
From these equations:
\[d\Theta=d(d\theta)+d\omega\wedge\theta-\omega\wedge d\theta\]
Substituting d\omega=\Omega-\omega\wedge\omega and d\theta=\Theta-\omega\wedge\theta, we get
\[d\Theta=\Omega\wedge\theta-\omega\wedge\Theta\;\;\; Q.E.D.\]
since \omega\wedge\omega\wedge\theta=0. On the other hand, we also deduce the 2nd Bianchi identity
\[d\Omega=d^2\omega+d\omega\wedge\omega-\omega\wedge d\omega\]
Note that d(d\omega)=d^2\omega=0. Then,
\[d\Omega=d\omega\wedge\omega-\omega\wedge d\omega=(\Omega-\omega\wedge\omega)\wedge \omega-\omega\wedge(\Omega-\omega\wedge\omega)\]
and thus
\[d\Omega=\Omega\wedge\omega-\omega\wedge\Omega\;\; Q.E.D.\]
Remember: d\theta gives the 1st structure equation, d\omega gives the 2nd structure equation, d\Theta gives the first Bianchi identity, and d\Omega provides the 2nd Bianchi identity.
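As a concrete sanity check of all this machinery, here is the standard textbook example of the unit 2-sphere (not part of the original derivation). Take the orthonormal coframe \theta^1=d\vartheta, \theta^2=\sin\vartheta\, d\phi. Imposing zero torsion in the 1st structure equation fixes the connection form:
\[d\theta^2+\omega^2_{\;\;1}\wedge\theta^1=0\;\Longrightarrow\; \omega^2_{\;\;1}=\cos\vartheta\, d\phi=-\omega^1_{\;\;2}\]
The 2nd structure equation then yields the curvature form and the curvature k=1 of the unit sphere:
\[\Omega^1_{\;\;2}=d\omega^1_{\;\;2}=\sin\vartheta\, d\vartheta\wedge d\phi=\theta^1\wedge\theta^2\;\Longrightarrow\; k=1\]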
LOG#243. Elliptic trigonometry.
Jacobi elliptic functions allow us to solve many physical problems. Today I will briefly review some of their features. Let me first highlight that the simple pendulum, the Euler asymmetric top, the heavy top, the Duffing oscillator, the Seiffert spiral motion, and the Ginzburg-Landau theory of superconductivity are all places where Jacobi functions arise.
Firstly, you should know there are three special Jacobi functions, named \mbox{sn}, \mbox{cn} and \mbox{dn}. The addition formulae for these 3 functions resemble those from euclidean or hyperbolic geometry:
(1) \begin{equation*} \tcboxmath{\mbox{sn}(\alpha+\beta)=\dfrac{\mbox{sn}(\alpha)\mbox{cn}(\beta)\mbox{dn}(\beta)+\mbox{sn}(\beta)\mbox{cn}(\alpha)\mbox{dn}(\alpha)}{1-k^2\mbox{sn}^2(\alpha)\mbox{sn}^2(\beta)}} \end{equation*}
(2) \begin{equation*} \tcboxmath{\mbox{cn}(\alpha+\beta)=\dfrac{\mbox{cn}(\alpha)\mbox{cn}(\beta)-\mbox{sn}(\alpha)\mbox{sn}(\beta)\mbox{dn}(\alpha)\mbox{dn}(\beta)}{1-k^2\mbox{sn}^2(\alpha)\mbox{sn}^2(\beta)}} \end{equation*}
(3) \begin{equation*} \tcboxmath{\mbox{dn}(\alpha+\beta)=\dfrac{\mbox{dn}(\alpha)\mbox{dn}(\beta)-k^2\mbox{sn}(\alpha)\mbox{sn}(\beta)\mbox{cn}(\alpha)\mbox{cn}(\beta)}{1-k^2\mbox{sn}^2(\alpha)\mbox{sn}^2(\beta)}} \end{equation*}
and where k^2=m is the modulus of the Jacobi elliptic functions. To prove these addition theorems, we can take some hard paths. Let me define the derivatives:
(4) \begin{equation*} \dfrac{d\,\mbox{sn}(\gamma)}{d\gamma}=\mbox{cn}(\gamma)\mbox{dn}(\gamma) \end{equation*}
(5) \begin{equation*} \dfrac{d\,\mbox{cn}(\gamma)}{d\gamma}=-\mbox{sn}(\gamma)\mbox{dn}(\gamma) \end{equation*}
(6) \begin{equation*} \dfrac{d\,\mbox{dn}(\gamma)}{d\gamma}=-k^2\mbox{sn}(\gamma)\mbox{cn}(\gamma) \end{equation*}
and where
(7) \begin{align*} \mbox{sn}^2(\gamma)+\mbox{cn}^2(\gamma)=1\\ k^2\mbox{sn}^2(\gamma)+\mbox{dn}^2(\gamma)=1\\ \mbox{dn}^2(\gamma)-k^2\mbox{cn}^2(\gamma)=1-k^2 \end{align*}
and where the initial conditions \mbox{sn}(0)=0, \mbox{cn}(0)=1, \mbox{dn}(0)=1 are often assumed. A more symmetric form of these equations can be deduced (exercise!):
(8) \begin{equation*} \tcboxmath{\mbox{sn}(\alpha+\beta)=\dfrac{\mbox{sn}^2(\beta)\mbox{dn}^2(\alpha)-\mbox{sn}^2(\alpha)\mbox{dn}^2(\beta)}{\mbox{sn}(\beta)\mbox{cn}(\alpha)\mbox{dn}(\alpha)-\mbox{sn}(\alpha)\mbox{cn}(\beta)\mbox{dn}(\beta)}} \end{equation*}
Using that
(11) \begin{align*} \mbox{dn}^2(\gamma)-\mbox{cn}^2(\gamma)=(1-k^2)\mbox{sn}^2(\gamma)\\ \dfrac{\mbox{dn}(\gamma)+\mbox{cn}(\gamma)}{\mbox{sn}(\gamma)}=(1-k^2)\dfrac{\mbox{sn}(\gamma)}{\mbox{dn}(\gamma)-\mbox{cn}(\gamma)} \end{align*}
you can derive the third form of the addition theorem for Jacobi elliptic functions:
(12) \begin{equation*} \tcboxmath{\mbox{sn}(\alpha+\beta)=\dfrac{\mbox{sn}(\beta)\mbox{cn}(\alpha)\mbox{dn}(\beta)+\mbox{sn}(\alpha)\mbox{cn}(\beta)\mbox{dn}(\alpha)}{\mbox{dn}(\alpha)\mbox{dn}(\beta)+k^2\mbox{sn}(\alpha)\mbox{cn}(\alpha)\mbox{sn}(\beta)\mbox{cn}(\beta)}} \end{equation*}
(13) \begin{equation*} \tcboxmath{\mbox{cn}(\alpha+\beta)=\dfrac{\mbox{cn}(\alpha)\mbox{dn}(\alpha)\mbox{cn}(\beta)\mbox{dn}(\beta)-(1-k^2)\mbox{sn}(\alpha)\mbox{sn}(\beta)}{\mbox{dn}(\alpha)\mbox{dn}(\beta)+k^2\mbox{sn}(\alpha)\mbox{cn}(\alpha)\mbox{sn}(\beta)\mbox{cn}(\beta)}} \end{equation*}
(14) \begin{equation*} \tcboxmath{\mbox{dn}(\alpha+\beta)=\dfrac{(1-k^2)\mbox{sn}^2(\beta)+\mbox{cn}^2(\beta)\mbox{dn}^2(\alpha)}{\mbox{dn}(\alpha)\mbox{dn}(\beta)+k^2\mbox{sn}(\alpha)\mbox{cn}(\alpha)\mbox{sn}(\beta)\mbox{cn}(\beta)}} \end{equation*}
Finally the fourth form of the addition theorem for these functions can be found from algebra, to yield:
(16) \begin{equation*} \tcboxmath{\mbox{cn}(\alpha+\beta)=\dfrac{\mbox{cn}^2(\beta)\mbox{dn}^2(\alpha)-(1-k^2)\mbox{sn}^2(\alpha)}{\mbox{cn}(\alpha)\mbox{cn}(\beta)+\mbox{sn}(\alpha)\mbox{sn}(\beta)\mbox{dn}(\alpha)\mbox{dn}(\beta)}} \end{equation*}
Du Val showed long ago that these 4 forms can be derived from a language of five 4d vectors that are parallel to each other. The vectors are
(18) \begin{equation*} V_1=\begin{pmatrix} \mbox{sn}(\alpha+\beta)\\ \mbox{cn}(\alpha+\beta)\\ \mbox{dn}(\alpha+\beta)\\ 1 \end{pmatrix} \end{equation*}
(19) \begin{equation*} V_2=\begin{pmatrix} \mbox{sn}^2(\alpha)-\mbox{sn}^2(\beta)\\ \mbox{sn}(\alpha)\mbox{cn}(\alpha)\mbox{dn}(\beta)-\mbox{sn}(\beta)\mbox{cn}(\beta)\mbox{dn}(\alpha)\\ \mbox{sn}(\alpha)\mbox{cn}(\beta)\mbox{dn}(\alpha)-\mbox{sn}(\beta)\mbox{cn}(\alpha)\mbox{dn}(\beta)\\ \mbox{sn}(\alpha)\mbox{cn}(\beta)\mbox{dn}(\beta)-\mbox{sn}(\beta)\mbox{cn}(\alpha)\mbox{dn}(\alpha) \end{pmatrix} \end{equation*}
(20) \begin{equation*} V_3=\begin{pmatrix} \mbox{sn}(\alpha)\mbox{cn}(\alpha)\mbox{dn}(\beta)+\mbox{sn}(\beta)\mbox{cn}(\beta)\mbox{dn}(\alpha)\\ \mbox{cn}^2(\beta)\mbox{dn}^2(\alpha)-(1-k^2)\mbox{sn}^2(\alpha)\\ \mbox{cn}(\alpha)\mbox{cn}(\beta)\mbox{dn}(\alpha)\mbox{dn}(\beta)-(1-k^2)\mbox{sn}(\alpha)\mbox{sn}(\beta)\\ \mbox{cn}(\alpha)\mbox{cn}(\beta)+\mbox{sn}(\alpha)\mbox{sn}(\beta)\mbox{dn}(\alpha)\mbox{dn}(\beta) \end{pmatrix} \end{equation*}
(21) \begin{equation*} V_4=\begin{pmatrix} \mbox{sn}(\alpha)\mbox{cn}(\beta)\mbox{dn}(\alpha)+\mbox{sn}(\beta)\mbox{cn}(\alpha)\mbox{dn}(\beta)\\ \mbox{cn}(\alpha)\mbox{cn}(\beta)\mbox{dn}(\alpha)\mbox{dn}(\beta)-(1-k^2)\mbox{sn}(\alpha)\mbox{sn}(\beta)\\ (1-k^2)\mbox{sn}^2(\beta)+\mbox{cn}^2(\beta)\mbox{dn}^2(\alpha)\\ \mbox{dn}(\alpha)\mbox{dn}(\beta)+k^2\mbox{sn}(\alpha)\mbox{sn}(\beta)\mbox{cn}(\alpha)\mbox{cn}(\beta) \end{pmatrix} \end{equation*}
Du Val also grouped the vectors V_2 to V_5 in a compact matrix \mathcal{A} invented by Glaisher in 1881:
\[ \begin{pmatrix} s_1^2-s_2^2 & s_1c_1d_2+s_2c_2d_1 & s_1c_2d_1+s_2c_1d_2 & s_1c_2d_2+s_2c_1d_1\\ s_1c_1d_2-s_2c_2d_1 & c_2^2d_1^2-(1-k^2)s_1^2 & c_1c_2d_1d_2-(1-k^2)s_1s_2 & c_1c_2-s_1s_2d_1d_2\\ s_1c_2d_1-s_2c_1d_2 & c_1c_2d_1d_2+(1-k^2)s_1s_2 & (1-k^2)s_2^2+c_2^2d_1^2 & d_1d_2-k^2s_1s_2c_1c_2\\ s_1c_2d_2-s_2c_1d_1 & c_1c_2+s_1s_2d_1d_2 & d_1d_2+k^2s_1s_2c_1c_2 & 1-k^2s_1^2s_2^2 \end{pmatrix} \]
This matrix has a very interesting symmetry \mathcal{A}^T(\alpha,\beta)=\mathcal{A}(\alpha,-\beta). You can also define the antisymmetric tensor F_{jk}=a_jb_k-a_kb_j from any vector pair a_i, b_j. In fact, you can prove that the tensor
(23) \begin{equation*} F_{kl}=m\varepsilon_{klpq}\mathcal{A}_{pq} \end{equation*}
where m equals 1, k^2 or 1-k^2, and \varepsilon is the Levi-Civita tensor; this holds as an identity between the matrix \mathcal{A} and the splitting of the quartet of vectors into two couples. It rocks!
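If you want to convince yourself numerically, here is a minimal sketch (an arbitrary sample point and parameter m=k^2=0.7 are assumed) checking the sn addition theorem, eq. (1), with scipy:

```python
import numpy as np
from scipy.special import ellipj

m = 0.7                      # parameter m = k^2, assumed for the test
a, b = 0.43, 1.12            # arbitrary arguments
sn_a, cn_a, dn_a, _ = ellipj(a, m)
sn_b, cn_b, dn_b, _ = ellipj(b, m)
sn_ab = ellipj(a + b, m)[0]

# Right-hand side of the sn addition theorem, eq. (1)
rhs = (sn_a*cn_b*dn_b + sn_b*cn_a*dn_a)/(1 - m*sn_a**2*sn_b**2)
print(sn_ab, rhs)            # agree to machine precision
```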
Now, a refresher of classical mechanics. First order hamiltonian mechanics reads
(24) \begin{equation*} \begin{pmatrix} \dot{q}\\ \dot{p} \end{pmatrix}=\begin{pmatrix} 0 & +1\\ -1 & 0\end{pmatrix}\begin{pmatrix}\dfrac{\partial H}{\partial q}\\ \dfrac{\partial H}{\partial p}\end{pmatrix} \end{equation*}
From these equations, you get the celebrated Hamilton equations
(25) \begin{equation*} \dot{p}^i=-\dfrac{\partial H}{\partial q_i} \end{equation*}
(26) \begin{equation*} \dot{q}^i=+\dfrac{\partial H}{\partial p_i} \end{equation*}
Strikingly similar to F_i=-\nabla_i \varphi, or \dot{p}^i=-\nabla_i U. First order lagrangian theory provides
(27) \begin{equation*} \dfrac{\partial L}{\partial q}-\dfrac{d}{dt}\left(\dfrac{\partial L}{\partial \dot q}\right)=0 \end{equation*}
Also, it mimics classical newtonian mechanics if you allow
(28) \begin{equation*} \dfrac{\partial L}{\partial q}=-\dfrac{d}{dt}p \end{equation*}
There is a relation between the lagrangian L and the hamiltonian function H via Legendre transformations:
(29) \begin{equation*} H(q,p,t)=\sum_i p_i\dot{q}_i-L \end{equation*}
where the generalized momentum is
(30) \begin{equation*} p_i=\dfrac{\partial L}{\partial \dot{q}_i} \end{equation*}
There is also routhian mechanics, by Routh, where you have (n+s) degrees of freedom, n of them (the q_i) treated in hamiltonian fashion and s of them (the \zeta_j) in lagrangian fashion, such that
(31) \begin{align*} R=R(q,\zeta, p,\dot{\zeta},t)=p_i\dot{q}_i-L(q,\zeta,p,\dot{\zeta},t)\\ \dot{q}_i=\dfrac{\partial R}{\partial p_i}\\ \dot{p}_i=-\dfrac{\partial R}{\partial q_i}\\ \dfrac{d}{dt}\left(\dfrac{\partial R}{\partial \dot{\zeta}_j}\right)=\dfrac{\partial R}{\partial \zeta_j} \end{align*}
and where there are 2n routhian-hamiltonian equations and s routhian-lagrangian equations. The routhian energy reads off easily
(32) \begin{equation*} E_R=R-\sum_j^s\dot{\zeta}_j\dfrac{\partial R}{\partial\dot{\zeta}_j} \end{equation*}
(33) \begin{equation*} \dfrac{\partial R}{\partial t}=\dfrac{d}{dt}\left(R-\sum_j^s \dot{\zeta}_j\dfrac{\partial R}{\partial\dot{\zeta}_j}\right) \end{equation*}
Finally, the mysterious Nambu mechanics. Yoichiro Nambu, trying to generalize quantum mechanics and Poisson brackets, introduced the triplet mechanics (and, by generalization, the N-tuplet mechanics) with two hamiltonians H, G as follows. For a single triplet (x,y,z):
(34) \begin{equation*} \dot{f}=\dfrac{\partial(f,G,H)}{\partial(x,y,z)}+\dfrac{\partial f }{\partial t} \end{equation*}
and for several triplets
(35) \begin{equation*} \dot{f}=\displaystyle{\sum_{a=1}^N\dfrac{\partial(f,G,H)}{\partial(x_a,y_a,z_a)}+\dfrac{\partial f }{\partial t}} \end{equation*}
and where f=f(r_1,r_2,\cdots,r_N,t). Sometimes it is written as \dot{\vec{r}}=\nabla G\times \nabla H. In the case of N-n-plets, you have
(36) \begin{equation*} \tcboxmath{\dfrac{df}{dt}=\dot{f}=\{f,H_1,H_2,\ldots,H_{N-1}\}} \end{equation*}
and also you get an invariant form for the triplet Nambu mechanics
(37) \begin{equation*} \omega_3=dx_1^1\wedge dx_1^2\wedge dx_1^3+\cdots+dx_N^1\wedge dx_N^2\wedge dx_N^3 \end{equation*}
This 3-form is the 3-plet analogue of the symplectic 2-form
(38) \begin{equation*} \omega_2=\displaystyle{\sum_i dq_i\wedge dp_i} \end{equation*}
The analogue for N-n-plets can be easily derived:
(39) \begin{equation*} \tcboxmath{\omega_n=\displaystyle{\sum_j dx^1_j\wedge dx^2_j\wedge\cdots\wedge dx^n_j}} \end{equation*}
The quantization of Nambu mechanics is a mystery, not to mention its meaning or main applications. However, Nambu dynamics provides useful ways to solve some hard problems, turning them into superintegrable systems.
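A minimal sketch of a Nambu flow in action (the standard free rigid body example; the moments of inertia and initial data here are assumed): Euler's equations are exactly \dot{\vec{L}}=\nabla G\times\nabla H with G=\vert\vec{L}\vert^2/2 and H=\sum_i L_i^2/2I_i, and both "hamiltonians" are conserved, up to the drift of the crude integrator used below.

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])            # principal moments of inertia, assumed

def nambu_rhs(L):
    gradG = L                            # gradient of G = |L|^2/2
    gradH = L/I                          # gradient of H = sum L_i^2/(2 I_i)
    return np.cross(gradG, gradH)        # Euler equations: dL/dt = L x omega

L, dt = np.array([1.0, 0.2, 0.1]), 1e-3
for _ in range(10_000):                  # crude explicit Euler step, demo only
    L = L + dt*nambu_rhs(L)
print(0.5*L@L, 0.5*L@(L/I))              # G and H stay (nearly) constant
```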
See you in other blog post!
LOG#242. Hyperbolic magic.
Beta-gamma fusion! Live dimension!
Do you like magic? Mathemagic and hyperbolic magic today. A master of magic creates an “illusion”. In special relativity, you can simplify calculations using hyperbolic trigonometry!
(1) \begin{equation*} E=Mc^2=m\gamma c^2=\dfrac{mc^2}{\sqrt{1-\beta^2}} \end{equation*}
(2) \begin{equation*} p=Mv=m\gamma v \end{equation*}
are common relativistic equations. Introduce now:
(3) \begin{equation*} \tcboxmath{\beta=\dfrac{v}{c}=\tanh\varphi}\;\; 0\leq \beta<1, -\infty<\varphi<\infty \end{equation*}
as the rapidity. Then:
(4) \begin{equation*} \gamma=\dfrac{E}{mc^2}=\dfrac{1}{\sqrt{1-\beta^2}}=\dfrac{1}{\sqrt{1-\tanh^2\varphi}}=\sqrt{\dfrac{\cosh^2\varphi}{\cosh^2\varphi-\sinh^2\varphi}}=\cosh\varphi \end{equation*}
(5) \begin{equation*} \tcboxmath{\gamma=\cosh\varphi}\;\;\; \gamma\geq 1, -\infty<\varphi<\infty \end{equation*}
Similarly, you get that
(6) \begin{equation*} p=m\gamma v=mc\beta\gamma=mc\tanh\varphi\cosh\varphi=mc\sinh\varphi \end{equation*}
and thus
(7) \begin{equation*} \tcboxmath{p=mc\sinh\varphi}\;\;\; -\infty<\varphi<\infty \end{equation*}
Also, you can write
(8) \begin{equation*} \tcboxmath{\tanh\varphi=\dfrac{pc}{E}}\;\; 0\leq \beta<1, -\infty<\varphi<\infty \end{equation*}
(9) \begin{equation*} \tcboxmath{\dfrac{vE}{mc^3}=\beta\gamma}\;\;\; 0\leq \beta<1, -\infty<\varphi<\infty, \gamma\geq 1 \end{equation*}
(10) \begin{equation*} \tcboxmath{\beta=\dfrac{pc}{E}} \;\; \;\; 0\leq \beta<1, -\infty<\varphi<\infty \end{equation*}
The above equations can be inverted, and it yields
(11) \begin{equation*} \tcboxmath{\beta=\dfrac{v}{c}=\tanh\varphi=\tanh\left(\sinh^{-1}\left(\dfrac{p}{mc}\right)\right)} \end{equation*}
(12) \begin{equation*} \tcboxmath{\beta=\dfrac{v}{c}=\tanh\left(\cosh^{-1}\left(\gamma\right)\right)=\tanh\left(\cosh^{-1}\left(\dfrac{E}{mc^2}\right)\right)=\sqrt{1-\left(\dfrac{mc^2}{E}\right)^2}} \end{equation*}
(13) \begin{equation*} \tcboxmath{\gamma=\dfrac{1}{\sqrt{1-\beta^2}}=\cosh\left(\tanh^{-1}\left(\beta\right)\right)} \end{equation*}
(14) \begin{equation*} \tcboxmath{\gamma=\dfrac{1}{\sqrt{1-\beta^2}}=\cosh\left(\tanh^{-1}\left(\dfrac{pc}{E}\right)\right)} \end{equation*}
(15) \begin{equation*} \tcboxmath{\varphi=\tanh^{-1}\left(\beta\right)=\tanh^{-1}\left(\dfrac{pc}{E}\right)} \end{equation*}
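To see the whole dictionary in action, here is a minimal numerical sketch (\beta=0.8 and units with m=c=1 are assumed):

```python
import numpy as np

beta  = 0.8
phi   = np.arctanh(beta)               # rapidity, eq. (3)
gamma = np.cosh(phi)                   # eq. (5)
p, E  = np.sinh(phi), np.cosh(phi)     # eq. (7) and E = gamma (m = c = 1)
print(gamma, 1/np.sqrt(1 - beta**2))   # 1.6667 1.6667
print(np.tanh(phi), p/E)               # 0.8 0.8, eq. (8)
```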
Hyperbolic functions also simplify the Lorentz transformations into a more symmetric form! Consider the spacetime interval:
(16) \begin{equation*} s^2=x^\mu x_\mu=x^2-(ct)^2=x^2+(ict)^2 \end{equation*}
and a rotation matrix
(17) \begin{equation*} R(\theta)^T=R^{-1}=\begin{pmatrix}\cos \theta & \sin\theta\\ -\sin\theta & \cos\theta\end{pmatrix} \end{equation*}
Now, make a rotation of imaginary angle i\theta and apply it to the vector X=(x,ict):
(18) \begin{equation*} \begin{pmatrix} x'\\ ict'\end{pmatrix} =\begin{pmatrix} \cos i\theta & \sin i\theta\\ -\sin i\theta & \cos i\theta\end{pmatrix}\begin{pmatrix} x\\ ict\end{pmatrix} \end{equation*}
(19) \begin{equation*} \begin{pmatrix} x'\\ ict'\end{pmatrix} =\begin{pmatrix} \cosh\theta & i\sinh \theta\\ -i\sinh \theta & \cosh\theta\end{pmatrix}\begin{pmatrix} x\\ ict\end{pmatrix} \end{equation*}
(20) \begin{equation*} \begin{pmatrix} x'\\ ict'\end{pmatrix}=\begin{pmatrix}\gamma & i\beta\gamma\\ -i\beta\gamma & \gamma\end{pmatrix}\begin{pmatrix} x\\ ict\end{pmatrix}=\begin{pmatrix} \gamma x -\beta\gamma ct\\ i\left(-\beta\gamma x+\gamma ct\right)\end{pmatrix} \end{equation*}
and thus
(21) \begin{equation*} \begin{pmatrix}x'\\ ct'\end{pmatrix}=\begin{pmatrix} \gamma & -\beta\gamma\\ -\beta\gamma & \gamma\end{pmatrix}\begin{pmatrix}x\\ ct\end{pmatrix} \end{equation*}
That is the Lorentz transformation! A Lorentz transformation is just a rotation matrix of an imaginary angle with imaginary time! But you can give up imaginary numbers using hyperbolic functions! Indeed, L(\varphi)=L^T for Lorentz transformations, while R(\theta)=(R^{-1})^T for rotations.
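One more hyperbolic bonus you can check in a few lines (sample rapidities assumed): rapidities simply add, i.e., two successive boosts equal a single boost with \varphi_1+\varphi_2, the hyperbolic counterpart of adding rotation angles.

```python
import numpy as np

def boost(phi):                          # (x, ct) boost at rapidity phi
    return np.array([[np.cosh(phi), -np.sinh(phi)],
                     [-np.sinh(phi), np.cosh(phi)]])

phi1, phi2 = 0.5, 1.2                    # sample rapidities, assumed
print(np.allclose(boost(phi1) @ boost(phi2), boost(phi1 + phi2)))   # True
```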
Finally, something about particle spin and “rotations”, secretly related to Lorentz transformations of spinors. Spin zero particles are the same irrespective of how you see them: if you turn them by 0 radians, spin zero particles remain invariant. Vector spin one particles like A_\mu are the same if you turn them 360^\circ=2\pi \;rad. Tensor spin two particles like h_{\mu\nu} are the same if you rotate them by 180^\circ=\pi \; rad. Now, the weird stuff: electrons and other spin one-half fermions are the same only if you rotate them… 720^\circ=4\pi\;rad!!! They see a larger world than the one we observe! The hypothetical gravitino field remains invariant only when you twist it by 240^\circ=4\pi/3\; rad. You can also iterate the argument for higher spin particles. You could even consider the case of infinite (continuous) spin.
Remark(I): in natural units with c=\hbar=k_B=1 you can prove that
\[1kg=5.61\cdot 10^{26}GeV\]
\[1K=8.617\cdot 10^{-14}GeV\]
\[1m=5.07\cdot 10^{15}GeV^{-1}\]
Remark(II): in natural units with G_N=c=1 you also get
\[1kg=7.42\cdot 10^{-28}m\]
\[1kg=1.67 ZeV=8.46\cdot 10^{27}m^{-1}\]
Now, perhaps you have time for a little BIG RIP in de Sitter spacetime with phantom energy \omega<-1
(22) \begin{equation*} \tcboxmath{T_{BRip}=-\dfrac{2}{3(1+\omega)H_0\sqrt{1-\Omega_{m,0}}}} \end{equation*}
Perhaps, now you face the proton decay crisis of your life due to pandemic, any time?
Challenges for you:
Challenge 1. Some recent reviews of proton decay in higher dimensional models derive the estimate
\[\tau_{proton}\sim\left(\dfrac{M_P}{M_{proton}}\right)^D\dfrac{1}{M_{proton}}\]
For D=4, it yields about \tau\sim 10^{52}\;s\sim 10^{45}\;yrs.
However, Hawking derived a similar but not identical estimate
\[\tau_{proton}\sim\left(\dfrac{M_P}{M_{proton}}\right)^8\dfrac{1}{M_{proton}}\sim 10^{120}yrs\]
using processes with virtual black holes and spacetime foam. I want to understand these formulae better, so I need to understand the origin of the powers and the absence (or the presence, if a generalized GUT/TOE arises) of gauge couplings. In short (a numerical check of the D=4 estimate is sketched after the questions):
1) What is the reason of the D-dependence in the first formula and the 8th power in the second formula?
2) Should the proton decay depend as well and in which conditions of gauge (or GUT,TOE) generalized couplings too?
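For the D=4 case, a two-line numerical sketch (standard mass values assumed) reproduces the quoted order of magnitude:

```python
M_P, m_p = 1.22e19, 0.938      # GeV: Planck and proton masses
hbar = 6.582e-25               # GeV*s

tau = (M_P/m_p)**4 * hbar/m_p  # the D = 4 estimate, converted to seconds
print(f"{tau:.1e} s ~ {tau/3.156e7:.1e} yr")   # ~2e52 s ~ 6e44 yr
```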
Challenge 2.
Derive the formulae
Finally, string theory… To crush you even more… The gravitational constant is just derived from the string coupling and the dilaton field in superstring theories. The recipe is
(23) \begin{equation*} \langle e^\phi\rangle =e^{\phi_\infty} \end{equation*}
such as
(24) \begin{equation*} g_s(d)=\langle e^\phi\rangle_0 \end{equation*}
Define \alpha'=L_s^2 (the Regge slope, inverse of the string tension) and the string length L_s=\sqrt{\alpha'}. Then, in a 10d Universe
(25) \begin{equation*} \tcboxmath{G_N(10d)=8\pi^6g_s^2\left(\alpha'\right)^4=8\pi^6 g_s^2 L_s^8} \end{equation*}
Furthermore, with n compactified dimensions, you get
(26) \begin{equation*} \tcboxmath{G_N(10d)=G_N(n)V_{10-n}} \end{equation*}
(27) \begin{equation*} \tcboxmath{g_s^2(10d)=\dfrac{V_{10-n}\,\left(g_s^{(n)}\right)^2}{(2\pi L_s)^{10-n}}} \end{equation*}
In summary, you can obtain
(28) \begin{equation*} \tcboxmath{\dfrac{g_s^2(2\pi L_s)^{10-n}}{16\pi G_N(10d)}=\dfrac{g_s^2(n)}{16\pi G_N(n)}} \end{equation*}
Have I punched hard?
See you in another blog post dimension!
LOG#241. Flatland & Fracland.
Flatland is a well-known popular story and book. I am going to review the Bohr model in Flatland and then I am going to visit strange fractional (or fractal) dimensions, i.e., we are going to travel to Fracland via Bohrlogy today as well.
Case 1. Electric flatland and Bohrlogy.
(1) \begin{equation*} F_c(2d)=K_c(2d)\dfrac{e^2}{r} \end{equation*}
Suppose that
(2) \begin{equation*} E_p(2d)=K_c(2d)e^2\ln\left(\dfrac{r}{a_0}\right) \end{equation*}
Then, we have
(3) \begin{equation*} m\dfrac{v^2}{r}=K_c\dfrac{e^2}{r} \end{equation*}
and thus
(4) \begin{equation*} v=\sqrt{\dfrac{K_c}{m}}e \end{equation*}
Moreover, imposing Bohr quantization rule L=pr=mvr=n\hbar, then you get
(5) \begin{equation*} r=\dfrac{n\hbar}{mv} \end{equation*}
(6) \begin{equation*} \tcboxmath{r_n=na_0=n\dfrac{\hbar}{e\sqrt{mK_c}}} \end{equation*}
Total energy becomes
(7) \begin{equation*} E=E_c+E_p=E_m=\dfrac{1}{2}mv^2+K_ce^2\ln\left(\dfrac{r_n}{a_0}\right) \end{equation*}
(8) \begin{equation*} \tcboxmath{E_m=E_0\left(\dfrac{1}{2}+\ln n\right)-E_0\ln\left(\dfrac{\hbar}{e\sqrt{mK_c}}\right)} \end{equation*}
where E_0=K_ce^2.
Case 2. Gravitational flatland and Bohrlogy.
(9) \begin{equation*} F_N(2d)=G_N(2d)\dfrac{m^2}{r} \end{equation*}
Suppose that
(10) \begin{equation*} E_p(2d)=G_N(2d)m^2\ln\left(\dfrac{r}{a_0}\right) \end{equation*}
Then, we have
(11) \begin{equation*} m\dfrac{v^2}{r}=G_N(2d)\dfrac{m^2}{r} \end{equation*}
and thus
(12) \begin{equation*} v=\sqrt{G_Nm} \end{equation*}
(14) \begin{equation*} \tcboxmath{r_n=na_0=n\dfrac{\hbar}{m\sqrt{mG_N}}} \end{equation*}
Total energy becomes (up to an additive constant)
(15) \begin{equation*} E=E_c+E_p=E_m=\dfrac{1}{2}mv^2+G_Nm^2\ln\left(\dfrac{r}{a_0}\right) \end{equation*}
(16) \begin{equation*} \tcboxmath{E_m=E_0\left(\dfrac{1}{2}+\ln n\right)-E_0\ln\left(\dfrac{\hbar}{m\sqrt{G_Nm}}\right)} \end{equation*}
where E_0=G_Nm^2.
Exercise: Gravatoms.
Suppose a parallel Universe A where electrons were neutral particles and no electric charges existed. In such a Universe, 2 electrons, or any electron and a proton, would form a gravitational bound state called a gravatom (gravitational atom for short). The force potential would be V_g=Gm^2/r. We could suppose that the electron mass and G_N are the same as those in our Universe. (A rough numerical sketch for parts a) and b) is given right after the exercise.)
a) Calculate the ratio between the gravitational potential and the electric potential in our universe. Comment on the results. 1 point.
b) Compute the analogue of the Bohr radius in the gravatom. Comment on the result. 1 point.
c) Compute the analogue of Rydberg constant for the gravatom. Is it large or small compared with the usual Rydberg constant? 1 point.
d) Compute the period of the electron in the lowest energy level. Compare it with the age of our Universe. 1 point.
e) Imagine a parallel Universe B where the electrons were indeed supermassive. The higher the electron mass, the smaller the size of the gravatom. If the mass is big enough, the size of the gravatom becomes smaller than the Compton wavelength of a free electron, which measures the size of the irreducible wave function of the electron. In that limit, there is no free electron but a bound state of a black hole. Compute the critical mass for the cross-over. Compare that scale to a human lifetime. 1 point.
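A rough numerical sketch for parts a) and b) (standard constants; the reduced-mass factor is ignored, so take it as an order-of-magnitude guide only):

```python
hbar, G, k_C = 1.055e-34, 6.674e-11, 8.988e9   # SI
m_e, e = 9.109e-31, 1.602e-19

ratio  = G*m_e**2/(k_C*e**2)    # part a): gravity vs electricity for 2 electrons
a_grav = hbar**2/(G*m_e**3)     # part b): gravitational Bohr radius analogue
print(f"a) F_g/F_e ~ {ratio:.1e}")       # ~2.4e-43
print(f"b) a_grav ~ {a_grav:.1e} m")     # ~2e32 m, larger than the visible Universe
```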
Case 3. Welcome to Fracland, land of fractional Bohrlogy.
3A. Fractional H-atom.
Consider the potential energy
(17) \begin{equation*} U(r)=-\dfrac{Ze^2}{r} \end{equation*}
and the hamiltonian
(18) \begin{equation*} H_\alpha=D_\alpha\left(-\hbar^2\Delta\right)^{\alpha/2}=D_\alpha\left(\hbar\sqrt{-\Delta}\right)^\alpha \end{equation*}
where, in principle, we allow only 1<\alpha\leq 2, but a suitable analytic continuation could be feasible somehow. Then, \alpha\overline{E_k}=-\overline{U} and pr_n=n\hbar provide
(19) \begin{equation*} \omega(n_1\rightarrow n_2)=\dfrac{E_2-E_1}{\hbar} \end{equation*}
such as
(20) \begin{equation*} \alpha D_\alpha\left(\dfrac{n\hbar}{r_n}\right)^\alpha=\dfrac{K_CZe^2}{r_n} \end{equation*}
And, finally, you get the radii and energy levels for the fractional H-atom as follows
(21) \begin{equation*} \tcboxmath{r_n=a_0 n^{\frac{\alpha}{\alpha-1}}\;\;\; a_0=\left(\dfrac{\alpha D_\alpha \hbar^\alpha}{Ze^2K_C}\right)^{\frac{1}{\alpha-1}}} \end{equation*}
(22) \begin{equation*} \tcboxmath{E_n= (1-\alpha)\overline{E_k}}\;\;\; \tcboxmath{E_n=\left(1-\alpha\right)E_0\, n^{-\frac{\alpha}{\alpha-1}}} \end{equation*}
(23) \begin{equation*} \tcboxmath{\omega_n(\alpha)=\dfrac{(1-\alpha)E_0}{\hbar}\left(\dfrac{1}{n_1^{\frac{\alpha}{\alpha-1}}}-\dfrac{1}{n_2^{\frac{\alpha}{\alpha-1}}}\right)} \end{equation*}
and where now
(24) \begin{equation*} \tcboxmath{E_0=\left(\dfrac{Z^{\alpha}\left(K_Ce^2\right)^\alpha}{\alpha^\alpha D_\alpha \hbar^\alpha}\right)^{\frac{1}{\alpha-1}}} \end{equation*}
Note that E_k=D_\alpha p^\alpha. The generalized virial theorem gives \alpha\overline{E_k}=n\overline{U} for a potential U\propto r^n; for the standard kinetic term \alpha=2 this reduces to the familiar \overline{E_k}=(n/2)\overline{U}.
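For readers who want to play with eqs. (21)–(24), here is a small Python helper in reduced units (\hbar = K_C e^2 = 1); the choice D_\alpha = 1/2 is an assumption that recovers the ordinary \alpha = 2 hydrogen spectrum as a sanity check:

```python
import numpy as np

def fractional_hydrogen(n, alpha, Z=1.0, D_alpha=0.5, hbar=1.0, Ke2=1.0):
    """Radii and energies of the 'fractional H-atom', eqs. (21)-(24),
    in reduced units (hbar = K_C e^2 = 1; D_alpha = 1/2 recovers alpha=2 QM)."""
    p = alpha / (alpha - 1.0)
    a0 = (alpha * D_alpha * hbar**alpha / (Z * Ke2)) ** (1.0 / (alpha - 1.0))
    E0 = ((Z * Ke2) ** alpha / (alpha**alpha * D_alpha * hbar**alpha)) ** (1.0 / (alpha - 1.0))
    r_n = a0 * n**p                      # eq. (21)
    E_n = (1.0 - alpha) * E0 * n**(-p)   # eq. (22)
    return r_n, E_n

# Sanity check: alpha = 2 with D_alpha = 1/(2m), m = 1 gives Bohr's E_n = -1/(2 n^2)
for n in (1, 2, 3):
    print(n, fractional_hydrogen(n, alpha=2.0))
```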
3B. Fractional harmonic oscillator (in 3d).
Consider now
(25) \begin{equation*} H(\alpha,\beta)=D_\alpha(-\hbar^2\Delta)^{\alpha/2}+q^2 r^\beta \end{equation*}
In the case \alpha=\beta you get
(26) \begin{equation*} H_\alpha=D_\alpha(-\hbar^2\Delta)^{\alpha/2}+q^2r^\alpha \end{equation*}
For a single degree of freedom, i.e., in spatial dimension D=1, you can write
(27) \begin{equation*} E=D_\alpha p^\alpha+q^2x^\beta \end{equation*}
The energy levels can be calculated
(28) \begin{equation*} \tcboxmath{E_n=\left(\dfrac{\pi \hbar \beta D_\alpha^{1/\alpha} q^{2/\beta}}{2B\left(\frac{1}{\beta},\frac{1}{\alpha}+1\right)}\right)^{\frac{\alpha\beta}{\alpha+\beta}}\left(n+\dfrac{1}{2}\right)^{\frac{\alpha\beta}{\alpha+\beta}}} \end{equation*}
and where B(x,y) is the beta function. Remarkably, only the standard QM simple HO has an equidistant energy spectrum!
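A hedged numerical check of eq. (28) using scipy's beta function; note the prefactor is raised to the same power \alpha\beta/(\alpha+\beta) as (n+1/2), which is what dimensional analysis requires. The parameter values are illustrative assumptions (D_\alpha = q^2 = 1/2 reproduces the unit harmonic oscillator):

```python
import numpy as np
from scipy.special import beta as B

def E_fractional_oscillator(n, alpha, beta_, D_alpha=0.5, q2=0.5, hbar=1.0):
    """Semiclassical levels of H = D_a p^alpha + q^2 x^beta, eq. (28)."""
    expo = alpha * beta_ / (alpha + beta_)
    pref = (np.pi * hbar * beta_ * D_alpha**(1/alpha) * q2**(1/beta_)
            / (2 * B(1/beta_, 1/alpha + 1)))
    return pref**expo * (n + 0.5)**expo

# alpha = beta = 2, D = 1/2, q^2 = 1/2 is the unit harmonic oscillator:
print([round(E_fractional_oscillator(n, 2, 2), 3) for n in range(4)])   # ~[0.5, 1.5, 2.5, 3.5]
# A fractional case: the spacing is no longer equidistant
print([round(E_fractional_oscillator(n, 1.5, 1.5), 3) for n in range(4)])
```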
3C. Fractional infinite potential well.
Let the potential be
(29) \begin{equation*} V(x)=\begin{cases}+\infty, & x<-a\\ 0, & -a\leq x\leq a\\ +\infty, & x>a\end{cases} \end{equation*}
Then, the energy spectrum becomes
(30) \begin{equation*} \tcboxmath{E_n=D_\alpha\left(\dfrac{\pi\hbar}{2a}\right)^\alpha n^\alpha} \end{equation*}
The ground-state (n=1) energy is
(31) \begin{equation*} \tcboxmath{E_1=D_\alpha\left(\dfrac{\pi\hbar}{2a}\right)^\alpha} \end{equation*}
3D. Delta potential well.
Consider 1<\alpha\leq 2, and the \delta-function potential V(x)=-\gamma\delta(x), with \gamma>0. The energy spectrum is, for the bound state,
(32) \begin{equation*} \tcboxmath{E=-\left[\dfrac{\gamma B\left(\frac{1}{\alpha},1-\frac{1}{\alpha}\right)}{\pi \hbar \alpha D_\alpha^{1/\alpha}}\right]^{\frac{\alpha}{\alpha-1}}} \end{equation*}
3E. Fractional linear potential.
Consider the potential
(33) \begin{equation*} V(x)=\begin{cases} Fx, x\geq 0, F>0\\ \infty, x<0\end{cases} \end{equation*}
The energy spectrum will be
(34) \begin{equation*} \tcboxmath{E_n=\lambda_n F\hbar \left(\dfrac{D_\alpha}{\left(\alpha + 1\right) F\hbar}\right)^{\frac{1}{\alpha + 1}}} \end{equation*}
and where \lambda_n are the solutions to a certain transcendental equation, with 1<\alpha\leq 2.
Hidden connection with the riemannium. Some time ago, I posted this question on Physics Stack Exchange: https://physics.stackexchange.com/questions/60991/mysterious-spectra
Thus, fractional H-atoms and oscillators, with enough care, can also be seen as riemannium-like.
See you in another blog post dimension!
LOG#240. (Super)Dimensions.
Hi, everyone! Sorry for the delay! I have returned. Even in this weird pandemic world… I have to survive. Before today's blog post, some news:
1. Changes are coming to this blog. When I post the special 250th post, the format and maybe the framework will change. I am planning to post directly in .pdf format, much like a true research paper.
2. I survive, even if you don't know it, as a high school teacher. Not my highest dream, but it pays the bills. If you want to help me, consider a donation.
3. Beyond donations, I am aiming to offer some extra stuff on this blog: free notes (and links) for my students and readers, plus customized versions that, of course, you would pay me for. This would help me sustain the posting, or even gain independence from my other job, which takes away time I could use to post more often.
4. I will offer a science-consulting service to writers, movie makers, and other artists who wish for a more detailed, scientifically oriented guide.
5. I will send out the full bunch of my 250 blog posts as soon as possible, with customized versions if paid: 1 euro/dollar per blog post will be my price. The customized selection of blog posts will be negotiated later, but maybe I will offer 25-post packs as well, plus the editing cost. Expensive? Well, note that I had to put a lot of effort into building this site alone. I need to increase my income in these crisis times. You will be able to find the posts here for free anyway, but if you want them edited, you can help me further. If I could, I would leave my current job, since I am unhappy with it, and with COVID-19 being a teacher is a risk (if in-person classes are required, of course).
Today's topic is dimensions. Dimension is a curious concept. Fractal geometry has changed what we used to consider about dimensions, since fractals can have non-integer dimensions. Even, from a certain viewpoint, you can also consider negative dimensions, complex dimensions and higher versions thereof. With fractals, you have several generalized dimensions. First, the box-counting dimension:
(1) \begin{equation*} \tcboxmath{D_{box}=D_0=\lim_{\varepsilon\rightarrow 0}\left(\dfrac{\log N(\varepsilon)}{\log\dfrac{1}{\varepsilon}}\right)} \end{equation*}
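The box-counting dimension is easy to estimate numerically. Below is a small Python sketch (my own illustration, not from the post) that counts occupied boxes at several scales and fits the slope of \log N versus \log(1/\varepsilon); the middle-third Cantor set serves as a test case:

```python
import itertools
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate D_0 = lim log N(eps) / log(1/eps): count occupied boxes
    at several scales eps and fit the slope of log N vs log(1/eps)."""
    logN, loginv = [], []
    for eps in epsilons:
        boxes = {tuple(np.floor(p / eps).astype(int)) for p in points}
        logN.append(np.log(len(boxes)))
        loginv.append(np.log(1.0 / eps))
    slope, _ = np.polyfit(loginv, logN, 1)
    return slope

# Level-10 middle-third Cantor set; exact D_0 = log 2 / log 3 ~ 0.6309
pts = np.array([[sum(d * 3.0 ** -(k + 1) for k, d in enumerate(digits))]
                for digits in itertools.product([0, 2], repeat=10)])
print(box_counting_dimension(pts, [3.0 ** -k for k in range(2, 7)]))
```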
Next, the information dimension:
(2) \begin{equation*} \tcboxmath{ D_1=\lim_{\varepsilon\rightarrow 0}\left[-\dfrac{\log p_\varepsilon}{\log\dfrac{1}{\varepsilon}}\right]} \end{equation*}
The generalized Rényi dimensions come next:
(3) \begin{equation*} \tcboxmath{D_\alpha=\lim_{\varepsilon\rightarrow 0}\dfrac{\dfrac{1}{\alpha-1}\log \sum p_i^\alpha}{\log\varepsilon}} \end{equation*}
Now, we can also define the Higuchi dimension:
(4) \begin{equation*} \tcboxmath{ D_h=-\dfrac{d \log L(k)}{d\log k}} \end{equation*}
Of course, you also have the celebrated Hausdorff dimension
(5) \begin{equation*} \tcboxmath{\mbox{dim}_H(X)=\mbox{inf}\left\{d\geq 0: C_H^d(X)=0\right\}} \end{equation*}
In manifold theories, you can also define the codimension:
(6) \begin{equation*} \tcboxmath{\mbox{codim}(W)=\mbox{dim}(V)-\mbox{dim}(W)=\mbox{dim}\left(\dfrac{V}{W}\right)} \end{equation*}
if W is a submanifold W\subseteq V. Also, if N is a submanifold in M, you also have
(7) \begin{equation*} \mbox{codim}(N)=\mbox{dim}(M)-\mbox{dim}(N) \end{equation*}
such as
(8) \begin{equation*} \tcboxmath{\mbox{codim}(W)=\mbox{codim}\left(\dfrac{V}{W}\right)=\mbox{dim}\left(\mbox{coker}(W\rightarrow V)\right)} \end{equation*}
Finally, superdimensions! In superspace (I will not go into superhyperspaces today!), you have local coordinates
(9) \begin{equation*} \tcboxmath{X=(x,\Xi)=(x^\mu, \Xi^\alpha)=(x^\mu, \theta,\overline{\theta})} \end{equation*}
where \mu=0,1,2,\ldots, n-1 and \alpha=1,2,\ldots,\nu. Generally, \nu=2m, so the superdimension is the pair (n,\nu)=(n,2m). In C-spaces (Clifford spaces) you have the expansion in local coordinates:
(10) \begin{equation*} \tcboxmath{X=X^A\gamma_A=\left(\tau, X^\mu,X^{\mu_1\mu_2},\ldots,X^{\mu_1\dots\mu_D}\right)} \end{equation*}
and if you go into C-superspaces, you will also get
(11) \begin{equation*} \tcboxmath{Z=Z^W\Gamma_W=(X^A; \Xi^\Omega)=\left(\tau,X^\mu,X^{\mu_1\mu_2},\ldots,X^{\mu_1\dots\mu_D}; \theta, \theta^\alpha,\theta^{\alpha_1\alpha_2},\ldots,\theta^{\alpha_1\ldots\alpha_m}\right)} \end{equation*}
With superdimensions, you can also have superdimensional gauge fields and supermetric fields, at least in principle (in practice, it is hard to build up interacting field theories with higher spins at the current time). For supergauge fields, you get
(12) \begin{equation*} \tcboxmath{A=A^W\Gamma_W=(A^Z; \Xi^\Omega)=\left(\tau,A^\mu,A^{\mu_1\mu_2},\ldots,A^{\mu_1\dots\mu_D}; \Theta, \Theta^\alpha,\Theta^{\alpha_1\alpha_2},\ldots,\Theta^{\alpha_1\ldots\alpha_m}\right)} \end{equation*}
The C-space metric reads
(13) \begin{equation*} \tcboxmath{ds^2=dX_AdX^A=d\tau^2+dx^\mu dx_\mu+dx^{\mu_1\mu_2}dx_{\mu_1\mu_2}+\cdots+dx^{\mu_1\cdots \mu_D}dx_{\mu_1\cdots\mu_D}} \end{equation*}
and more elaborate formulae for C-supermetrics and C-superhypermetrics could be written down (I am not done with them yet…). The mixed types of gauge fields in C-superspaces (even C-superhyperspaces) are hard even for me. Work for another day!
Definition 1 (UR or eTHOR conjecture).
There is an unknown extended theory of relativity (eTHOR), ultimate relativity (UR), and it provides transformation rules between any type of field (scalar, spinorial, vector, tensor, vector-spinor, tensor-spinor, and general multitensor/multiform multispinor) and their full sets of symmetries. Consequences of the conjecture:
• UR involves coherent theories of higher spins AND higher derivatives, such that there is a full set of limits/bounds on the values of the n-th derivatives, even those being negative (integrals!).
• UR involves a generalized and extended version of relativity, quantum theory and the equivalence principle.
• UR provides the limits of the ultimate knowledge in the (Multi)(Uni)verse, even beyond the Planck scale.
• UR will clarify the origin of space-time, fields, quantum mechanics, QFT and the wave function collapse.
• UR will produce an explanation of M-theory and superstring theory, the theory of (D)-p-branes and the final fate of the space-time singularities, black hole information and black hole evaporation, and the whole Universe.
See you in other blog post!
P.S.: Please, if you want to help me, I wish you can either donate or buy my stuff in the near future. My shop will be launching soon,…In September I wish…
|
acb03b14cb181c6b |
Double-donor complex in vertically coupled quantum dots in a threading magnetic field
We consider a model of hydrogen-like artificial molecule formed by two vertically coupled quantum dots in the shape of axially symmetrical thin layers with on-axis single donor impurity in each of them and with the magnetic field directed along the symmetry axis. We present numerical results for energies of some low-lying levels as functions of the magnetic field applied along the symmetry axis for different quantum dot heights, radii, and separations between them. The evolution of the Aharonov-Bohm oscillations of the energy levels with the increase of the separation between dots is analyzed.
An important feature in low-dimensional systems is the electron-electron interaction, because it plays a crucial role in understanding the electrical transport properties of quantum dots (QDs) at low temperatures [1]. Such systems may involve small or large numbers of electrons as well as being confined in one or more dimensions. The number of electrons in a QD can be varied over a considerable range. It is possible to control the size and the number of electrons and to observe their spatial distributions in QDs. The energy spectrum of a two-electron QD with a parabolic confinement, for which the two-particle wave equation can be separated completely, has been analyzed previously using different methods [2–5].
In the present work, we propose another exactly solvable two-electron heterostructure in which two separated electrons are confined in vertically coupled QDs with a special lens-like morphology. Together with two on-axis donors, these two electrons generate an artificial hydrogen-like molecule whose properties can be controlled by varying the geometric parameters and the strength of the magnetic field applied along the symmetry axis.
The model which we analyze below consists of two identical, axially symmetrical and vertically coupled QDs with an on-axis donor located in each one of them (see Figure 1). The dimensions of the heterostructure are defined by the QDs' radius R, height W, and the separation d between them along the z-axis. We assume that the QDs have the shape of very thin layers whose profiles are given by the following dependence of the thickness w of the layers on the distance ρ from the axis:
(1) \begin{equation*} w(\rho)=\dfrac{W}{\sqrt{1+\left(\rho/R\right)^{2}}} \end{equation*}
Figure 1. Scheme of the artificial hydrogen-like molecule.
Besides, for the sake of simplicity, we consider a model with infinite barrier confinement, which is defined in cylindrical coordinates as V(\mathbf{r})=0 if 0<z<w(\rho), and V(\mathbf{r})=\infty otherwise.
Given that the thicknesses of the layers are much smaller than their lateral dimensions, one can take advantage of the adiabatic approximation in order to exclude from consideration the rapid particle motion along the z-axis [6, 7] and obtain the following expression for the effective Hamiltonian in polar coordinates:
(2) \begin{equation*} H=\sum_{i=1,2}H_{0}(\boldsymbol{\rho}_{i})+V(\boldsymbol{\rho}_{1},\boldsymbol{\rho}_{2})+\dfrac{2\pi^{2}}{W^{2}};\quad H_{0}(\boldsymbol{\rho}_{i})=-\Delta_{i}^{2D}+i\gamma\dfrac{\partial}{\partial\vartheta_{i}}+\dfrac{\omega^{2}\rho_{i}^{2}}{4};\quad \omega^{2}=\left(\dfrac{2\pi}{WR}\right)^{2}+\gamma^{2};\quad V(\boldsymbol{\rho}_{1},\boldsymbol{\rho}_{2})=\dfrac{2}{\sqrt{d^{2}+\left|\boldsymbol{\rho}_{1}-\boldsymbol{\rho}_{2}\right|^{2}}}-\sum_{i=1,2}\left(\dfrac{2}{\sqrt{d^{2}+\rho_{i}^{2}}}+\dfrac{2}{\rho_{i}}\right) \end{equation*}
The effective Bohr radius a_0=\hbar^{2}\epsilon/(m^{*}e^{2}) as the unit of length, the effective Rydberg Ry^{*}=e^{2}/(2\epsilon a_0)=\hbar^{2}/(2m^{*}a_0^{2}) as the energy unit, and \gamma=e\hbar B/(2m^{*}c\,Ry^{*}) as the unit of the magnetic field strength have been used in Hamiltonian (Equation 2), with m^{*} being the electron effective mass and \epsilon the dielectric constant. The polar coordinates \boldsymbol{\rho}_{k}=(\rho_{k},\vartheta_{k}), labeled by k=1,2, correspond to the first and the second electrons, respectively. It is seen that for the selected particular profile given by Equation 1, the Hamiltonian (Equation 2) coincides with the one which describes two particles in a 2D quantum dot with parabolic confinement and a renormalized interaction. It is well known that such a Hamiltonian may be separated by using the center-of-mass, \mathbf{R}=(\boldsymbol{\rho}_{1}+\boldsymbol{\rho}_{2})/2, and the relative, \boldsymbol{\rho}=\boldsymbol{\rho}_{1}-\boldsymbol{\rho}_{2}, coordinates [8]:
(3) \begin{equation*} H=H_{R}+2H_{\rho};\quad H_{R}=-\dfrac{\Delta_{R}^{2D}}{2}+\dfrac{1}{2}\omega^{2}R^{2};\quad H_{\rho}=-\Delta_{\rho}^{2D}+\dfrac{\omega^{2}\rho^{2}}{16}-\dfrac{3}{\rho}-\dfrac{4}{\sqrt{\rho^{2}+4d^{2}}} \end{equation*}
The wave function is factorized into two parts, \psi(\mathbf{R},\boldsymbol{\rho})=\Phi(\mathbf{R})\varphi(\boldsymbol{\rho}), describing the center-of-mass and the relative motions, respectively. Meanwhile, the total energy splits into two terms depending on two radial (N_{R}, n_{\rho}) and two azimuthal (L_{R}, l_{\rho}) quantum numbers:
(4) \begin{equation*} E(N_{R},L_{R};n_{\rho},l_{\rho})=E_{R}(N_{R},L_{R})+2E_{\rho}(n_{\rho},l_{\rho})=\left(2N_{R}+\left|L_{R}\right|+1\right)\omega+2E_{\rho}(n_{\rho},l_{\rho}) \end{equation*}
where the first term represents the well-known expression for the exact energy levels of a two-dimensional harmonic oscillator, labeled by the radial N_{R}=0,1,2,\ldots and azimuthal L_{R}=0,\pm 1,\pm 2,\ldots quantum numbers for the center-of-mass motion, while the relative-motion energy 2E_{\rho}(n_{\rho},l_{\rho}) must be found by solving the following one-dimensional Schrödinger equation:
(5) \begin{equation*} -u''(\rho)+V(\rho)u(\rho)=E_{\rho}(n_{\rho},l_{\rho})\,u(\rho);\quad V(\rho)=\dfrac{\omega^{2}\rho^{2}}{4}+\dfrac{l_{\rho}^{2}-1/4}{\rho^{2}}-\dfrac{3}{\rho}-\dfrac{4}{\sqrt{\rho^{2}+4d^{2}}} \end{equation*}
In our numerical work, the trigonometric sweep method [8] is used to solve this equation.
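The original computation uses the trigonometric sweep method of Ref. [8]; as a rough stand-in, one can diagonalize Eq. (5) with a naive finite-difference scheme. The sketch below is not the authors' code: the potential follows the reconstruction of Eq. (5) above, and all parameter values are arbitrary illustrative assumptions (effective atomic units throughout).

```python
import numpy as np

def relative_motion_levels(omega, l_rho, d, n_levels=4,
                           rho_max=40.0, n_grid=1500):
    """Crude finite-difference diagonalization of Eq. (5):
    -u'' + V(rho) u = E u on a uniform grid, as a stand-in for the
    trigonometric sweep method of Ref. [8]."""
    rho = np.linspace(0.05, rho_max, n_grid)   # avoid the origin; crude near rho=0
    h = rho[1] - rho[0]
    V = (omega**2 * rho**2 / 4 + (l_rho**2 - 0.25) / rho**2
         - 3.0 / rho - 4.0 / np.sqrt(rho**2 + 4 * d**2))
    # Tridiagonal Hamiltonian: -u'' via the standard 3-point stencil
    H = (np.diag(2.0 / h**2 + V)
         - np.diag(np.ones(n_grid - 1) / h**2, 1)
         - np.diag(np.ones(n_grid - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)[:n_levels]

# Arbitrary illustrative parameters (omega in Ry*, d in effective Bohr radii)
print(relative_motion_levels(omega=0.5, l_rho=1, d=1.0))
```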
Results and discussion
Before the results are shown and discussed, it is useful to specify the labeling of the quantum levels of the two-electron molecular complex. According to Equation 4, the energy levels E(N_{R},L_{R};n_{\rho},l_{\rho}) can be labeled by the four symbols (N_{R},L_{R};n_{\rho},l_{\rho}). Even and odd l_{\rho} correspond to the spin singlet and triplet states, respectively, consistent with the Pauli exclusion principle.
We have performed numerical calculations of the energy levels of complexes with radii R between 20 and 100 nm for different separations between layers. In all presented results, the top thickness W is taken as 0.4 nm. In order to highlight the role of the interplay between the quantum-size and correlation effects in the formation of the energy spectrum of our artificial system, as distinct from the natural hydrogen molecular complex, we have plotted in Figure 2 the potential curves \tilde{E}(d)=E(N_{R},L_{R};n_{\rho},l_{\rho})+2/d, similar to those of the hydrogen molecule, in which the complex energies, with the electrostatic repulsion between donors included, are shown as functions of the separation d between QDs. Comparing them with the corresponding potential curves of the hydrogen molecule, one should take into account that in the structure analyzed here the electron motion, in contrast to the hydrogen molecule, is restricted to two separated thin layers. The energy dependencies of different levels (labeled by the four quantum numbers N_{R},L_{R};n_{\rho},l_{\rho}) are shown in Figure 2 for QDs with two different radii, R = 40 nm and R = 100 nm. A clear difference in the behavior of the potential curves is readily seen. While the curves are smooth, without any crossovers, for QDs of small radius, the corresponding potential curves suffer a drastic change as the QD radius becomes large. In the latter case, the energy levels become very sensitive to the variation of the separation between QDs, and the quantum-size effect becomes essential, providing alteration of the energy gaps, multiple crossovers of levels with the same or different spins, and level reordering as the distance between QDs increases from 5 to 20 nm.
Figure 2. Energies \tilde{E}(d) of the double-donor complex corresponding to some low-lying levels in vertically coupled QDs, as functions of the distance between them.
We ascribe the dramatic alteration of the potential curves with the increase of the separation between QDs from 5 to 20 nm observed in Figure 2 to the interplay between the structural confinement and the electron-electron repulsion. When the QDs' radii are small (R → 0), the confinement is strong and the kinetic energy (~1/R²) is larger than the electron-electron repulsion energy (~1/R); vice versa for QDs with large radii. Therefore, as the QDs' radii increase, the arrangement of the electronic structure for different energy levels changes from one typical of a gas-like system to a crystal-like one, accompanied by crossovers of the curves and reordering of the levels. As the two-electron structure arrangement for large separation between electrons becomes almost rigid, the relative motion of the electrons is frozen out, and the two-electron structure transforms into a rigid rotator with a practically fixed separation between electrons. The electrons' motion in this case becomes similar to that in a 1D ring, and therefore the energy dependencies on the external magnetic field applied along the symmetry axis should be similar to those which exhibit the Aharonov-Bohm effect.
In order to verify this hypothesis, we present in Figure 3 the calculated molecular complex energies E(N_{R},L_{R};n_{\rho},l_{\rho}) of some lower levels as functions of the magnetic field strength for QDs with small, R = 40 nm (upper curves), and large, R = 100 nm (lower curves), radii.
Figure 3. Energies E(N_{R},L_{R};n_{\rho},l_{\rho}) of some low-lying levels of the double-donor complex in vertically coupled QDs, as functions of the magnetic field.
It is seen that for the QD of small radius, the energies increase smoothly with very few intersections. Such dependence is typical for gas-like systems, where the paramagnetic term contribution is negligible in comparison with the diamagnetic one. On the contrary, the energy dependency curves for the QD of large radius present multiple crossovers and level-ordering inversion as the magnetic field strength increases from 0 to 1. This is due to a competition between the diamagnetic (positive) and paramagnetic (negative) terms of the Hamiltonian, whose contributions to the total two-electron energy in QDs of large radii are of the same order while the electron arrangement is similar to a rigid rotator. In other words, the correlation in this case becomes so strong that the electrons are mainly located on opposite sides within a narrow ring-like region.
Finally, in Figures 4 and 5, we present the results of the calculation of the density of electronic states for the double-donor molecular complex confined in vertically coupled QDs. It is clear from the discussion above that the presence of the magnetic field should provide a significant change of the density of the electronic states when the QDs' radii are sufficiently large. Indeed, it is seen from Figure 4 that under a relatively weak magnetic field (γ = 0.5), when the molecular complex is confined in QDs of 100-nm radius with 6-nm separation between them, the density of states becomes essentially more homogeneous, since the widths of individual lines are broadened and the gaps between them are reduced. Such a change of the density of states is observed due to a splitting and displacement of the individual lines accompanied by their crossovers and the reordering of the energy levels.
Figure 4. Density of states for two different values of the magnetic field, corresponding to low-lying levels of the double-donor complex in vertically coupled QDs.
Figure 5. Density of states for three different distances between layers, corresponding to low-lying levels of the double-donor complex in vertically coupled QDs.
In Figure 5, we present similar curves of the molecular complex density of states for three different separations between QDs. It is seen that the curves of the density of states are modified only slightly, essentially less than under variation of the magnetic field. In particular, the lower-energy peak positions are almost insensitive to any change of the distance between dots, while the upper-energy peaks are noticeably displaced toward higher-energy regions.
In short, we propose a simple numerical procedure for calculating the energies and wave functions of a molecular complex formed by two separated on-axis donors located in vertically coupled quantum dots with a particular lens-type morphology which produces in-plane parabolic confinement. We show that in the adiabatic approximation the Hamiltonian of this two-electron system, in the presence of the external magnetic field, is separable. The curves of the energy dependencies on the external magnetic field and the separation between quantum dots are presented. Analyzing the curves of the low-lying energies as functions of the magnetic field applied along the symmetry axis, we find that the two-electron configuration evolves from one similar to a rigid rotator to a gas-like one as the dot radii decrease. This quantum-size effect is accompanied by a significant modification of the density of the energy states and of the energy dependencies on the external magnetic field and the geometric parameters of the structure.
1. Kramer B (ed): Quantum Coherence in Mesoscopic Systems. Proceedings of a NATO Advanced Study Institute: April 2–13, 1990; Les Arcs, France. New York: Plenum; 1991.
2. Maksym PA, Chakraborty T: Quantum dots in a magnetic field: role of electron-electron interactions. Phys Rev Lett 1990, 65:108–111. doi:10.1103/PhysRevLett.65.108
3. Pfannkuche D, Gudmundsson V, Maksym P: Comparison of a Hartree, a Hartree-Fock, and an exact treatment of quantum-dot helium. Phys Rev B 1993, 47:2244–2250. doi:10.1103/PhysRevB.47.2244
4. Zhu JL, Yu JZ, Li ZQ, Kawazoe Y: Exact solutions of two electrons in a quantum dot. J Phys Condens Matter 1996, 8:7857. doi:10.1088/0953-8984/8/42/005
5. Mikhailov ID, Betancur FJ: Energy spectra of two particles in a parabolic quantum dot: numerical sweep method. Phys Stat Sol (b) 1999, 213:325–332. doi:10.1002/(SICI)1521-3951(199906)213:2<325::AID-PSSB325>3.0.CO;2-W
6. Peeters FM, Schweigert VA: Two-electron quantum disks. Phys Rev B 1996, 53:1468–1474. doi:10.1103/PhysRevB.53.1468
7. Mikhailov ID, Marín JH, García F: Off-axis donors in quasi-two-dimensional quantum dots with cylindrical symmetry. Phys Stat Sol (b) 2005, 242(8):1636–1649. doi:10.1002/pssb.200540053
8. Betancur FJ, Mikhailov ID, Oliveira LE: Shallow donor states in GaAs-(Ga,Al)As quantum dots with different potential shapes. J Phys D: Appl Phys 1998, 31:3391. doi:10.1088/0022-3727/31/23/013
Corresponding author
Correspondence to José Sierra-Ortega.
Additional information
Competing interests
The authors declare that they have no competing interests
Authors' contributions
All authors contributed equally to this work. JSO created the analytic model with contributions from IM, RMG, and JMT. GES performed the numerical calculations and wrote the manuscript. All authors discussed the results and implications, commented on the manuscript at all stages, and read and approved the final manuscript.
About this article
Cite this article
Manjarres-García, R., Escorcia-Salas, G.E., Manjarres-Torres, J. et al. Double-donor complex in vertically coupled quantum dots in a threading magnetic field. Nanoscale Res Lett 7, 531 (2012).
• Quantum dots
• Adiabatic approximation
• Artificial molecule
• PACS
• 78.67.-n
• 78.67.Hc
• 73.21.-b |
97c0c8f71a9766c4 | Molecular orbital theory explained
See also: Molecular orbital.
In chemistry, molecular orbital theory (MO theory or MOT) is a method for describing the electronic structure of molecules using quantum mechanics. It was proposed early in the 20th century.
In molecular orbital theory, electrons in a molecule are not assigned to individual chemical bonds between atoms, but are treated as moving under the influence of the atomic nuclei in the whole molecule.[1] Quantum mechanics describes the spatial and energetic properties of electrons as molecular orbitals that surround two or more atoms in a molecule and contain valence electrons between atoms.
Molecular orbital theory revolutionized the study of chemical bonding by approximating the states of bonded electrons—the molecular orbitals—as linear combinations of atomic orbitals (LCAO). These approximations are made by applying density functional theory (DFT) or Hartree–Fock (HF) models to the Schrödinger equation.
Molecular orbital theory and valence bond theory are the foundational theories of quantum chemistry.
Linear combination of atomic orbitals (LCAO) method
In the LCAO method, each molecule has a set of molecular orbitals. It is assumed that the molecular orbital wave function ψj can be written as a simple weighted sum of the n constituent atomic orbitals χi, according to the following equation:[2]
\psi_j = \sum_{i=1}^{n} c_{ij} \chi_i.
One may determine the cij coefficients numerically by substituting this equation into the Schrödinger equation and applying the variational principle. The variational principle is a mathematical technique used in quantum mechanics to build up the coefficients of each atomic orbital basis. A larger coefficient means that the orbital basis is composed more of that particular contributing atomic orbital—hence, the molecular orbital is best characterized by that type. This method of quantifying orbital contribution as a linear combination of atomic orbitals is used in computational chemistry. An additional unitary transformation can be applied on the system to accelerate the convergence in some computational schemes. Molecular orbital theory was seen as a competitor to valence bond theory in the 1930s, before it was realized that the two methods are closely related and that when extended they become equivalent.
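As an illustration of the variational LCAO procedure, the sketch below solves the two-orbital secular problem Hc = ESc with the generalized eigensolver from SciPy; the numerical values of the on-site energy, coupling, and overlap are made-up illustrative assumptions, not fitted to any real molecule:

```python
import numpy as np
from scipy.linalg import eigh

# Two identical atomic orbitals: on-site energy a, resonance integral b,
# overlap S. The numbers are illustrative assumptions, not fitted values.
a, b, S = -13.6, -5.0, 0.25
H = np.array([[a, b], [b, a]])
Smat = np.array([[1.0, S], [S, 1.0]])

E, C = eigh(H, Smat)            # generalized eigenproblem H c = E S c
print("MO energies:", E)        # bonding (a+b)/(1+S), antibonding (a-b)/(1-S)
print("MO coefficients:\n", C)  # columns are the c_ij of the equation above
```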
There are three main requirements for atomic orbital combinations to be suitable as approximate molecular orbitals.
1. The atomic orbital combination must have the correct symmetry, which means that it must belong to the correct irreducible representation of the molecular symmetry group. Using symmetry adapted linear combinations, or SALCs, molecular orbitals of the correct symmetry can be formed.
2. Atomic orbitals must also overlap within space. They cannot combine to form molecular orbitals if they are too far away from one another.
3. Atomic orbitals must be at similar energy levels to combine as molecular orbitals.
Molecular orbital theory was developed in the years after valence bond theory had been established (1927), primarily through the efforts of Friedrich Hund, Robert Mulliken, John C. Slater, and John Lennard-Jones.[3] MO theory was originally called the Hund-Mulliken theory.[4] According to physicist and physical chemist Erich Hückel, the first quantitative use of molecular orbital theory was the 1929 paper of Lennard-Jones.[5] [6] This paper predicted a triplet ground state for the dioxygen molecule, which explained its paramagnetism[7] before valence bond theory, which came up with its own explanation in 1931.[8] The word orbital was introduced by Mulliken in 1932.[4] By 1933, the molecular orbital theory had been accepted as a valid and useful theory.[9]
Erich Hückel applied molecular orbital theory to unsaturated hydrocarbon molecules starting in 1931 with his Hückel molecular orbital (HMO) method for the determination of MO energies for pi electrons, which he applied to conjugated and aromatic hydrocarbons.[10] [11] This method provided an explanation of the stability of molecules with six pi-electrons such as benzene.
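A minimal Hückel calculation for benzene's π system makes this concrete: in the customary reduced units (α = 0, β = −1), the six MO energies and the total π energy 6α + 8β drop out of a single diagonalization. This is an illustrative sketch, not code from the original article:

```python
import numpy as np

# Hueckel model of benzene's pi system in reduced units: alpha = 0, beta = -1,
# with H_ij = beta only for adjacent carbons around the six-membered ring.
n = 6
H = np.zeros((n, n))
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = -1.0

levels = np.linalg.eigvalsh(H)
print(levels)                        # [-2, -1, -1, 1, 1, 2] in units of |beta|
# Six pi electrons fill the three lowest MOs:
print("total pi energy:", 2 * levels[:3].sum())   # -8, i.e. 6*alpha + 8*beta
```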
The first accurate calculation of a molecular orbital wavefunction was that made by Charles Coulson in 1938 on the hydrogen molecule. By 1950, molecular orbitals were completely defined as eigenfunctions (wave functions) of the self-consistent field Hamiltonian and it was at this point that molecular orbital theory became fully rigorous and consistent.[12] This rigorous approach is known as the Hartree–Fock method for molecules although it had its origins in calculations on atoms. In calculations on molecules, the molecular orbitals are expanded in terms of an atomic orbital basis set, leading to the Roothaan equations.[13] This led to the development of many ab initio quantum chemistry methods. In parallel, molecular orbital theory was applied in a more approximate manner using some empirically derived parameters in methods now known as semi-empirical quantum chemistry methods.[13]
The success of Molecular Orbital Theory also spawned ligand field theory, which was developed during the 1930s and 1940s as an alternative to crystal field theory.
Types of orbitals
Molecular orbital (MO) theory uses a linear combination of atomic orbitals (LCAO) to represent molecular orbitals resulting from bonds between atoms. These are often divided into three types, bonding, antibonding, and non-bonding. A bonding orbital concentrates electron density in the region between a given pair of atoms, so that its electron density will tend to attract each of the two nuclei toward the other and hold the two atoms together.[14] An anti-bonding orbital concentrates electron density "behind" each nucleus (i.e. on the side of each atom which is farthest from the other atom), and so tends to pull each of the two nuclei away from the other and actually weaken the bond between the two nuclei. Electrons in non-bonding orbitals tend to be associated with atomic orbitals that do not interact positively or negatively with one another, and electrons in these orbitals neither contribute to nor detract from bond strength.[14]
Molecular orbitals are further divided according to the types of atomic orbitals they are formed from. Chemical substances will form bonding interactions if their orbitals become lower in energy when they interact with each other. Different bonding orbitals are distinguished that differ by electron configuration (electron cloud shape) and by energy levels.
The molecular orbitals of a molecule can be illustrated in molecular orbital diagrams.
Common bonding orbitals are sigma (σ) orbitals, which are symmetric about the bond axis, and pi (π) orbitals, with a nodal plane along the bond axis. Less common are delta (δ) orbitals and phi (φ) orbitals, with two and three nodal planes respectively along the bond axis. Antibonding orbitals are signified by the addition of an asterisk. For example, an antibonding pi orbital may be shown as π*.
MOT provides a global, delocalized perspective on chemical bonding. In MO theory, any electron in a molecule may be found anywhere in the molecule, since quantum conditions allow electrons to travel under the influence of an arbitrarily large number of nuclei, as long as they are in eigenstates permitted by certain quantum rules. Thus, when excited with the requisite amount of energy through high-frequency light or other means, electrons can transition to higher-energy molecular orbitals. For instance, in the simple case of a hydrogen diatomic molecule, promotion of a single electron from a bonding orbital to an antibonding orbital can occur under UV radiation. This promotion weakens the bond between the two hydrogen atoms and can lead to photodissociation—the breaking of a chemical bond due to the absorption of light.
Molecular orbital theory is used to interpret ultraviolet-visible spectroscopy (UV-VIS). Changes to the electronic structure of molecules can be seen by the absorbance of light at specific wavelengths. Assignments can be made to these signals indicated by the transition of electrons moving from one orbital at a lower energy to a higher energy orbital. The molecular orbital diagram for the final state describes the electronic nature of the molecule in an excited state.
Although in MO theory some molecular orbitals may hold electrons that are more localized between specific pairs of molecular atoms, other orbitals may hold electrons that are spread more uniformly over the molecule. Thus, overall, bonding is far more delocalized in MO theory, which makes it more applicable to resonant molecules that have equivalent non-integer bond orders than valence bond (VB) theory. This makes MO theory more useful for the description of extended systems.
Robert S. Mulliken, who actively participated in the advent of molecular orbital theory, considers each molecule to be a self-sufficient unit. He asserts in his article:
...Attempts to regard a molecule as consisting of specific atomic or ionic units held together by discrete numbers of bonding electrons or electron-pairs are considered as more or less meaningless, except as an approximation in special cases, or as a method of calculation […]. A molecule is here regarded as a set of nuclei, around each of which is grouped an electron configuration closely similar to that of a free atom in an external field, except that the outer parts of the electron configurations surrounding each nucleus usually belong, in part, jointly to two or more nuclei....[15]
An example is the MO description of benzene, C6H6, which is an aromatic hexagonal ring of six carbon atoms and three double bonds. In this molecule, 24 of the 30 total valence bonding electrons—24 coming from carbon atoms and 6 coming from hydrogen atoms—are located in 12 σ (sigma) bonding orbitals, which are located mostly between pairs of atoms (C-C or C-H), similarly to the electrons in the valence bond description. However, in benzene the remaining six bonding electrons are located in three π (pi) molecular bonding orbitals that are delocalized around the ring. Two of these electrons are in an MO that has equal orbital contributions from all six atoms. The other four electrons are in orbitals with vertical nodes at right angles to each other. As in the VB theory, all of these six delocalized π electrons reside in a larger space that exists above and below the ring plane. All carbon-carbon bonds in benzene are chemically equivalent. In MO theory this is a direct consequence of the fact that the three molecular π orbitals combine and evenly spread the extra six electrons over six carbon atoms.
In molecules such as methane, CH4, the eight valence electrons are found in four MOs that are spread out over all five atoms. It is possible to transform the MOs into four localized sp3 orbitals. Linus Pauling, in 1931, hybridized the carbon 2s and 2p orbitals so that they pointed directly at the hydrogen 1s basis functions and featured maximal overlap. However, the delocalized MO description is more appropriate for predicting ionization energies and the positions of spectral absorption bands. When methane is ionized, a single electron is taken from the valence MOs, which can come from the s bonding or the triply degenerate p bonding levels, yielding two ionization energies. In comparison, the explanation in VB theory is more complicated. When one electron is removed from an sp3 orbital, resonance is invoked between four valence bond structures, each of which has a single one-electron bond and three two-electron bonds. Triply degenerate T2 and A1 ionized states (CH4+) are produced from different linear combinations of these four structures. The difference in energy between the ionized and ground state gives the two ionization energies.
As in benzene, in substances such as beta carotene, chlorophyll, or heme, some electrons in the π orbitals are spread out in molecular orbitals over long distances in a molecule, resulting in light absorption in lower energies (the visible spectrum), which accounts for the characteristic colours of these substances.[16] This and other spectroscopic data for molecules are well explained in MO theory, with an emphasis on electronic states associated with multicenter orbitals, including mixing of orbitals premised on principles of orbital symmetry matching.[14] The same MO principles also naturally explain some electrical phenomena, such as high electrical conductivity in the planar direction of the hexagonal atomic sheets that exist in graphite. This results from continuous band overlap of half-filled p orbitals and explains electrical conduction. MO theory recognizes that some electrons in the graphite atomic sheets are completely delocalized over arbitrary distances, and reside in very large molecular orbitals that cover an entire graphite sheet, and some electrons are thus as free to move and therefore conduct electricity in the sheet plane, as if they resided in a metal.
See also
External links
Notes and References
1. Daintith J: Oxford Dictionary of Chemistry. New York: Oxford University Press; 2004. ISBN 978-0-19-860918-6.
2. Licker MJ: McGraw-Hill Concise Encyclopedia of Chemistry. New York: McGraw-Hill; 2004. ISBN 978-0-07-143953-4.
3. Coulson CA: Valence. Oxford: Clarendon Press; 1952.
4. Mulliken RS: Spectroscopy, molecular orbitals, and chemical bonding. In: Nobel Lectures, Chemistry 1963–1970. Amsterdam: Elsevier Publishing Company; 1972.
5. Hückel E: Theory of free radicals of organic chemistry. Trans Faraday Soc 1934, 30:40–52. doi:10.1039/TF9343000040
6. Lennard-Jones JE: The electronic structure of some diatomic molecules. Trans Faraday Soc 1929, 25:668–686. doi:10.1039/TF9292500668
7. Coulson CA: Valence. 2nd ed. Oxford University Press; 1961. p. 103.
8. Pauling L: The nature of the chemical bond. II. The one-electron bond and the three-electron bond. J Am Chem Soc 1931, 53(9):3225–3237. doi:10.1021/ja01360a004
9. Hall GG: The Lennard-Jones paper of 1929 and the foundations of molecular orbital theory. Adv Quantum Chem 1991, 22:1–6. doi:10.1016/S0065-3276(08)60361-5
10. Hückel E: Zeitschrift für Physik 1931, 70:204; 1931, 72:310; 1932, 76:628; 1933, 83:632.
11. Coulson CA, O'Leary B, Mallion RB: Hückel Theory for Organic Chemists. Academic Press; 1978.
12. Hall GG: The molecular orbital theory of chemical valency. VI. Properties of equivalent orbitals. Proc Roy Soc A 1950, 202(1070):336–344. doi:10.1098/rspa.1950.0104
13. Jensen F: Introduction to Computational Chemistry. John Wiley and Sons; 1999. ISBN 978-0-471-98425-2.
14. Miessler GL, Tarr DA: Inorganic Chemistry. 5th ed. 2013. pp. 117–165, 475–534.
15. Mulliken RS: Electronic population analysis on LCAO–MO molecular wave functions. I. J Chem Phys 1955, 23(10):1833–1840. doi:10.1063/1.1740588 |
9973cc53aaa12090 | How the Higgs gives Mass to the Universe
"This is evidently a discovery of a new particle. If anybody claims otherwise you can tell them they have lost connection with reality." -Tommaso Dorigo
You've probably heard the news by now: the Higgs boson -- the last undiscovered fundamental particle of nature -- has been found.
Higgs Standard Model
The fundamental types of particles in the Universe, now complete.
Indeed the news reports just keep rolling in; this is easily the discovery of the century for physics, so far. I'm not here to recap the scientific discovery itself; I wrote what to expect yesterday, and that prediction was pretty much exactly what happened, with CMS announcing a 4.9-σ discovery and ATLAS announcing a 5.0-σ discovery of a Higgs boson at 125-126 GeV. You can watch a recording of the press conference announcing the official discovery here, and all observing scientists were thoroughly convinced of both the quality and veracity of the work.
5 sigma announcement
Screenshot from the original, live webcast of the seminars leading up to the presentation. Taken at the moment the CMS team first said the words "5-sigma," long known as the gold standard for discovery in the field.
So, the Higgs boson has been discovered! That's good news. You may have also heard that the Higgs gives mass to everything in the Universe, and that it's a field.
The odd thing is that all of these things are true, if not intuitive. There are some attempts to explain it simply, but as you can see, even the top ones are not very clear. So let's give you something to sink your teeth into: How do fundamental particles, including the Higgs boson, get their mass?
Cow Moose in a Rain Storm
Image credit: Highway Man of
The Higgs field is like rain, and there is no place you can go to keep dry. Just like there's no way to shield yourself from gravitation, there's no way to hide from the rain that is the Higgs field.
If there were no Higgs field, all the fundamental particles would be like dried-out sponges. Massless, dried-out sponges.
Dried-out sponges
You have to use your imagination, if only slightly, for the massless part.
But you can't keep these sponges out of the rain, and when you can't stop them from getting wet, they carry that water with them. Some sponges can only carry a little bit of water, while others can expand to many times their original size, carrying very large amounts of water with them once they're fully expanded.
Compressed Sponge
Image credit: GNI Phoenix International, via
The most massive fundamental particles are the ones that couple most strongly to the Higgs field, and are like the sponges that expand the most and hold the most water in the rain. Of all the particles I've shown you above, there are just two that are truly massless, and hence don't couple to the Higgs at all: the photon and the gluon.
They can be represented by massless sponges, too, except they are water repellent.
Water Repellent
Image credit: CETEX Water Repellent, from Waltar Enterprises; photo by © Gregory Alan Dunbar.
So, the Higgs field is rain, all the particles are like various types of sponges (with various absorbancies), and then... then there's the Higgs Boson. How can the field -- the rain -- be a particle, too?
deflated balloons
If it weren't raining -- if there were no source of water -- your intended water balloon would be a sad failure. If there were no Higgs field, there wouldn't be a Higgs boson; at least, not one of any interest, and not one with any mass.
But the water comes from the Higgs field, and it also fills the balloon that is the Higgs boson: the Higgs field gives mass to all the particles that couple to the Higgs field, including the Higgs boson itself!
Image credit: Laura Williams from
Without the water, the balloons and the sponges would be far less interesting, and without the Higgs field, the Higgs boson and all the other fundamental particles would have no intrinsic mass to them.
It's only kind of like the Higgs boson
"I've found the Higgs boson! And I'm very, very wet!"
So now you not only know that we've found the Higgs Boson, but how the Higgs field gives mass to all the particles in the Universe, including the newly-discovered boson itself. Just like water can seep its way into almost anything, making it heavier, the Higgs field couples to almost all types of fundamental particles -- some more than others -- giving them mass.
And the great new find? We've been able to create and detect enough Higgs Bosons at the Large Hadron Collider to confidently announce -- for the first time -- that we've discovered it, that we've determined its mass (around 133 times the mass of a proton), and that it agrees perfectly with what our understanding of the Universe currently is.
Higgs Event
Image credit: A Higgs creation, decay and detection event, courtesy of CERN.
Like I told you yesterday, keep up with the latest Particle Physics news here, and if you want to see/hear me on TV talking about the discovery of the Higgs in all its glory, you get to, tonight!
I'll be talking about the discovery of the Higgs Boson at CERN later today, July 4th, at 7PM (Pacific Time) live on Portland, OR's own KGW NewsChannel 8 on The Square: Live @ 7! If you missed my last appearance on the show, talking about the Higgs, you can watch it anytime.
But if you want to catch tonight's show? Tune in to channel 8 if you're in Portland, otherwise you can watch the live stream from anywhere in the world at 7PM Pacific at this link. See you then, and enjoy your Higgs-Discovery/Independence Day!
More like this
I love the rain analogy! Would it make sense to think of the Higgs boson as the raindrop and the Higgs field as the rain?
By Gethyn Jones (not verified) on 04 Jul 2012 #permalink
I would say, rather than "perfectly agrees", that it supports the validity. After all, the median expected value for the mass of the Higgs particle was less than the figure we have, but within the range of what concords with the rest of the standard model outcomes.
How this value sets other values that are to some extent free variables in the standard model will be interesting to me (and comprehensible to me too).
Interesting times.
If we build a pizza collider, each resulting fragment will contain less pasta, right? So, why does a proton smasher unveil particles whose mass is 125-126 times that of the whole enchilada? This reminds me of the miracle of multiplication of loaves and fishes, but don't let's change the subject.
By Peido Velho (not verified) on 04 Jul 2012 #permalink
@Peido Velho.
I'm not a physicist so I might be wrong here but;
- In physics, mass and matter are different. In your analogy you are talking about the total matter of the pizza; each fragment, added back together, must contain the same amount of matter as the two pizzas.
- Since mass is equivalent to energy (the famous E=mc2), I would imagine that the mass of the created particles goes up significantly because the acceleration goes waaaay down when they smack into each other.
You have two particles, moving at somewhere around 99% of the speed of light, going at nearly equal speeds, suddenly slamming into each other. Since they're going in opposite directions they cancel each other out, but all that energy has to go SOMEWHERE, so it gets converted to mass.
I think...
@Peido Velho: The energy of the protons involved in the collision also contributes to producing the outgoing particles. With proton-proton collisions, only a fraction of the total energy actually goes toward new particle masses; the rest goes into their kinetic energy.
The CERN accelerator is currently running with 4 TeV proton beams, so there is a total of 8 TeV potentially available to create new particles.
By Michael Kelsey (not verified) on 04 Jul 2012 #permalink
Also not a physicist, so would appreciate setting me straight if I get this wrong: I thought that the Higgs mechanism for mass involved a 4-component spinor field. 3 of the components couple to other particles, producing mass, and the 4th component is free to do whatever it wants. The 4th component is the "scalar" Higgs boson. But that means that the Higgs boson itself isn't what produces mass. The mass of other particles comes from interaction with the other 3 parts of the spinor.
If I understand correctly, what they have managed to do is create extremely high energies which cause a disturbance in the Higgs field. This disturbance is manifested as the Higgs boson, but since it is unstable it decays. Now what would happen if, during the very limited existence of the Higgs boson, it is exposed to another field, like an electron field? Is there any possibility of interaction between the Higgs boson and the field it is exposed to?
Considering the early universe had high enough energy to create Higgs bosons, and perhaps other fields for them to interact with, could an interaction like this be the basis for dark matter and dark energy?
If photons do not couple with the higgs field, why is their path affected by gravity?
By PhysicsDummy (not verified) on 04 Jul 2012 #permalink
i think this analogy has a couple of major flaws:
1) It doesn't seem to have anything to do with symmetry breaking
2) Your Higgs-as-water just gets absorbed into the sponges, so the sponges are then "made out of" higgses. But really, the SM particles are not made out of higgses; they just "bump into it", which is a different idea.
I assume people mean inertial mass when they say it 'gives mass', as gravity is not and has never been part of the standard model.
It is my understanding that the non-zero masses of the neutrino, muon neutrino and tau neutrino do not involve the Higgs mechanism.
Generally, the neutrino particle type is quite a source of surprises. For years physicists were convinced neutrinos had to be massless. As we know, today a non-zero mass is attributed to neutrinos. As I recall, the known case of parity violation involves the neutrino.
By Cleon Teunissen (not verified) on 04 Jul 2012 #permalink
The best analogy I've read Ethan, thank you.
I have a question - if the Higgs particle is nothing without the Higgs field, what part does the particle play, i.e. in what way are the particle and field connected? Or was the particle just useful in confirming the existence of the field?
Maybe there's a way to shoehorn it into your analogy...
The discussion of the Higgs field giving mass to itself is not right. If the Higgs field were zero, i.e., if the vacuum expectation value were zero, then the Higgs would still have a mass. In fact, the Higgs is the only field in the SM with a fundamental mass.
Hippity Higgs, Hurrah!
Some early reflections:
- They did really well as mentioned here, better than expected.
- What they didn't handle well was the press release. Apparently they put up press videos leaking the result yesterday and press releases before the talks were finished, as well as collaboration members leaking.
- The production rates and the different combinations of observed particles produced by the Higgs, the "channels", are still somewhat rickety statistics. But they are all consistent with a standard Higgs.
What is interesting is that a standard 125-126 GeV Higgs, if that is what it is, immediately points to new physics.
For example, as I understand it, several analyses including this update find that there should be supersymmetry at the weak scale, which is where the LHC works. And the vacuum should be quasistable, with a lot of indication of an underlying dynamical process (multiverses).
@ david:
4 components, yes; that is what particle physicist Matt Strassler notes on his blog Of Particular Significance. They are all from the Higgs field, they are all "higgs", including the Higgs. 3 of them go into the Z and Ws, which have mass; one is massless. _The field_ is the mechanism giving mass. (By virtual particles, same as how EM fields give potentials with virtual photons.)
Oh, and while the Higgs field gives fundamental particles' mass proportionally to energy, it doesn't do proportionality for its own particles (so it ain't gravity). Something else is required, precisely as neutrinos are SM particles (I think, sort of, it's a kludge) but they get mass elsewhere.
By Torbjörn Larsson, OM (not verified) on 04 Jul 2012 #permalink
Oh, I see bob was already there regarding the point that the Higgs's masses are different. And I fumbled the "massless"; it's the massive Higgs, natch. Here is a description.
Given that physical reality is awfully non-intuitive, people complicate matters even further by mishandling the instrument of language. 'God particle' is obviously just a bad slogan. But 'hadron collider', 'atom smasher', and the like, when used to refer to the discovery of 'elementary' particles, are expressions that induce innocents like me to believe we're talking about proton debris. The same language problem arises when we say that not even light escapes from a black hole, as if massless photons were newtonian apples. The effect of gravity on space-time is often illustrated by some sort of bowl into which things 'fall'. And so on. Wittgenstein, we have a problem.
Depending on exactly what you mean by "mass", most of the mass of the universe is either Dark Energy or Dark Matter. The former with near certainty does not get its mass from the Higgs, and the latter may or may not, depending on what it is.
As for "everyday" baryonic matter in the universe, the Higgs contribution to baryonic mass is very small, on the order of a percent or less. Most of the universe's bayronic mass is from the confinement energy of the gluon fields inside the nucleon.
Frank Wilczek wrote a nice article on all this recently.
By Andrew Foland (not verified) on 04 Jul 2012 #permalink
Good summary, Foland. You are right, there is a lot of misinformation presented here in Ethan's blogs.
So, if some particles acquire their mass through interaction with the Higgs field, then where does the Higgs boson fit in?
Why is the Higgs boson needed?
Andrew until we know what dark matter and dark energy actually are, your statement is unsupported. Wrong even.
It's like saying invisible pink unicorns are not affected by electric fields (which is why you can't touch them either).
Bob, the Higgs field particles are virtual particles. This means they have no mass (to within the limits of the uncertainty principle, which gives at least one limit to the mass of a free Higgs).
Physicsdummy, that is one of the ways we know we don't yet have all the answers.
Higgs gives everything inertial mass. But it doesn't give gravitational mass. And one of the huge questions is "why are they the same value?"
Now I wait for homeopathic light nanowater that sucks up, by means of quantum mechanics, those bosons that make you heavy
But I still don't understand a few things, about the density of the Higgs field. Is it constant? I mean, if some bosons wet the sponge, they will be absent in places without a sponge. How about the space between bosons? Do bosons multiply to fill the emptiness? Is the field thinner or fatter? Does an ideal vacuum exist or not?
You just rule-34'd homeopathy, Michael.
Have to say that I'm deeply disappointed by this latest post Ethan :( Was expecting some real explanations of the Higgs field and how it interacts with particles. Yet you have said nothing on the subject. Water and sponges.. come on. While basically wrong, as someone pointed out here (it gives the impression that particles somehow "absorb" the field), I am really sad that you made no attempt to even try to explain it in physics terms. How does it work? How do particles interact with it? What's the difference between a proton's interaction with it and a photon's... etc. etc.? Not some kindergarten-grade explanation (which makes it even more confusing) about water balloons and whatnot, but a genuine explanation as we have them today. If we as a society still don't know something, OK, then say: we don't know how this or that works. But at least try to explain. Your post is "how the higgs gives mass to the universe", and yet there is nothing physics about it :(( Would you please try another go at it, for all of us curious to know more without high-end mathematics? I loved your post about quarks and chromodynamics; I was really hoping that this post about the Higgs would be along those lines, but it's not :(
By Sinisa Lazarek (not verified) on 05 Jul 2012 #permalink
Since we can't see them directly and our monkey-brain doesn't do thinking on the subatomic scale too easily, we have to use analogies.
You're merely whining that you don't like the analogy.
Tough noogies.
I'm whining because there is nothing scientific or physics-related in the post. How does the Higgs field interact with other particles in real terms? Is it through the strong or weak interaction, or some other "new" force? I.e., we have a proton moving through the Higgs field... how does it interact with it? What is that "drag" (not rain) that happens to it? What are the forces in play? Are there any emissions, absorptions, etc.? If yes, what are they? Then, in contrast, what happens to the photon, i.e. some "less" massive particle? There is no explanation here about the mechanism, nor even a hint at it. That's what I commented about. I don't care if we use fish in the sea or ping-pong balls sitting on a bed of sugar, or whatever other way the popular press is trying to describe it. From Ethan I came to expect real physics and science in his posts. This particular one fell really short for me, and I just commented on that. If someone now has a better understanding of the Higgs mechanism by reading about different sponges absorbing different amounts of water, great for you. It brings me no closer to understanding what the Higgs boson is really about and how the Higgs field really works.
I agree here with 'Sinisa Lazarek' although I do not want to point the finger at Ethan specifically.
We have amazing Visual FX technology that can create imagery of just anything imaginable, and what we get here is a sponge and some balloons to explain what's going on. It can't get any more amateurish for "the discovery of the century for physics", knowing that the Standard Model and the Higgs mechanism is nothing new, it is already more than 30 years old. Why can't CERN and all those genius physicists take a more serious approach at educating the general public to explain how this all works. This is some very poor communication.
I have a far better analogy: the Higgs boson is like The Girl from Ipanema permeating all the elevators of the world, so that neither elevators nor the world would fall apart. You just can't get rid of them, I mean, that godforsaken boson and the unstoppable song. The end of the Universe shall consist of a lukewarm soup of Higgs bosons with The Girl from Ipanema as background radiation. I hope I have clarified the matter once and for all.
And BTW Ethan, now that I have watched you on TV, I would love to see you put little speeches on all kinds of things here too.
Would be great to have a little seminar once a month or so.
Not that I'm a lazy reader (far from it) but to see and hear adds so much.
Thanx for the video link to New Scientist. It's ok, but nowhere near informative enough. I mean, not to my appetite :) I really want to know what happens to particles in the Higgs field and how it "gives" mass to particles.
Guess I'll dig deep into wiki and other resources to find out what really happens and how.
Higgs boson as water and everything else as sponges rather happily explains why some things are heavy and other things light, SL. The sponge doesn't get bigger, it gets more filled, meaning heavier. And a sponge that is water repellent will not contain water and remain "sponge only" and light.
This explains how the higgs field can make things heavier or lighter by binding to the material that we see as "massive particle".
This neatly explains this aspect of the Higgs field.
There are other aspects that are not covered by this analogy and therefore this analogy for those aspects is invalid.
HOWEVER, this isn't trying to explain those features.
If you want to explain those features, you do it. But don't complain that an analogy meant to explain one feature doesn't explain another, because it was never meant to.
Make your own analogy. With hookers and blackjack if you want, but you do the damn work if you're so damn cheesed off.
I bet that if Ethan was a gorgeous girl who wrote about sponges and balloons they would be more than happy.
@ Wow
"Higgs boson as water and everything else as sponges rather happily explains why some things are heavy and other things light," - my issue was with this in the first place. Why use water and sponges or big fish and small fish etc.. in the first place. Why not talk about the higgs field and particles in the first place?? Why the unnecesary metaphore?
"The sponge doesn’t get bigger, it gets more filled, meaning heavier." - ok.. now let's get back to particles please. What happens to the particles in the higgs field? Do they absorb the field somehow? If so, how, by what process? Does it "suck" the energy from the higgs field and therefore increases it's own energy? Do higgs bossons get somehow coupled to particles? By what process, what energy? What is a carrier of that coupling? Those are my questions, among others.
"And a sponge that is water repellent will not contain water and remain “sponge only” and light." - so this is in reference to photons (or EM fields) not interacting with Higgs field, while other quanta do. Again, how? "How" was never touched in real physical sense and yet it's the first word of the title. How does that interaction take place, not as a metaphore but as a physical process?
"This explains how the higgs field can make things heavier or lighter by binding to the material that we see as “massive particle”. - well, no it doesn't. It explains in a metaphore WHAT happens, but doesn't explain HOW it happens.
"If you want to explain those features, you do it." - I don't want to explain anything, I want to know first.
"Make your own analogy." - one first needs to know what happens in order to make analogies.
If you know what happens, I'm glad for you. If you know how it happens, even better. But we who are not physicists don't know. And some of us would like to know. I just don't understand why it can't be written as it is and needs balls, and guests and fish and whatnot. Why not use words like field, potential, charge, vector, scalar, tensor, operator, particle, quanta, etc. etc.? Why can't it be explained in plain physics language... why these analogies that confuse?
So you just found Ethan´s explanation too simple and wanted more.
There are more sources than Ethan alone.
Go search and expand your mind. But now you just sound ungratefull towards someone who does his best to explain something hard to a wider public.
Or maybe you just get angry quickly. FYI they are working right now, as we speak, to make the aforementioned homeopathic light nanowater that sucks up, by means of quantum mechanics, those bosons that make you have heavy thoughts.
Your Particle Physics TrapIt page is great. I loved the headline on one of the news articles you're collecting there: "God Discovers the Elusive 'Physicist Particle.'" LOL!
Ok, I think I understand now. Did some wiki digging and reading, and I think I have the essence of it. And without any math :)) yeeey. Please correct me if I'm wrong.
This is from wiki, and I think it gives the best summary possible:
So it's basically an interaction of one type of field with the other at a fundamental-interaction level (W and Z bosons being the carriers of the weak interaction, i.e. the interactions between quarks): those fundamental force carriers interact with the Higgs field, which then breaks the symmetry and gives mass/energy to those very bosons, while others remain intact.
So no mysterious fishes and ping pong balls in sugar :)
p.s. another interesting thing that I didn't know before is that symmetry breaking occurred right after the big bang (the energies involved allowed the EM and weak fields to unify). So W and Z bosons "got their mass" at that instant. Everything from then on, as far as mass goes, is just an effect. It's not like we are "swimming" now in a "Higgs field" sort of fluid that resists our movement. It's the mass of the W and Z bosons that gives mass to everything else. Is this correct?
Sinisa, no, the Higgs gives mass to the other particles, not the W and Z bosons. Did you really think that every single person in the world was saying it wrong?
I don't want to argue, since it's not my field, but from everything I read, it's the W and Z bosons that are first to get directly "modified" by the interaction with the Higgs field. Quarks and leptons are thought to interact via the Yukawa mechanism with the Higgs, but the whole point of the field being non-zero is the initial interaction with the unified field, which caused its symmetry to be broken.
I do not think that every single person is wrong, nor did I say that. But I would like it if you could explain how the Higgs gives mass, since you say it's not the W and Z bosons.
"I do not think that every single person is wrong, not did I say that."
Then why are you continually complaining about everyone else?
"why are you continually complaining about everyone...?"
What? Everyone who? Don't put words in my mouth which I never said or meant.
By Sinisa Lazarek (not verified) on 06 Jul 2012 #permalink
Because you're weaselling out of your comments against everyone by using the pedantic "absolutely everyone" meaning rather than the colloquial "everyone".
And you're whinging about everyone else, SL.
I really like Ethan's rain analogy, but I am confused about the relationship between the Higgs field ("rain") and Higgs boson.
I am going to try and torture the analogy a little further using the idea that a gauge boson is the minimum-sized "ripple" in a quantum field, e.g. a single photon is the smallest energy "ripple" in an electromagnetic field.
The "sponges" (particles with non-zero rest mass) absorb the "rain" which gives them mass... what happens when you bang two sponges together? Nothing - these are incredibly absorbent sponges we have here. In fact, some Sponge-physicists suggested that it wasn't actually raining at all!
Nevertheless, physicists in Sponge-world went on to build a Large Sponge Collider in order to bang them together really, really hard to see if they were really absorbing water.
And when they did so, the minimum mass of the water droplet released was about 126 GeV. Sponge-physicists now triumphantly concluded that it really is raining....
(Apologies - I know an analogy is only an analogy but just trying to get my non-expert head around the ideas....)
By Gethyn Jones (not verified) on 06 Jul 2012 #permalink
"but I am confused about the relationship between the Higgs field (“rain”) and Higgs boson."
Well, it's not a good bit of the analogy. But mostly because we don't have 100% rain all the time everywhere, even indoors. Since the higgs field is everywhere (even indoors), for the rain to be like it, it has to be everywhere.
Ethan does try to get this across, but if you're spending too much time trying to find the faults, you can easily miss it:
Ethan: "The Higgs field is like rain, and there is no place you can go to keep dry."
But here is another attempt to find fault rather than look for enlightenment from you:
Gethan: "so what happens when you bang two sponges together? Nothing – these are incredibly absorbent sponges we have here."
And when you bang two items that can touch each other (bind together), you lose mass.
Why is the Higgs boson needed?
Actually, far from finding fault with Ethan's analogy, I was attempting to extend it. I was puzzled about why high energies are required to detect the HB, and I was playing with the analogy to see if it could help me picture the relationship between field and boson.
Well, one way to look at it is the de Broglie wavelength. Higher energies mean you see smaller structures, i.e. dimensions that are wrapped up smaller. Dimensions that the Higgs field sits in.
Another way is harmonics on a very short, tight string. To excite that string you need a certain energy before you get a standing wave that will last. The string theory view.
You can look at it like pair production: you need at least enough energy to create the mass of the particle, and the higher the energy, the more you'll make and the likelier you are to see one.
None of these views work as water drops because the analogy isn't explaining that bit and attempting to stretch it that far tears it.
Like I said earlier, an analogy is not the thing it analogises, therefore you'll always find a way it doesn't work. Picky pedants who like to pick holes in things for pleasure love analogies from other people for this reason.
@ Sinisa, what are you talking about? It's not the W and Z bosons that give mass. It's the Higgs. As you said, the Higgs has Yukawa couplings to the fermions, and it's this interaction that endows the fermions with a mass. What is it you don't get?
From the research I did in the past few days, this is what I have in summary. And it seems that we are diverging on something, and I would like to understand what it is.
So here it is:
"Actually, there's a significant caveat to "the Higgs field gives all particles mass." Many strongly interacting particles, such as the proton and neutron, would still be massive even if all quarks had zero mass. In fact most of the mass of the proton and neutron comes from strong interaction effects and not the Higgs-produced quark masses. For instance the proton weighs almost 1 GeV, and only a small fraction of this comes from the three up and down quarks that compose it, which weigh only around 5 MeV each. If that 5 MeV was reduced to 0 the proton mass wouldn't change very much."
and this...
"An example of energy contributing to mass occurs in the most familiar kind of matter in the universe--the protons and neutrons that make up atomic nuclei in stars, planets, people and all that we see. These particles amount to 4 to 5 percent of the mass-energy of the universe. The Standard Model tells us that protons and neutrons are composed of elementary particles called quarks that are bound together by massless particles called gluons. Although the constituents are whirling around inside each proton, from outside we see a proton as a coherent object with an intrinsic mass, which is given by adding up the masses and energies of its constituents.
The Standard Model lets us calculate that nearly all the mass of protons and neutrons is from the kinetic energy of their constituent quarks and gluons (the remainder is from the quarks' rest mass). Thus, about 4 to 5 percent of the entire universe--almost all the familiar matter around us--comes from the energy of motion of quarks and gluons in protons and neutrons. "
So yes, compound particles also get a small portion of their mass from the Higgs field, but only a small part. The main mass is already there by the processes we already know and understand. What we didn't understand is why some bosons have mass (W and Z) while others (photon and gluon) are massless. And this is where the Higgs mechanism really shows itself. It gives all the mass to those bosons. Or in other words, the interaction of the Higgs field and the bosons gives the terms in the Lagrangian that correspond to the mass values of those bosons.
If I'm mistaken, please correct me. But please do give some examples and explanations instead of just saying yes or no. I want to learn more, and just saying "this isn't so" without a follow-up isn't helping :)
By Sinisa Lazarek (not verified) on 07 Jul 2012 #permalink
Way out of my depth here, but in case it helps you Sinisa; IIRC kinetic energy is dependent on mass, so if the quarks had no rest mass I assume they would also have no kinetic energy. Of course it could be my school-level physics is not relevant at this scale, not sure.. ;)
Helpful comments - thank you. I agree the analogy as originally presented by Ethan isn't intended to illustrate the relationship between boson and field, and that I'm probably overextending it...but what the heck so here goes nothing
Ethan's rain analogy cleverly explains why hadrons and leptons and some bosons have mass: they are "spongy" and absorb "water".
OK but the "rain" is the Higgs field, not the Higgs boson. So can the HB be represented?
One possible way would be to picture a boson as the minimum-energy wave in its associated field. I guess for an e-m field this would be a low-energy photon, perhaps in the radio frequency region. For a Higgs field, this is a high-energy Higgs boson.
Using the analogy, the Higgs field would be a fine mist of rain droplets (what some people call mizzle) while the HB would be a more substantial drop.
If the "sponges" were very, very absorbent then you'd have to squeeze them pretty damn hard to get even the tiniest drop of water...which is one way of picturing why the HB can only be detected at high energies...
However, an analogy is only an analogy as you rightfully point out - but they can be a lot of fun too.
By Gethyn Jones (not verified) on 07 Jul 2012 #permalink
I would have put it that the Higgs field is the fact that it's raining, and a raindrop is the particle.
It's not about being low energy, it's about being a virtual photon or higgs boson.
Electric fields have the forces transferred by whatever energy photon it needs to do the job, but they're virtual photons, not real ones.
Photons have no rest mass yet they have kinetic energy; actually all of their energy is kinetic.
Actually we don't know that.
Kinetic energy = mass times velocity squared divided by two.
Mass zero, kinetic energy zero.
Photons do have momentum, though. Or at least can impart momentum or soak it up. Whether that's momentum as you get in matter is a little unclear.
But photons could have no kinetic energy, but only energy from existing (at the speed of light), as the equivalent of things at rest having mass (= energy).
Those infinities are hard to deal with in a language developed to tell other apes where the bananas were.
Classical mechanics terms don't really do much better.
Sinisa, it is true that the strong interaction provides most of the mass of the proton and neutron. However, the point of the Higgs is to give mass to the "elementary particles". The proton and neutron are NOT elementary; they are made out of the elementary quarks. The quarks' masses and the leptons' masses (including the electron's), as well as those of the W and Z bosons, are acquired from interaction with the Higgs field.
Dai and wow, you are both wrong. Even if a particle's mass is zero, the particle can carry kinetic energy. This is the difference between Einsteinian theory and Newtonian theory.
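The relativistic energy-momentum relation makes this precise:

$$
E^2 = (pc)^2 + (mc^2)^2 ,
$$

so a massless particle such as the photon still carries energy $E = pc$ and momentum; the Newtonian formula $E_k = \tfrac{1}{2}mv^2$ is only the low-speed limit for massive particles.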
Bob, thanx for the reply.
"The quark’s masses and the lepton’s masses (including the electron), as well as the W and Z bosons, are acquired from interaction with the Higgs field."
with this, we are in total agreement.
Sinisa, the reason for your confusion is that Ethan claimed that the Higgs gives mass to everything in the universe, when in fact this is completely wrong. Almost all the mass in the universe comes from the dark sector and nuclei, whose mass does not come from the Higgs. Instead only a very tiny percentage, less than 0.002%, such as electrons, comes from the Higgs.
Bob, kinetic energy is, for a photon, its energy in and of itself. Try to remove some and the photon is reduced in itself. Red shifted.
Something different is going on here.
And note I merely maintained "we don't know that for sure".
If you're going to say "wrong", you're saying we ARE sure.
Using a tablet sucks.
Theoretical physicists get the best dope. That's crazy, man.
Go Sinisa Lazarek! I'm with you. Though there is a place for providing 'real world' analogies to roughly explain a phenomenon, indulging in the analogy does more harm than good, especially where it gives the impression that it has explained anything.
Funny how the posts of those who accuse Sinisa of being 'cheesed off' (Wow) and 'ungrateful' (not ungratefull btw) (Michel) are the ones that sound most aggressive - Sinisa is just stating his thoughts in a decent and polite way.
Why do you say that this analogy has explained nothing, dink?
Making it up, yes?
Jeeze. This is like the time someone complained about an analogy to red and black marbles in closed bags to explain why quantum entanglement cannot be used to send information, because the marbles represent hidden variables and the outcome of the experiment would not match the statistical distribution of actual quantum entanglement. Even though neither of those things are relevant to explaining why you can't send information with entanglement.
The only analogy that correctly explains all aspects of a phenomenon is not an analogy, it's the actual phenomenon in question. That doesn't make analogies useless.
If you understood that summary on Wikipedia, then congratulations you're more informed on the subject than the vast majority of people with science degrees. You don't need an analogy. Most do, and this is a good one for explaining what it does.
On a non-analogy note, does this really create a problem with inertial vs gravitational mass? The intrinsic (and hypothetically inertial-only) mass granted by the Higgs is the result of a particle's potential with respect to the Higgs Field. That potential is a form of energy. Energy creates gravity. So is it really any more surprising that the gravity exactly matches the potential energy of the Higgs than it is that it also exactly matches the binding energy of a proton, or water molecule?
"If you understood that summary on Wikipedia, then congratulations you’re more informed on the subject than the vast majority of people with science degrees."
I guess I should say "thank you". But I think you went a bit too far with the "science degrees". If in biology, then ok. But as far as physics goes, there isn't much not to understand. All the terminology is from high-school grade physics (relativity, qm and some math terms). I learned in high-school what leptons and quarks are, what the fundamental forces are, how mass equals energy, what symmetry and symmetry breaking is in math and physics. So it's all there. Just needs some "dot connecting" and perhaps some cross referencing, nothing more. My strong belief is that anyone with a general notion of relativity and qm can understand that quote I took from wiki. If in fact it's not so, especially for science majors, then something is terribly wrong with the educational system. :)
By Sinisa Lazarek (not verified) on 10 Jul 2012 #permalink
Sinisa, most of the particles of the Standard Model have an interaction with the Higgs field - it is a new kind of force, a "higgs force" if you like. (Technically, for the fermions it is a type of Yukawa interaction, and for the W bosons it is a gauge interaction). The Higgs field takes on a non-zero value, even in the vacuum. So the interaction is always present. It leads to an effective mass for those particles. What more do you want to know? Did you try opening a book and finding out for yourself?
I don't know what this last post of yours to me was about. A couple of days back I posted to you that I agree completely with what you posted then, and that the statement that the Higgs gives mass to everything and anything is not correct. After that I haven't posted any questions about the Higgs.
My post to which you now comment was to CB, who said that the paragraph from wiki which I quoted is above the understanding of most science majors, which I find hard to believe. It wasn't in any way connected to anything dealing with the Higgs directly.
Am sorry if I am hard to understand sometimes. English is not my native language, so something might get lost on the way.
"Did you try opening a book and finding out for yourself?"
... of course.. that's how you and I started discussing the Higgs.
But again, I don't know why this last post from you? And in such a way? It wasn't about the Higgs or questions about it. It was about understanding the wiki quote.
By Sinisa Lazarek (not verified) on 11 Jul 2012 #permalink
SL, who said that the Higgs gives mass to everything? Strawman.
@ Wow
what's the title of this post?
By Sinisa Lazarek (not verified) on 12 Jul 2012 #permalink
And you only read that???
You did notice there were more words below that, right?
Mass is an inherent property of elementary particles.
The mass of the proton has been calculated from spin, charge and particle radius on pages 3-4 of Belgian patent BE1002781; see the e-Cat Site article "Belgian LANR Patents". For the electron mass a similar formula has been used.
By Van den Bogaer… (not verified) on 16 Jul 2012 #permalink
Science discoveries are not patentable.
"Science discoveries are not patentable."
Depends on what you call a discovery. And I suppose that in theory one could see the production/creation of a Higgs-boson by the LHC as a patentable thing, no?
Nope, it depends on the definition of discovery used by patent offices.
And these discoveries are not patentable by ANY patent office.
You can patent the design of the machine that makes the measurement.
But maths and the discoveries of science in nature are not.
In answer to your question: no.
I was looking here at the broad sense of science and the controversial gray zone of gene patenting.
But with "these discoveries" you surely mean in the field of physics, here I'm not going to argue with you.
Regarding the Higgs boson, there are two parts: the collider making them, and the detectors measuring them. I think that you could patent almost everything that CERN makes, and perhaps lots of the parts being used are already patented? So you can either scoop up a Higgs for free coming out of a cosmic-ray collision, or probably have to pay for an artificially created one.
There's no grey area here, chelle, thankfully enough.
Discovering the electron charge value is not patentable. Inventing a machine to measure the electron charge is, but I can't think of any scientist who does that because there's no market for the singular purpose machine, and they'd rather get on with research.
They'll use patented tools. Like hammers. But they don't purchase a licence to the patent on them any more than you do.
Bogart there was claiming a patent on the theory of how to calculate masses. As maths, this is not patentable.
"There’s no grey area here, chelle, thankfully enough."
You might want to read 'The Immortal Life of Henrietta Lacks' by Rebecca Skloot, or follow up on some other patent cases.
"Each nation has its own patent law, and what is patentable in some countries is not allowed to be patented in others."
It's all about politics and companies' lobbying. Anyway, the way you keep on ignoring facts just amazes me.
Nope, I won't. Guess why? Because discoveries in science and maths are not patentable.
It's not about politics, by the way, it's about money and the capitalist system that equates power with money and allows it to accumulate freely.
Maybe you want to read up on an Aus patent on swinging on a swing.
PS Irony: Chelle saying "the way you keep on ignoring facts just amazes me."
ROFL indeed...
To Mr. Wow,
The patent BE1002781 does not relate to a method of calculating the rest mass of the proton; it relates to a kind of "cold fusion" based on Coulomb explosion of charged deuterated electroconductive particles. Read the patent text in English published on the e-Cat Site under the title "Belgian LANR Patents", and have a look at BE1003296, published on the same site under the title "LANR by Coulomb explosion", retarded from publication for 2 years by the Belgian Government of Defense. The calculation of said rest mass is dimensionally correct and does not infringe quantum physics or mathematics. The rest mass of the proton is intrinsically linked to spin angular momentum, electric charge and particle radius. The product of mass and spin radius is constant, and charge is an inverse function of the root value of mass and said radius. The proton is composed of "spinning quarks": two of them spinning in the same sense, one having opposite charge spinning in the opposite sense, being quenched between the two others, which attract each other by the current law of Ampère. Electric charge comes out of the formula as dualistic, positive and negative; that is correct!
By Van den Bogaer… (not verified) on 26 Jul 2012 #permalink
Then your post earlier was lying.
To Wow,
Have you already read BE1002781 through the e-Cat Site, and what do you think of the equation for the rest mass of the proton on pages 3-4 of that BE patent relating to lattice assisted nuclear fusion (LANR) by Coulomb explosion?
By Van den Bogaer… (not verified) on 02 Aug 2012 #permalink
I can't even work out what that patent is trying to patent.
Patents are pretty pointless now. They're nothing but lawsuit fodder.
However, in this case it looks more likely that the patent is patenting rubbish, hiding the result in obtuse verbiage and using the PTO as a proxy for publishing in a journal to lend unearned authority to the idea.
That, however, is a conclusion based on likely utility. This patent may be genuinely intended as a patent, in which case, you wasted your money, but hey, who cares?
To Wow,
I still have not had any comments on the equation on pages 3-4 of BE1002781. I do not like your vocabulary, "rubbish". Blogs are developed to have worthwhile discussions, certainly when it concerns science. Cheers!
I don't really see why your dislike is my problem.
Does an understanding of the Higgs field provide any hints (perhaps vague hints) about why General Relativity's equivalence of inertial and gravitation mass should be expected?
this will give mass its matter.
E=mc2 gives an explosion
E/m=c2 gives you fusion
A.E.I.O.U (Absolute Energy equals Input, Output Utilization)
I don't think so, Bernard.
It could do if, for example, Higgs tied to Higgs in short range interactions.
Then again, we don't know WHY vacuum has a permittivity or permeability either. Well, not since I last looked. Nor why an electron has one electron's charge (though it may have more: the excess hidden by charged virtual particles hiding some of the electron's "true" charge).
It may be that these figures are self-correcting to some "most stable local value" and gravitational mass does the same thing.
All this, however, is well beyond my pay grade...
Earth science discovery is exciting work, but if your new data goes against the accepted models it can take time for the community to incorporate it.
Carl Sagan wrote,
A sad truth indeed from someone who carried the burdens of innovation.
This brings us to the big bang dogma of today, that all elements were created in a singularity event out of nothing, when new data reveals that cosmological processes are creating new elements continuously and on a massive scale; i.e., Navy drill cores from ocean rifting, covering massive planet surface areas, range from only a few years old to 180 million years old.
Those still wearing the big bang blinders cannot appreciate that we indeed have a growing earth with a changing radius (continental mass is growing and ocean bottom surface is growing) from new elements being created at the core and not from space dust accretions.
Maverick scientists at Blue Eagle have now confirmed this using LENR Interferometry Microsmelt Technology Processes (basically mimicking nature's elemental bloom conditions) and are now making new precious elements. Not the wispy Hadron atomic-scale elements but visible gold beads measured on a gram scale.
Our team of credentialed scientists and entrepreneurial engineers has accomplished more science in the last 18 months than the legions of those labouring over bosons with billion-dollar budgets. For their efforts they are labelled as Crackpots when they should be recipients of the highest awards for progressing science.
To see a video of a modern-day alchemist making real gold in an LENR Interferometry Microsmelt low-budget lab go to:
By Martin Burger (not verified) on 15 Oct 2013 #permalink
Well, of course.
For a start you'd need to find a new model if the current one is going to be refuted by the new data, and that takes time UNLESS you've gone looking for data to fit a preconceived model. Which may be correct (e.g. looking for how the photoelectric effect disproves the wave theory of light and proves the quantisation of same). Or completely anti-science (e.g. looking at the time of diagnosis of autism and the similar time you can first be immunised against deadly childhood diseases so you can push your own "miracle cure" and rubbish the vaccine).
Either case DOES demand you state a priori what model you did your measurements to fit, so that others can check for confirmation bias. Much as Carl Sagan constantly exhorted guarding against, but this is almost never quoted by cranks and quacks.
Martin Burger,
Please educate yourself about what the big bang theory actually says before you try to criticize it. The big bang theory does NOT say that all the elements were created "in a singularity event". In fact, in the earliest moments (ie fractions of a second) after the BB, there was nothing that could conceivably be called an element. The universe was too energetic for atomic nuclei to remain intact. Nuclei only formed later as the universe cooled. Furthermore, not all elements formed at this time. The BB theory does quite well at predicting the abundance of elements formed at this time, and it consisted mainly of hydrogen, with a smaller amount of helium formed and trace amounts of other light nuclei such as lithium. Heavier elements (up to iron) formed via nuclear fusion in stars. Elements heavier than iron formed in supernovae.
Short story: formation of new elements today in no way invalidates the big bang theory.
I love when people call the Big Bang "dogma", ignorant of the fact that the Big Bang suffered all the resistance one could imagine but eventually won everyone over by its overwhelming predictive success.
In the same way, even if scientists are obstinately opposed, if you can do as you say and produce gold from silicon dioxide, then they will be forced to accept the evidence.
It's the E-Cat all over again:
- If the goal is to convince science of the new theory behind this invention, it would be easy to produce the necessary evidence. But how much do you want to bet that a proper test that controls for any possible source of fraud will NEVER be done? Maybe sham "demonstrations", but never the kind of test you would design if you really wanted to prove the device worked. Just the kind you would design if you wanted to sucker in gullible investors.
- Screw those dogmatic scientists! You have a device that makes GOLD from SAND. Much like a simple cold fusion reactor, this is a project that, if real, would have zero problem funding itself. Once deployed at industrial scale it would drop the bottom out of the gold market, but in the meantime you'd be raking in the cash. In fact to prevent speculation, you'd probably keep really quiet, just slowly selling enough gold on the market to keep going (and getting rich) until one day you open your factory and reveal you're now the world's gold supplier.
Instead, you have a kickstarter page.
there is probably a good reason for your "scientists" to be called crackpots. Your website indicates that.
CB, it isn't quite a kickstarter page, given this disclaimer at the top:
This is not a live project. This is a draft shared by martin burger for feedback.
@ Martin Burger
hahha... OMG... hahaha...
By Sinisa Lazarek (not verified) on 16 Oct 2013 #permalink
Dean: Wouldn't the purpose of the draft be to eventually set up a proper kickstarter based on the feedback? Seems like otherwise there's no reason for it to be on kickstarter (with sponsorship prizes and all); it could just be a facebook post. Or am I missing something?
Sinisa: Once again you find a way to summarize my own thoughts in far fewer words.
"Wouldn’t the purpose of the draft be to eventually set up a proper kickstarter based on the feedback? "
It seems that this was a feeler to get a sense of interest - my takeaway is that whoever put it there hasn't done the hoop-jumping to get it okayed to the point of taking money. It seems that there has not been any interest in it at all. That could be because
* there is little interest for certain science items, or
* people skim over it because of what this particular item is
Or, I could be missing a bigger piece of the puzzle. My wife tells me that happens quite often.
I'm just trying to infer the intention to eventually, should interest be sufficient etc. etc., fund the miraculous alchemical gold-making machine (which supposedly already works and can make significant amounts of gold!) using kickstarter.
Because that's hilarious to me.
Gravity waves are the result of the product of the mass of an elementary particle (fermion) and its spin radius.
Said product is constant but results in zitterbewegung.
Longitudinal waves are produced by the trembling motion of the particles. The spin radius fluctuates inversely proportional to the value of the mass. Mass fluctuations are gravity fluctuations. Interference of gravity waves in between massive objects is at the origin of attraction (pushing from the other side).
Photons have no rest mass but are composed of matter and antimatter particles in equal strength with infinite curvature. Their trajectory curvature (bending) is influenced by the permittivity and permeability of the vacuum in the neighbourhood of massive objects such as the sun.
By Joannes Van de… (not verified) on 05 Mar 2014 #permalink
@Joannes #112: [citation needed]
By Michael Kelsey (not verified) on 05 Mar 2014 #permalink
In connection with my preceding comment, have a look at the equation for the mass of the proton in the Belgian patent BE1002781, available in English on the blog site "e-Cat Site" in the article "Belgian LANR Patents". Have a look also at the article of Rockenbauer concerning the cause of mass formation through spin of the elementary particle.
By Van den Bogaer… (not verified) on 13 Mar 2014 #permalink
@Joannes #114: Thanks. So no published journal papers, then. Just blog posts, patents (which are neither reviewed for, nor required to meet, conditions of reality), and vanity-press papers.
By Michael Kelsey (not verified) on 13 Mar 2014 #permalink
To Mr. Michael Kelsey
Dear Sir,
You are right about the non-existence of publications of mine in journal papers. Being a self-taught person in quantum physics I read some books about it, e.g. "101 Quantum Questions" by Kenneth Ford, and was impressed by the statement in the book that no one knows the real nature of "electric charge" (je ne sais quoi). I would like to draw your attention to the Bohr-atom formulae of the electron (Essentials of Physics by Borowitz-Beiser), wherein you will find how to calculate electric charge as a function of the product of mass and (spin) radius of a fermi particle such as an electron. See also BE1002781, pages 3 and 4, for the proton rest mass and its connection with electric charge.
Further, I would like to draw your attention to my Belgian patent BE904719 (in Dutch) for calculating the spin radius of the electron using a time-independent Schrödinger equation for a "standing wave", and have a look at the BE patent referenced therein (Fig. 2 and 3).
It has been a pleasure to hear from you. Have a look at my article "Cold Fusion Catalyst" on the e-Cat Site and comment on it if possible. Thanks!
By Van den Bogaer… (not verified) on 17 Mar 2014 #permalink
The Figures 2 and 3 are in the Belgian patent BE895572 (abstract in English available through ESPACENET).
My e-mail address is:
Do not hesitate to ask questions about my patents (12), no longer in force.
By Van den Bogaer… (not verified) on 19 Mar 2014 #permalink
The frequency of the gravity waves emitted by the trembling in the ground state of the electron, "f", is 0.000008717 cycles/sec. This value has been obtained starting with the pendulum equation of Huygens (Dutch scientist). In that equation L has been put equal to the spin radius calculated according to my Belgian patent BE904719, viz. 2.64x10^-11 m. The period (T) is consequently 114715.2798 seconds and the energy E, being h.f, is 5.7758842x10^39 joule.
For calculating the acceleration factor (a) in the Newton formula of gravity force (F) I had to divide by the rest mass of the electron, being 9.108x10^31 kg.
For calculating the electromagnetic trembling, being the origin of electromagnetic attraction or repulsion, I started with the Coulomb formula for electrostatic force, giving (a) by dividing by the rest mass of the electron. The force relationship of 10^42 of electromagnetic force to gravity force comes out. Comments are welcome.
The rest mass of the electron is 9.108x10^-31 kg and the outcome of h.f = 5.7758842x10^-39 joule. Sorry for the typing error.
By Van den Bogaer… (not verified) on 22 Mar 2014 #permalink
When an electron in an atom goes from a higher state to a lower state, the mass of the atom decreases. This is explained by electromagnetism and quantum mechanics.
The "Higgs field" is not needed. The Higgs theory is incomplete, and the predicted mass of the "Higgs particle" kept changing to higher and higher values, used to justify to the European politicians the funding of the Hadron Collider.
People's jobs depended on the Hadron Collider finding the "Higgs boson". I tried to ask Steven Weinberg about this and he wouldn't look me in the eyes; I suspect there is something very incomplete about this even to the people who created it.
By S Kennnedy (not verified) on 08 Jul 2014 #permalink
Different levels of sponginess, indeed a good analogy. But the concept can be deemed to be fully explained only after establishing why there are different levels of sponginess. If this picture too is clear to the dedicated scientists, it may merit a similar clarification and explanation.
By lakshminarayan… (not verified) on 09 Aug 2014 #permalink
Similar to the Higgs particle and its field, can there be, say, a "Kiggs" particle and field for force fields?
Further, I would like to be enlightened on:
1 - is the Higgs field an energy-into-mass converting factory?
2 - any conceivable relationship between the Vedic and Ervin László Akashic field?
By lakshminarayan… (not verified) on 06 Jun 2015 #permalink
2 - any conceivable relationship between the Vedic and Ervin László Akashic field? -- I meant between the Higgs and the Akashic field.
Psi
Classical objects push and pull in tangible and deterministic gestures. A Newton's cradle collides on one side, energy courses through the system, and it erupts on the other side. Quantum objects mystify the imagination with erratic and unpredictable behavior. Psi guides the listener from a classical mechanical sound world into a quantum soundscape populated by quantum harmonic oscillators. For these quantum sounds, I created a software (freely available from my website) that sonifies evolving wave functions using the time-dependent Schrödinger equation. Psi is the culmination of years of compositional work and research into the sonification of classical and quantum systems.
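The sonification software itself is not reproduced here, but a minimal sketch of the general approach might look as follows (illustrative Python only, not the composer's actual program; the grid size, the harmonic-series mapping, and all parameter values are assumptions). A wave packet is evolved with the split-step Fourier method, and its probability density modulates the amplitudes of a bank of sine partials:

```python
import numpy as np
from scipy.io import wavfile

# Grid and initial Gaussian wave packet in a harmonic trap (hbar = m = 1).
N = 256
x = np.linspace(-10, 10, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi = np.exp(-(x - 2.0) ** 2).astype(complex)      # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

V = 0.5 * x ** 2                                    # harmonic potential
dt = 0.01
expV = np.exp(-0.5j * V * dt)                       # half-step potential factor
expK = np.exp(-0.5j * k ** 2 * dt)                  # full-step kinetic factor

frames = []
for _ in range(2000):
    # One split-step Fourier step of the time-dependent Schroedinger equation.
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))
    # Sonification data: low spatial Fourier magnitudes of the density.
    spec = np.abs(np.fft.fft(np.abs(psi) ** 2))[1:9]
    frames.append(spec / spec.max())

# Additive synthesis: eight partials whose amplitudes track the wave function.
sr = 44100
hop = sr // 100                                     # 10 ms of audio per frame
amps = np.repeat(np.array(frames), hop, axis=0)
t = np.arange(len(amps)) / sr
partials = 220.0 * np.arange(1, 9)                  # harmonic series on A3
audio = sum(amps[:, i] * np.sin(2 * np.pi * f * t) for i, f in enumerate(partials))
wavfile.write("psi_sketch.wav", sr, (0.2 * audio / np.abs(audio).max()).astype(np.float32))
```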
The narrative of Psi unfolds from the contrast between classical and quantum systems. The piece begins with the sound of a Newton's Cradle; the quintessential classical system. Some synthetic sounds based on the same physics are added gradually while leaving the focus on the sound of the Newton's cradle. This material is explored extensively from various perspectives, diving deeper into the sounds before zooming in to a level where quantum effects begin to emerge. In this quantum sound world, particles become smeared out into droning waves and jump around in the stereo field unpredictably. Eventually, echoes of the classical return as processed versions of the cradle sounds from the beginning, now unstable and flitting about the sound field haphazardly. Quantum particles dance like tiny corpuscular creatures popping in and out of our perception, spinning, collapsing, dispersing, and entangling with one another. The activity grows as we begin to zoom out and widen our perspective once again. We finally emerge from the quantum sound world to re-examine the Newton's Cradle with the remnants of the quantum effects now coloring our aural perception.
Rodney DuPlessis
NYCEMF 2022 (Sheen Center for Thought and Culture)
in New York, NY, USA
Rodney DuPlessis
SMC 2022 (Bourse du Travail)
in Saint-Étienne, France
Rodney DuPlessis
Earth Day Art Model (Online)
in Indianapolis, IN, USA

Exciton dynamics
Excitons are electron-hole pairs bound by Coulomb interaction which can be generated in semiconductors or insulators by interaction with light. In the simplest case, they involve excitations of electrons from the (highest) occupied to the (lowest) unoccupied molecular orbital, or from valence to conduction band. Regarded as quasi-particles in solid-state materials, excitons can transport energy without transporting net electric charge. Eventually they release their energy by recombination, coupling to lattice vibrations, or by dissociation into separate charges. Efficient excitonic energy transport is of paramount importance in a variety of opto-electronic applications. For example, in photovoltaic solar cells, excitons have to migrate from "antenna" sites of efficient light absorption to active interfaces such as electrodes or embedded catalytic sites in order for charge separation to occur.
Exciton dynamics in organic semiconductors
Rupert Klein with Burkhard Schmidt
Cooperations with Patrick Gelss, Felix Henneke, Sebastian Matera
Support by ECMath (Einstein Center for Mathematics Berlin) through project SE 20 (2017/18)
Support by MATH+ (Berlin Mathematics Research Center) through projects AA2-2 (2019/20) and AA2-11 (2021/22)
In organic semiconductors such as molecular crystals or conjugated polymer chains, excitons are typically localized (Frenkel excitons), and their transport is normally modeled in terms of excitons diffusively hopping between sites. An improved understanding of excitonic energy transport has to account for the role of electron-phonon coupling (EPC). We limit ourselves to the use of rather simple models of quantum dynamics of excitons, i.e., only two electronic states with nearest-neighbor interactions, only harmonic lattice vibrations, and only linear EPC (known as Frenkel, Holstein, Fröhlich, Davydov, and/or Peierls Hamiltonians).
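For orientation, one common form of such a Holstein-type Hamiltonian, restricted to the one-exciton sector of a chain (a sketch; sign and coupling conventions vary across the literature cited here), is

$$
H = -t \sum_{n} \left( a_n^\dagger a_{n+1} + a_{n+1}^\dagger a_n \right)
  + \omega \sum_{n} b_n^\dagger b_n
  + g\,\omega \sum_{n} a_n^\dagger a_n \left( b_n^\dagger + b_n \right),
$$

where $a_n^\dagger$ creates an exciton and $b_n^\dagger$ a phonon at site $n$, $t$ is the nearest-neighbor transfer integral, $\omega$ the harmonic phonon frequency, and $g$ the dimensionless EPC constant.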
Despite these models having been under investigation for several decades already, and despite their apparent simplicity, solving the corresponding quantum-mechanical Schrödinger equation still represents a major challenge. Analytic solutions are elusive, and numeric approaches suffer from the curse of dimensionality, i.e. the exponential growth of computational effort with the number of sites involved. To cope with that problem, we employ a hierarchy of different approaches detailed in the following.
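To make the curse of dimensionality concrete, the following toy script (an illustrative sketch, not the project code; all parameter values are arbitrary) assembles the one-exciton Holstein Hamiltonian above from Kronecker products and computes its ground state. Even for four sites with only four phonon levels each, the matrix dimension is already N * d^N = 1024, and it grows exponentially with the chain length:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

N, d = 4, 4                      # chain sites, phonon Fock-space cutoff per site
t_hop, omega, g = 1.0, 1.0, 0.5  # toy parameter values

b = diags(np.sqrt(np.arange(1, d)), 1)   # phonon annihilation operator (d x d)
nb = b.T @ b                             # phonon number operator

def phonon_op(op, n):
    """Embed a single-mode operator at mode n of the N-mode phonon space."""
    ops = [identity(d, format="csr")] * N
    ops[n] = op
    out = ops[0]
    for o in ops[1:]:
        out = kron(out, o, format="csr")
    return out

# Exciton hopping on an open chain (the one-exciton sector is N-dimensional).
hop = diags([-t_hop] * (N - 1), 1)
hop = hop + hop.T
Iph = identity(d ** N, format="csr")

# Harmonic phonons, identical on every site.
ph = phonon_op(nb, 0)
for n in range(1, N):
    ph = ph + phonon_op(nb, n)

H = kron(hop, Iph) + omega * kron(identity(N, format="csr"), ph)

# Linear EPC: the local displacement couples to the exciton occupation.
for n in range(N):
    Pn = diags(np.eye(N)[n])             # |n><n| exciton projector
    H = H + g * omega * kron(Pn, phonon_op(b + b.T, n))

print("Hamiltonian dimension:", H.shape[0])   # N * d**N = 1024
E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
print("Ground-state energy:", E0)
```

Tensor-train representations avoid building this full matrix in the first place; the script only illustrates why that is necessary.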
Figure: Self-localization of electrons and phonons for a chain length of N = 40. Top: quantum numbers of excitons, together with a sech^2 fit function. Bottom: local distortions of the lattice sites (blue bars), together with a 1 - tanh^2 fit function.
Fully quantum-mechanical approaches
Our work on a fully quantum-mechanical approach to coupled excitons and phonons focuses on the use of efficient low-rank tensor decomposition techniques to beat the curse of dimensionality. The limitation to chain structures with nearest neighbor interactions in the electron-phonon Hamiltonians mentioned above suggests the use of tensor train formats, also known as matrix product states, representing a good compromise between storage consumption and computational robustness. The time-independent Schrödinger equation is solved using an alternating linear scheme (ALS), and higher quantum states are obtained by an approach that directly incorporates the Wielandt deflation technique into the ALS for the solution of eigenproblems. In test calculations for homogeneous systems, we find that the tensor-train ranks of the state vectors only marginally depend on the chain length, which results in a linear growth of the storage consumption. However, the CPU time increases slightly faster with the chain length because the ALS requires more iterations to achieve convergence for longer chains and a given rank [86].
As a first test, tensor train approaches based on a SLIM decomposition of the Hamiltonian have been used to investigate the phenomenon of self-trapping, i.e., the formation of localized excitons "dressed" with deformations of the ionic scaffold. Within a certain range of the parameters involved, our calculations exactly reproduce the predictions by Davydov's soliton theory of excitonic energy transport, but we are also able to explore cases where the rigorous assumptions of that approximate analytic theory do not apply [86].
Mixed quantum-classical approaches
In cases where the space and time scales governing the dynamics of excitons and phonons are well separated, mixed quantum-classical molecular dynamics (QCMD) provides a suitable approximation for exciton-phonon coupling. There, the electronic degrees of freedom (excitons) are treated quantum-mechanically while the ionic motions (molecular vibrations and/or lattice vibrations, aka phonons) are treated classically. In Ehrenfest (mean field) approaches, the latter are subject to forces averaged over the quantum states of the former. An alternative is the widely used concept of surface hopping trajectories (SHT) algorithms, where the ionic positions are modeled by classical trajectories that may stochastically switch between electronic states, thus resembling non-adiabatic transitions. In cooperation with L. Cancissu Araujo and C. Lasser at TU Munich, we implemented and evaluated various non-standard SHT variants for the case of Holstein-type Hamiltonians typically used to describe the dynamics of excitons coupled to phonon modes [82]; see also our page on quantum-classical dynamics.
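As a point of contrast with surface hopping, a minimal Ehrenfest (mean-field) propagation for a two-site Holstein-type model is sketched below (illustrative code with arbitrary parameters; it is not the SHT implementation of [82]). The exciton amplitudes are propagated exactly through the instantaneous 2x2 Hamiltonian, while the classical oscillators feel the population-averaged force:

```python
import numpy as np

# Two-site Holstein-type toy model: quantum exciton, classical oscillators.
# Units with hbar = 1 and oscillator mass = 1; all values are arbitrary.
t_hop, omega, g, dt, steps = 1.0, 1.0, 0.5, 0.01, 5000

c = np.array([1.0 + 0j, 0.0 + 0j])   # exciton amplitudes in the site basis
q = np.array([0.1, -0.1])            # classical displacements
p = np.zeros(2)                      # classical momenta

def H_el(q):
    # Site energies shifted linearly by the local displacements (linear EPC).
    return np.array([[g * q[0], -t_hop],
                     [-t_hop, g * q[1]]])

def force(q, c):
    # Ehrenfest mean-field force: harmonic restoring force plus the
    # population-weighted electron-phonon coupling force.
    return -omega**2 * q - g * np.abs(c) ** 2

for _ in range(steps):
    # Quantum step: exact exponential of the instantaneous 2x2 Hamiltonian.
    E, U = np.linalg.eigh(H_el(q))
    c = U @ (np.exp(-1j * E * dt) * (U.conj().T @ c))
    # Classical step: velocity Verlet (kick-drift-kick) with mean-field force.
    p = p + 0.5 * dt * force(q, c)
    q = q + dt * p
    p = p + 0.5 * dt * force(q, c)

print("final exciton populations:", np.abs(c) ** 2)
```

In an SHT variant, the mean-field force would be replaced by the force of a single electronic state, together with stochastic hops between states.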
Semi-classical approaches
In another line of activities, we combine advanced semi-classical simulation techniques for the ionic degrees of freedom with multiscale asymptotics to take advantage of a systematic scaling behavior of exciton-nuclear couplings in models of conjugated polymer chains. Our work rests on that of Hagedorn, who extended the well-established theory of approximate Gaussian wave packet solutions to the time-dependent Schrödinger equation toward moving and deforming complex Gaussian packets multiplied by Hermite polynomials, yielding semi-classical approximations which are valid on (at least) the Ehrenfest time scale, i.e., the characteristic time scale of the motion of the ions. Lubich and Lasser (see their 2020 review article) developed numerical approximations based on those ideas. Their variational approaches rely on approximations to wave functions by linear combinations of (frozen or thawed) Gauss or Hagedorn functions. In principle, error bounds of any prescribed order in the semi-classical smallness parameter can be obtained, and estimators for both the temporal and spatial discretization can also be obtained efficiently, thus paving the way for fully adaptive propagation.
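In one spatial dimension and in one common convention, the simplest member of this family is Heller's thawed Gaussian ansatz,

$$
\psi(x,t) = \exp\!\left[\frac{i}{\hbar}\Big(\alpha(t)\,\big(x-q(t)\big)^2 + p(t)\,\big(x-q(t)\big) + \gamma(t)\Big)\right],
$$

whose parameters obey $\dot q = p/m$, $\dot p = -V'(q)$, $\dot\alpha = -2\alpha^2/m - \tfrac{1}{2}V''(q)$, and $\dot\gamma = i\hbar\alpha/m + p^2/(2m) - V(q)$: the center $(q,p)$ follows a classical trajectory while the complex width parameter $\alpha$ and phase $\gamma$ evolve along it. Hagedorn functions generalize such packets by multiplying them with suitably adapted polynomials.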
Chemistry
I. Introduction. - II. A Brief History of Chemistry. 1. Roots in the Ancient World. 2. Alchemy. 3. The Development of Chemistry in the Islamic World. 4. The Chemical Revolution. 5. The Development of Organic Chemistry. 6. The Molecules of Life. 7. The Future of Chemistry. - III. Chemistry and Society. 1. Chemistry and War. 2. A World Made of Plastic: The Environmental Problem. 3. Towards a Green Chemistry. 4. Women in Chemistry. - IV. Philosophical questions emerging from the chemical sciences. 1. Chemistry and the Origin of Life. 2. A Philosophy of Chemistry. 3. The Chemical Structure as an Irreducible Structure. 4. The Periodic Table and the Intelligibility of Nature.
I. Introduction
Chemistry is the science of molecules, or more precisely the science of the chemical structure of molecules in relation to their properties. At the same time, chemistry is concerned with the study of transformation, of the metamorphosis of materials. Notably, when Christian missionaries first began to translate western textbooks into Chinese in the 1870s and needed a term to stand for chemistry, they coined the phrase "hua-hsüeh", literally meaning "the study of change." The chemical approach to the interpretation of the material world has the unique and fascinating characteristic of linking the macroscopic world, the properties of all that surrounds us together with the phenomena that happen inside and outside us, to the microscopic world of molecules and atoms. Chemistry is fascinating for its infinite possibilities and the vast horizons of creativity that it opens up. It is unique for its investigative capacity, and its knowledge is indispensable for the pursuit of any of the other sciences. Today chemistry has become an incredibly rich and powerful science, and its areas of investigation often overlap with those of physics and biology. The study of modern chemistry has branched out into several sub-disciplines such as physical chemistry, organic chemistry, inorganic chemistry, analytical chemistry and biochemistry.
II. A Brief History of Chemistry
1. Roots in the Ancient World. Ancient civilizations had knowledge of seven metals (gold, silver, copper, lead, tin, iron and mercury) and a wide variety of chemicals that they exploited in their pottery, jewellery, cosmetics, cooking and weaponry, or as drugs. The very first appearance of chemistry can be traced as far back as the Stone Age. The Palaeolithic (Old Stone Age) paintings at Lascaux and elsewhere show that stone-age people were able not merely to alter stones to fashion tools and buildings but also to prepare pigments to colour their representations of animals (12,000-8,000 B.C.). By the time the bronze or copper age was reached, the working of minerals to produce copper and tin had been developed. These metallurgical practices continued with the discovery of iron ore, which enabled iron-age man to produce weapons that made hunting and butchering animals easier. Around 300,000 years ago humans had learned how to create, control and propagate fire, providing warmth, expanding daylight hours and improving diet, health and longevity through the processing of meat and vegetables. There is abundant archaeological evidence that by 8,000 B.C. humankind was using biochemical processes (fermentation) to exploit grains of various kinds to bake bread and create beer and wine. The ability to control fire and temperature led to the first chemical technologies such as the production of pottery, metals, glass and bitumen products.
The earliest critical thinking on the nature of substances was by Greek philosophers beginning about 600 B.C. Thales of Miletus, Anaximander, Empedocles, and others propounded theories that the world consisted of varieties of earth, water, air, fire, or indeterminate “seeds” and “unbounded” matter. Leucippus and Democritus propounded a materialistic theory of invisibly tiny irreducible atoms from which the world was made. In the 4th century B.C., Plato taught that the world of the senses was but the shadow of a mathematical world of “forms” beyond human perception. In contrast, Plato’s student Aristotle took the world of the senses seriously. Adopting Empedocles’s view that everything is composed of earth, water, air, and fire, Aristotle taught that each of these materials was a combination of qualities such as hot, cold, moist, and dry. For Aristotle, these “elements” were not building blocks of matter as we think today; rather, they resulted from the qualities imposed on otherwise featureless prime matter. Consequently, there were many different kinds of earth, for instance, and nothing precluded one element from being transformed into another by appropriate adjustment of its qualities. Thus, Aristotle rejected the speculations of the ancient atomists and their irreducible fundamental particles. His views were highly regarded in late antiquity and remained influential throughout the Middle Ages.
For thousands of years before Aristotle, metalsmiths, assayers, ceramists, and dyers had worked to perfect their crafts using empirically derived knowledge of chemical processes. By Hellenistic and Roman times, their skills were well advanced, and sophisticated ceramics, glasses, dyes, drugs, steels, bronze, brass, alloys of gold and silver, foodstuffs, and many other chemical products were traded. Hellenistic Alexandria in Egypt was a centre for these arts, and it was apparently there that a group of ideas emerged that later became known as alchemy.
2. Alchemy. In early times, chemistry was more an art than a science. The first forms of "chemical" knowledge were linked to the discovery of the properties of minerals and plants. Investigation of metals, plants and animals revolved around the preparation of dyes, pigments, cosmetics and drugs, which were used empirically by ancient man to satisfy their needs and their sense of curiosity about the world they lived in. This body of knowledge about the transformations of matter is known as "alchemy", and it often had mystical overtones.
The word alchemy was most probably derived from the Arabic word "alkimia" and may ultimately derive from the ancient Egyptian word "kmt" or "chem," meaning black, or from the Greek "chyma," meaning to fuse or cast a metal. The operations of craftsmen were often carried out to the accompaniment of religious or magical practices, and supposed connections were seen between metals, minerals, plants, planets, the sun, the moon, and the gods. The alchemist, through his work of transformation of matter, saw himself on a mystical path of personal elevation to the Transcendent, involving at the same time the social and the ethical dimension in a process of purification of the self in order to be more available to higher ideals. Alchemy is of a twofold nature, an outward or "exoteric" and a hidden or "esoteric." Exoteric alchemy is concerned with attempts to prepare a substance, the philosophers' stone, or simply the Stone, endowed with the power of transmuting the base metals lead, tin, copper, iron, and mercury into the precious metals gold and silver. The Stone was also sometimes known as the Elixir or Tincture, and was credited not only with the power of transmutation but with that of prolonging human life indefinitely. The belief that it could be obtained only by divine grace and favour led to the development of esoteric or mystical alchemy, and this gradually developed into a devotional system where the mundane transmutation of metals became merely symbolic of the transformation of sinful man into a perfect being through prayer and submission to the will of God. The two kinds of alchemy were often inextricably mixed; however, in some of the mystical treatises it is clear that the authors are not concerned with material substances but are employing the language of exoteric alchemy for the sole purpose of expressing theological, philosophical, or mystical beliefs and aspirations. Many clerics were alchemists. To Albertus Magnus, a prominent Dominican and Bishop of Ratisbon, is attributed the work "De Alchimia", though this is of doubtful authenticity. Several treatises on alchemy are attributed to St. Thomas Aquinas. He investigated theologically the question of whether gold produced by alchemy could be sold as real gold, and decided that it could, if it really possesses the properties of gold (Summa Theologiæ II-II, q. 77, a. 2). A treatise on the subject is attributed to Pope John XXII, who is also the author of a Bull Spondent quas non exhibent (1317) against dishonest alchemists.
3. The Development of Chemistry in the Islamic World. In the 7th century, the Arabs began a process of territorial expansion that quickly extended their empire and influence from India to Andalusia. Fruitful contacts with ancient cultural traditions were a natural consequence of this expansion, and Arabic culture proved ready to absorb and reinterpret much of the technical and theoretical innovation of previous civilizations. This was certainly the case with alchemy, which had been practiced and studied in ancient Greece and Hellenistic Egypt. Alchemy came to the Muslims originally from Alexandria, and Islam gradually appropriated the Greek alchemical authorities in toto. The transmission was made chiefly through direct contact in Alexandria and other Egyptian cities, and Nestorian Christians played a great part in translating Greek works into Arabic. The first Muslim to take an interest in alchemy was probably Khalid ibn Yazid (?-704), who seems to have been the first to order the translation of alchemical books from Greek and Coptic into Arabic. By the second half of the 8th century, Arabic knowledge of alchemy was already advanced enough to produce the Corpus Jabirianum, an impressively large body of alchemical works attributed to Jabir ibn Hayyan (c.721–c.815), known in the West as Geber. The Corpus, together with the alchemical works of Muhammad ibn Zakariyā Rāzī (865–925), marks the creative peak of Arabic alchemy. The success of Arabic alchemy certainly owed much to the multicultural milieu of Hellenistic Egypt, with its mixture of local, Hebrew, Christian, Gnostic, ancient Greek, Indian, and Mesopotamian influences.
The contribution of Arabic alchemists to the history of alchemy is profound. They excelled in practical laboratory work and offered the first descriptions of some of the substances still used in modern chemistry. Muriatic (hydrochloric) acid, sulfuric acid, and nitric acid are discoveries of Arabic alchemists, as are soda (al-natrun) and potash (al-qali). The words used in Arabic alchemical books have left a deep mark on the language of chemistry: besides the word alchemy itself, we see Arabic influence in alcohol (al-kohl), elixir (al-iksir), and alembic (al-inbiq). Moreover, Arabic alchemists perfected the process of distillation, equipping their distilling apparatuses with thermometers in order to better regulate the heating during alchemical operations. Finally, the discovery of the solvent later known as aqua regia (a mixture of nitric and muriatic acids) is reported to be one of their most important contributions to later alchemy and chemistry.
After the Middle Ages, among the most important of the European alchemists was the German-Swiss physician Paracelsus (1493-1541). He expanded on the Arabic doctrine that two principles, sulphur and mercury, were the roots of all things by adding a third principle, salt. Paracelsus also taught that the universe itself functioned like a cosmic chemical laboratory. God the Creator, he believed, was a divine alchemist whose macrocosmic drama was mirrored in the microcosmic world of man and earthly creatures. It followed that physiological and pathological processes were chemical in nature, and that disease was best treated by chemical medicines rather than by the herbal ones of the ancients. Paracelsus practiced alchemy, Kabbala, astrology, and magic, and in the first half of the 16th century he championed the role of mineral rather than herbal remedies. His emphasis on chemicals in pharmacy and medicine influenced later figures, and lively controversies over the Paracelsian approach raged around the turn of the 17th century. It is worth noting that open-minded empirical investigation well integrated with theory (which is how one might define science today) was not absent from the history of alchemy. Alchemy had many quite scientific practitioners through the centuries, notably including Britain’s Robert Boyle and Isaac Newton, who applied systematic and quantitative methods to their (mostly secret) alchemical studies. Indeed, as late as the end of the 17th century there was little to distinguish alchemy from chemistry, either substantively or semantically, since both words were applied to the same set of ideas. It was only in the early 18th century that chemists agreed upon different definitions of the two words, forever discrediting alchemy as pseudoscience.
4. The Chemical Revolution. It was through the contribution of Robert Boyle (1627-1691) that a revolution started to take place in chemistry, as it had already begun in physics with Galileo Galilei (1564-1642). Central to Boyle’s contribution was his corpuscularian hypothesis: according to Boyle, all matter consisted of varying arrangements of identical corpuscles. In his 1661 book ʺThe Sceptical Chymist,ʺ Boyle explains his hypothesis and dismisses Aristotle’s four-element theory, which had persisted through the ages. Boyle recognised that certain substances decompose into other substances (as water decomposes into hydrogen and oxygen when an electric current is passed through it) that cannot themselves be broken down any further. These fundamental substances he labelled ʺelements,ʺ and they could be identified by experimentation. By the late 18th century, the field of chemistry had fully separated from traditional alchemy while remaining focused on questions relating to the composition of matter. The chemist who transformed our understanding of elements and composition was Antoine Lavoisier (1743-1794). In 1789, Lavoisier wrote the first comprehensive chemistry textbook, and, together with Robert Boyle, he is often referred to as the father of modern chemistry. Lavoisier compiled a list of metallic and non-metallic elements that pointed towards the periodic table developed later by Mendeleev in 1869. The chemical revolution was not merely conceptual but also instrumental, in that it involved the practical ability to manipulate, weigh, and measure gases using accurate balances, glass apparatus, and the like. Around the beginning of the 19th century, the English Quaker John Dalton (1766-1844) began to wonder about the invisibly small ultimate particles of which each of these elemental substances might be composed. He reasoned that if the atoms of each of the elements were distinct, they must be characterised by a weight unique to each element. Dalton’s atomic theory was a landmark event in the history of chemistry. Subsequently, in 1869, Mendeleev proposed a way to organise the sixty or so elements known at the time, highlighting the periodic law: when elements are arranged according to the magnitude of their atomic weights, they display a step-like alteration in their properties, such that chemical analogues like the alkali metals (lithium, sodium, potassium, etc.) and the halogens (fluorine, chlorine, bromine, iodine) fall into natural groups. The turn of the 20th century was marked by a remarkable series of discoveries that gradually shed light on the structure of the atom. J.J. Thomson (1856-1940) first proved that atoms were not the most basic form of matter: he demonstrated that all atoms contained fundamental particles with a net negative charge that, in his apparatus, could be deflected by magnetic or electric fields. These particles are now called electrons and are most relevant in chemistry. Subsequently, Robert Millikan (1868-1953) determined the charge and the mass of a single electron. Shortly after Thomson’s discovery, Ernest Rutherford (1871-1937), with his famous experiment in which he fired α particles at a thin gold foil, showed that both the mass and the positive charge of the atom were concentrated in a tiny fraction of its volume, the nucleus; he called the positive particles protons. Niels Bohr (1885-1962) developed Rutherford’s atomic model by proposing that electrons moved around the nucleus in fixed circular orbits.
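Dalton’s inference from combining weights, mentioned above, can be illustrated with the law of multiple proportions (a standard textbook example, not drawn from this article). The masses of oxygen that combine with a fixed mass of carbon in its two oxides stand in a small whole-number ratio, exactly as expected if matter comes in discrete atoms of characteristic weight:

$$\mathrm{CO}:\ \frac{m_{\mathrm{O}}}{m_{\mathrm{C}}}=\frac{16}{12}, \qquad \mathrm{CO_2}:\ \frac{m_{\mathrm{O}}}{m_{\mathrm{C}}}=\frac{32}{12}, \qquad \text{so the oxygen masses stand in the ratio } 1:2.$$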
In 1926 Erwin Schrödinger (1887-1961) took Bohr’s model one step further, using mathematical equations to describe the likelihood of finding an electron in a certain position. In the current model of the atom, electrons occupy regions of space called ʺorbitalsʺ around the nucleus, distributed according to a set of principles described by quantum mechanics. Linus Carl Pauling (1901–1994) made a significant contribution to the understanding of how atoms come together to form chemical bonds and structures. He conducted pioneering studies of the magnetic properties of atoms and molecules and of the relation of electronegativity (the tendency of an atom to attract electrons in a bond) to the types of bonds that atoms form. To better explain the nature of covalent bonding, in which electrons are shared between bonded atoms, Pauling formulated the groundbreaking concepts of resonance and hybridization, which in turn provided chemists with a more robust theoretical basis for predicting new compounds and chemical reactions.
5. The Development of Organic Chemistry. At the beginning of the 19th century, the elements and compounds known to chemistry numbered only a few hundred; today, they number more than seventy-one million. Few of these substances actually exist in Nature; rather, they have been isolated, prepared, and studied by chemists in particular times and places through an evolving repertoire of laboratory practices and the development of organic chemistry. Prior to the 19th century, chemists generally believed that organic compounds found in living organisms were too complex to be synthesised and studied. During this period the concept of vitalism was widely accepted, according to which living organisms are fundamentally different from non-living entities, or are governed by principles different from those at work in inanimate things; chemists likewise believed that all organic compounds, unlike inorganic ones, possessed a vital force. In 1828, Friedrich Wöhler (1800-1882) synthesised the organic compound urea from the inorganic compound ammonium cyanate, challenging the idea of vitalism. Three decades later, the German chemist Friedrich August Kekulé (1829-1896) made a crucial contribution towards the birth of the new discipline by defining organic chemistry as the chemistry of carbon compounds, and by proposing not only that carbon atoms were tetravalent but also that they could bond to each other to form chains, comprising a molecular “skeleton” to which other atoms could cling. Kekulé’s theory of chemical structure clarified the compositions of hundreds of organic compounds and served as a guide to the synthesis of thousands more. The history of organic chemistry continued with the discovery of petroleum and its separation into fractions according to boiling ranges, which led to the petrochemical industry and later to the production of plastics. In the late 19th century, the pharmaceutical industry began with the synthesis of acetylsalicylic acid (aspirin). The great organic syntheses that followed deeply revolutionized chemistry and society: the imitation of nature led to the possibility of breaking with nature itself and surpassing it to form an artificial world.
6. The Molecules of Life. From the late 19th century up until the First World War, the focus of chemical research shifted significantly towards understanding the chemistry underpinning biological systems, leading to the birth of biochemistry. Biochemistry began with studies of substances derived from plants and animals and their classification into groups of biomolecules such as proteins, lipids, and carbohydrates. The German chemist Emil Fischer (1852-1919) in particular made a massive contribution by determining the nature and structure of many carbohydrates and proteins. By the end of the century, the role of enzymes as organic catalysts had been clarified, and amino acids had been identified as constituents of proteins. The announcement of the discovery of vitamins in 1912, independently by the Polish-born American biochemist Casimir Funk (1884-1967) and the British biochemist Frederick Hopkins (1861-1947), initiated a revolution in both biochemistry and human nutrition. Gradually, the details of intermediary metabolism, the way the body uses nutrient substances for energy, growth, and tissue repair, were unveiled. Perhaps the most representative example of this kind of work was that of the German-born British biochemist Hans Krebs (1900-1981), who established the tricarboxylic acid cycle, also known as the Krebs cycle, in the 1930s.
The most dramatic discovery in the history of 20th-century biochemistry, however, was surely that of the double-helix structure of DNA (deoxyribonucleic acid) in 1953 by the American geneticist James Watson (1928- ) and the British biophysicist Francis Crick (1916-2004); Rosalind Franklin (1920-1958) also made a great contribution to the discovery of the molecular structure of DNA. The new understanding of the molecule that encodes genetic information in the sequence of the four nucleotides (adenine, guanine, cytosine, and thymine) provided an essential link between chemistry and biology. In June 2000, representatives from the publicly funded U.S. Human Genome Project and from the privately held company Celera Genomics simultaneously announced the independent and nearly complete sequencing of the more than three billion nucleotides in the human genome. However, both groups emphasized that this monumental accomplishment was, in a broader perspective, only the end of a race to the starting line.
7. The Future of Chemistry. In the 19th century, the different scientific disciplines, including chemistry, came to have distinctive boundaries, with their own academic journals and professional societies. We have now reached a stage, in the 21st century, at which cultural historians are asking whether it any longer makes sense to speak of chemistry as a separate discipline, given the strongly collaborative character of research involving mathematicians, physicists, biologists, and engineers. So, what does the future of chemistry look like? Over the last two decades, innovation has mostly arisen at the boundaries of traditional subjects. Interdisciplinarity has become essential in tackling major technological and societal challenges and, more generally, a key factor in driving innovation. The chemical sciences will likely be increasingly required to solve challenges in health, energy and climate change, and water and food production. One of the most important trends is the relationship of chemistry to biology and their joint role in shaping the pharmaceutical industry. Engagement with the arts and social sciences might also play a key role in changing attitudes to design and consumption, with implications for future manufacturing processes and the use of natural resources. A decisive shift from blue-skies to problem-driven research seems set to mark the future of the chemical sciences.
III. Chemistry and Society
1. Chemistry and War. The Great War of 1914-1918 was the first conflict in which European chemists were involved in both defensive and offensive research, to the point that it is popularly known as ʺthe chemists’ warʺ because of the use of chemical warfare agents such as tear gas and lethal agents like phosgene, chlorine, and mustard gas. By analogy, the second world conflict (1939-1945) has been called ʺthe physicists’ warʺ because of the research effort around the making of the atomic bomb. In fact, chemists were also closely involved in the Second World War, in the separation of uranium isotopes and in the manufacture of heavy water, without which there would have been no such bomb.
In both world wars the chemical industry was dominated by the drive to improve and raise production levels of conventional high explosives and of metals for armaments. The petroleum soap (napalm) devised by Louis Fieser (1899-1977) killed more Japanese than the atomic weapons that destroyed Hiroshima and Nagasaki combined. Its use during the Korean War (1950-1953) and the prolonged Vietnam War (1955-1975) became a symbol of the evil of warfare and was responsible for a chemophobic swing against chemistry that has had a lasting effect on popular culture.
For centuries, weapons were fuelled by gunpowder. The situation changed drastically when Alfred Nobel (1833-1896) invented dynamite in 1867, a substance easier and safer to handle than the less stable nitroglycerin, which had been discovered by Ascanio Sobrero (1812-1888) in 1847. Nitrogen plays a central role in the manufacture of explosives. In 1908, Fritz Haber (1868-1934) filed a patent on the ʺsynthesis of ammonia from its elements,ʺ for which he was later awarded the 1918 Nobel Prize in Chemistry. Through this reaction, today known as the ʺHaber–Bosch process,ʺ ammonia, a chemically reactive, highly usable form of nitrogen, could be synthesized by reacting atmospheric nitrogen with hydrogen in the presence of iron at high pressures and temperatures. The importance of Haber’s discovery cannot be overestimated: as a result, millions of people have died in armed conflicts over the past 100 years, but, at the same time, billions of people have been fed. In his Nobel lecture, Haber explained that his main motivation for synthesizing ammonia from its elements was the growing demand for food, and the concomitant need to replace the nitrogen lost from fields through the harvesting of crops. In addition, the large-scale production of ammonia has facilitated the industrial manufacture of a large number of chemical compounds and many synthetic products. Thus the Haber–Bosch process, with its impact on agriculture, industry, and the course of modern history, has literally changed the world. What Fritz Haber could not foresee, however, was the environmental impact of his discovery, including the increase in water and air pollution, the perturbation of greenhouse-gas levels, and the loss of biodiversity that would result from the colossal increase in ammonia production and use. The invention boosted the production of fixed nitrogen, meeting the ever-growing demand for fertilizers as well as for raw material for explosives, which until then had relied on natural reservoirs of reactive nitrogen, particularly Peruvian guano, Chilean saltpeter, and sal ammoniac extracted from coal. Haber’s discovery fuelled the First World War by providing Germany with a home supply of ammonia, which was then oxidized to nitric acid and used to produce ammonium nitrate, nitroglycerine, TNT (trinitrotoluene), and other nitrogen-containing explosives. Since then, reactive nitrogen produced by the Haber–Bosch process has become the central foundation of the world’s ammunition supplies. At the same time, the process has facilitated the production of agricultural fertilizers on an industrial scale, dramatically increasing agricultural productivity in most regions of the world. It is estimated that today the lives of around half of humanity are made possible by nitrogen fertilizers produced via the Haber–Bosch process.
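The chemistry just described can be summarized in one line (the stoichiometry and the exothermicity are standard textbook facts, not quoted from this article): atmospheric nitrogen and hydrogen combine reversibly over an iron catalyst at high temperature and pressure,

$$\mathrm{N_2(g) + 3\,H_2(g) \;\rightleftharpoons\; 2\,NH_3(g)}, \qquad \Delta H^{\circ} \approx -92\ \mathrm{kJ\ per\ mole\ of\ N_2}.$$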
2. A World Made of Plastic: The Environmental Problem. Over the last century and a half, chemists have learned how to make synthetic polymers, using mainly petroleum and other fossil fuels. Polymers are essentially long chains of atoms arranged in repeating units. The length and the nature of such repeating units impart to synthetic polymers unique characteristics in terms of strength, flexibility, and weight that make them incredibly useful. During the last 50 years, plastics have become ubiquitous and an essential part of our lives. The first fully synthetic plastic, Bakelite, was invented in 1907, providing a material that was at once an electrical insulator, durable, and heat resistant. In 1935 Nylon was invented as a synthetic silk and was used during the war for parachutes, ropes, and much else, while Plexiglas became an alternative to glass. The production of plastic boomed in the United States during World War II and continued after the war ended, opening a new era in which plastics seemed to offer a future of abundant material wealth thanks to an inexpensive, safe, sanitary substance that humans could shape to their very whim. Already in the postwar years, however, there was a shift in social perception, and plastics were no longer seen as unambiguously positive. Plastic debris was first observed in the oceans in the 1960s, a decade also marked by major oil spills, and Rachel Carson’s 1962 book Silent Spring exposed the dangers of chemical pesticides. As awareness of environmental issues spread, the persistence of plastic waste began to trouble observers. In the 1970s and 1980s social anxiety about waste increased, with so many disposable plastic products lasting forever in the environment. Despite the introduction of recycling as a waste-management system, most plastics still end up in landfills or in the environment. The ultimate symbol of the problem of plastic waste is the Great Pacific Garbage Patch, which has been described as a swirl of plastic garbage the size of Texas floating in the Pacific Ocean. The reputation of plastics has suffered further owing to growing concern about the potential threat they pose to human health. These concerns focus on the additives, such as the much-discussed bisphenol A (BPA) and a class of chemicals called phthalates, that go into plastics during the manufacturing process, making them more flexible, durable, and transparent. Some scientists and members of the public are concerned about evidence that these chemicals leach out of plastics into our food, water, and bodies; in very high doses they can disrupt the endocrine (or hormonal) system. Researchers worry particularly about the effects of these chemicals on children and about what continued accumulation means for future generations. Despite growing mistrust, plastics are critical to modern life: they have made possible the development of computers, cell phones, and most of the lifesaving advances of modern medicine. Today scientists are continually developing safer and more sustainable plastics, such as bioplastics made from plant crops instead of fossil fuels, to create substances that are more environmentally friendly than conventional plastics.
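As a concrete instance of the repeating-unit idea just described (a standard textbook example, not drawn from this article), the addition polymerization of ethylene yields polyethylene, the most common plastic: a single small monomer is strung together n times into a long chain, and the chain length n is one of the handles chemists use to tune strength and flexibility:

$$n\,\mathrm{CH_2{=}CH_2} \;\longrightarrow\; \mathrm{-(CH_2-CH_2)_{\mathit{n}}-}$$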
The magisterium of Pope Francis has taken a clear position on the protection of the environment with the Encyclical Letter Laudato Si’. In the document the Pope acknowledges the enormous benefits that the sciences have brought to society, rejoicing in the advancements of technology (Laudato Si’, 102). At the same time, though, he urgently appeals for a ʺnew dialogue about how we are shaping the future of our planetʺ (…) and calls for ʺa conversation which includes everyone, since the environmental challenge we are undergoing, and its human roots, concern and affect us allʺ (Laudato Si’, 14).
3. Towards a Green Chemistry. Chemistry has long been perceived as a dangerous science, and the public often associates the word ʺchemicalʺ with ʺtoxic.ʺ Over the past three decades, a new awareness of man’s ability to harness chemical innovation while meeting urgent environmental and sustainable economic goals has emerged, giving rise to Green Chemistry. Green Chemistry can be defined as the ʺdesign of chemical products and processes to reduce or eliminate the use and generation of hazardous substances.ʺ The central idea of Green Chemistry is that of ʺdesign,ʺ understood as a statement of human intention to achieve sustainability, from the molecular level up to entire industrial sectors. From aerospace, automobiles, cosmetics, electronics, energy, household products, and pharmaceuticals to agriculture, there are today hundreds of examples of successful applications of economically competitive technologies. The concept of Green Chemistry has moved beyond the research laboratory, making an impact on industry, education, the environment, and the general public. Chemists have increasingly been able to design next-generation products and processes that are profitable while being good for human health and the environment. A list of Twelve Principles of Green Chemistry has been proposed as a cohesive system for reducing or eliminating the intrinsic hazards associated with chemicals and processes. Although a great deal of work has been done to advance Green Chemistry around the world, it remains an area of great potential in the face of the ecological crisis humanity is facing.
4. Women in Chemistry. Women have contributed to the chemical sciences since the age of alchemy, but for centuries they did so largely unseen and unheard. In the 19th and much of the 20th century, women who pursued careers in chemistry often faced intense discrimination and were allowed only ancillary roles in the laboratory. Nowadays women are gaining more prominence in the chemical fields. So far five women have received the Nobel Prize in Chemistry. The French-Polish Marie Sklodowska Curie (1867-1934) was the first female Nobel Prize winner and also the first person in history to receive the prestigious award twice: in 1903 in physics for her work on radioactivity, together with her husband Pierre Curie and Henri Becquerel, and in 1911 in chemistry for the discovery of the elements radium and polonium. Marie Sklodowska Curie was an extraordinary woman of acute intellect as well as of strong social and political engagement; her figure has inspired generations of scientists and is today remembered in the names of universities, research institutes, and charities all over the world. In 1935 Irène Joliot-Curie (1897-1956), Marie Curie’s daughter, also received the Nobel Prize, in recognition of her work on the synthesis of new radioactive elements. Dorothy Crowfoot Hodgkin (1910-1994) was awarded the prestigious prize in 1964 for the determination of the structures of penicillin and vitamin B12. In recent years, further female Nobel laureates in chemistry have been Ada Yonath (1939- ), for the successful mapping of the structure of the ribosome (2009), and Frances H. Arnold (1956- ), for the directed evolution of enzymes (2018). Recently the Bank of England opened nominations for a figure from science to adorn the new £50 note; among the roughly 200 women put forward, the chemists nominated include the DNA crystallographer Rosalind Franklin (1920-1958) and the protein pioneer and Nobel laureate Dorothy Hodgkin (1910-1994).
IV. Philosophical Questions Emerging from the Chemical Sciences
1. Chemistry and the Origin of Life. The question of the origin of life on our planet is a deeply fascinating one for chemistry: how did complex systems of chemical reactions on the prebiotic Earth lead to living organisms? The issue poses a complex set of philosophical questions, above all that of the functions an organism needs in order to be called ʺliving.ʺ At least three key features are considered essential: i) ʺcompartmentalisation,ʺ to maintain an internal environment (the presence of a cell wall); ii) ʺmetabolism,ʺ to turn external resources (food) into energy (catabolism) and new components of the organism (anabolism); and iii) ʺself-replication,ʺ to reproduce the living organism, in terms of both the means and the information needed for this purpose (proteins and DNA). In 1952 the American chemists Stanley Miller (1930-2007) and Harold Urey (1893-1981) performed a landmark experiment simulating the atmosphere of the early Earth with a mixture of water, methane, ammonia, and hydrogen. By passing a spark through the gaseous mixture, a host of organic and inorganic molecules was produced, including several of the amino acids found in biological proteins. At the time, proteins were viewed as the main component of cellular systems, and this simple experiment showed for the first time that a ʺprimordial soupʺ model for life’s origins, in which complex systems of reactions could lead to the synthesis of ever more complex molecules, might be more than speculation. Only a year after Miller and Urey’s experiment, Francis Crick (1916-2004) and James Watson (1928- ) identified the molecular structure of DNA. This paved the way to the unveiling of life’s genetic code, which was now seen as tied up with nucleic acids, polymers of nucleotide units. Perhaps life could have begun with DNA’s precursor, RNA? RNA could have provided the information storage needed to get the first living organism started. However, this created a philosophical paradox: the synthesis and replication of RNA is carried out by proteins, but the structure of the proteins is encoded by RNA. Leslie Orgel (1927-2007) first proposed that RNA could not only store information but also catalyze the chemical reactions needed to make itself (the RNA world hypothesis). Perhaps short sequences of RNA in the primordial soup could have catalyzed the synthesis of identical sequences; that is, they could self-replicate. In time, through slow evolution and the building up of new functions, the information-carrying role of RNA would be taken over by the more stable DNA, and the duty of replication by the more catalytically versatile proteins. However, ever since the early days of the RNA world hypothesis, the synthesis of RNA from simple building blocks has proved extremely challenging to chemists. Even more challenging is to explain the emergence of the genetic code, and of life itself, as merely the outcome of a sophisticated interaction of millions of molecules (enzymes, nucleic acids, metabolites, etc.) within the components of a cell. Michael Polanyi (1891-1976), in his excellent and timeless 1968 article in Science, ʺLife’s irreducible structure,ʺ looks at the implications of the genetic code and its physical indeterminacy, pointing out how life transcends physics and chemistry.
2. A Philosophy of Chemistry. Chemistry has justly been called the central science. Given the unique place that chemistry occupies between physics and biology in the traditional hierarchy of the natural sciences, the discipline has attracted increasing philosophical attention over time. Chemistry has traditionally been, and continues to be, the science concerned with the nature of the elements, of substance, and indeed of matter itself, all traditional philosophical questions. The philosophy of chemistry has thus gradually emerged as an important area of study within the philosophy of science in its own right. Chemistry poses some very specific issues that have been argued to be worthy of dedicated philosophical attention, one of the most compelling being the reduction of chemistry itself to physics. A ʺquantitative reductionʺ through quantum theory and ab initio calculations is often pursued to predict quantitative properties such as the energies of molecules or bond angles. However, the Schrödinger equation upon which such calculations are based possesses an exact solution only in the case of the hydrogen atom, suggesting that full reduction is unattainable for even a small molecule. ʺConceptual reductionʺ instead attempts to reduce chemical concepts such as composition, bonding, and molecular structure to physical ones. According to some philosophers of science, such a reduction is impossible in principle, owing to the very nature of the concepts themselves; concepts such as composition, bonding, and molecular structure cannot be expressed except at the chemical level.
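The quantitative-reduction programme referred to above rests on the time-independent Schrödinger equation; written schematically for a molecule (a standard textbook form, not taken from this article), it makes plain why only the one-electron hydrogen atom admits an exact analytical solution: every electron and nucleus interacts with every other, so the equation does not separate for larger systems and must be solved approximately:

$$\hat{H}\,\Psi = E\,\Psi, \qquad \hat{H} = -\sum_{i}\frac{\hbar^{2}}{2m_{i}}\nabla_{i}^{2} \;+\; \sum_{i<j}\frac{q_{i}\,q_{j}}{4\pi\varepsilon_{0}\,r_{ij}},$$

where the sums run over all electrons and nuclei, $q_i$ are their charges, and $r_{ij}$ their mutual distances.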
3. The Chemical Structure as an Irreducible Structure. Chemistry occupies a central position in epistemology. The chemical level is the first in science at which new aspects of the real emerge out of complexity: molecules. Molecules represent the simplest persistent entities whose extraordinary richness and diversity require a specific science. In other words, when chemical elements react together to form a new entity, the compound, new properties emerge that are not in a direct and simple relation to those of the components. In fact, the molecular formula, which indicates simply the number of each type of atom in a molecule, can very rarely give us any reasonable idea of the chemical reactivity of the compound. It is the molecular structure, the way the atoms are connected to one another through chemical bonds in space, that determines the properties of the compound and its reactivity. The molecular structure therefore represents the emergent property of the molecule as a complex system. The fact that a molecule is made up of atoms is not denied, but the idea that the molecule is just a bunch of atoms cannot be accepted without reservations. The emergence of new properties is manifested also in the interaction of many molecules, as is evident in supramolecular chemistry.
4. The Periodic Table and the Intelligibility of Nature. The United Nations proclaimed 2019 the International Year of the Periodic Table of Chemical Elements (IYPT 2019) to mark 150 years since its development by Dmitri Mendeleev. The Periodic Table of the Elements represents one of the most significant achievements in science and a uniting scientific concept: a symbolic representation and an accurate map of our knowledge of the universe. It reveals two very important characteristics of Nature: its intelligibility and its relatedness. The possibility for human beings to identify the fundamental chemical elements and cast them systematically into a table organized in rows and columns reveals the intelligible character of nature. At the same time, the Table displays the relational aspect of nature, not only because the elements are made up of common constituents (protons, electrons, and neutrons) but also because the elements are organized in patterns: periodically (horizontally) by atomic number (the number of protons in each nucleus), and vertically in groups according to the number of electrons in their outer orbitals (hence elements with similar reactivity).
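The vertical regularity just mentioned can be made concrete with the alkali metals (a standard textbook illustration, not drawn from this article): each element’s electron configuration ends in a single outer s electron, which is why the whole group shows the same characteristic reactivity:

$$\mathrm{Li}: 1s^{2}\,2s^{1} \qquad \mathrm{Na}: 1s^{2}\,2s^{2}\,2p^{6}\,3s^{1} \qquad \mathrm{K}: 1s^{2}\,2s^{2}\,2p^{6}\,3s^{2}\,3p^{6}\,4s^{1}$$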
Documents of the Catholic Church related to the subject: FRANCIS, Encyclical Letter Laudato Si’ (24 May 2015).
Bibliography: P. T. ANASTAS, J. C. WARNER, Green Chemistry: Theory and Practice (Oxford: Oxford University Press, 2000); G. BACHELARD, Le pluralisme cohérent de la chimie moderne (Paris: Vrin, 1973); W. H. BROCK, The History of Chemistry: A Very Short Introduction (Oxford: Oxford University Press, 2016); M. BUNGE, Method, Model and Matter (Dordrecht-Boston: D. Reidel, 1973); L. CERRUTI, Bella e Potente: La Chimica del Novecento fra Scienza e Società (Roma: Editori Riuniti, 2003); F. DAGOGNET, Tableaux et langages de la chimie (Paris: Éditions du Seuil, 1969); G. DEL RE, "Technology and the Spirit of Alchemy," Hyle 3 (1997), pp. 51-63; G. DEL RE, "The Specificity of Chemistry and the Philosophy of Science," in V. Mosini (ed.), Philosophers in the Laboratory (Roma: Editrice Universitaria, 1994), pp. 11-20; R. HOFFMANN, "Molecular Beauty," Journal of Aesthetics and Art Criticism 48 (1990), n. 3, pp. 191-204; E. J. HOLMYARD, Alchemy (New York: Dover Publications Inc., 1990); J. M. LEHN, Supramolecular Chemistry: Concepts and Perspectives (Weinheim: Wiley-VCH, 1995); A. PALERMO, "The Future of the Chemical Sciences," Chemistry International 40 (2018), n. 3, pp. 4-6; M. POLANYI, "Life's Irreducible Structure," Science 160 (21 June 1968), n. 3834, pp. 1308-1312; I. PRIGOGINE, From Being to Becoming: Time and Complexity in the Physical Sciences (New York: W. H. Freeman & Co, 1980); J. REARDON-ANDERSON, The Study of Change: Chemistry in China, 1840-1949 (Cambridge: Cambridge University Press, 2003); E. SCERRI, G. FISHER, Essays in the Philosophy of Chemistry (Oxford: Oxford University Press, 2016); E. SCERRI, L. MCINTYRE, "The Case for the Philosophy of Chemistry," Synthese 111 (1997), n. 3, pp. 213-232; S. TOULMIN, J. GOODFIELD, The Architecture of Matter (London: Hutchinson, 1962); G. VILLANI, La Chiave del Mondo (Napoli: CUEN, 2001).
70a200854013e59c | Toward a more symmetric relation between space and time in non-relativistic quantum mechanics
Seminars | Friday, October 06, 2017 | 15:30:00
Fernando Parisio
In our tangible world, neither position nor time can be determined with arbitrary precision. Despite this evident experimental fact, in quantum theory we routinely refer to the probability of measuring a particle between positions x and x+dx exactly at the instant t, but never to the probability of detecting it during the time interval [t, t+dt] exactly at the position x. The latter question is clearly a well-posed one, and this space-time asymmetry has nothing to do with the lack of Lorentz covariance of the Schrödinger equation. In this talk, we first present an extended non-relativistic quantum formalism in which space and time play equivalent roles, leading to the probability of finding a particle between x and x+dx during [t, t+dt]. We then find a Schrödinger-like equation for a “mirror” wave function φ(t,x) associated with the probability of measuring the system between t and t+dt, given that detection occurs at x. Several possible experimental consequences of this proposal will be discussed.
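To make the asymmetry explicit: the standard Born rule fixes the instant and distributes probability over position, while the proposal sketched in the abstract supplements it with a density over detection times at fixed position. Schematically (the second expression is an assumed notation following the abstract’s φ(t,x), not a formula quoted from the talk):

$$P\big(x \in [x, x+dx]\ \text{at fixed } t\big) = |\psi(x,t)|^{2}\,dx, \qquad P\big(t \in [t, t+dt]\ \text{at fixed } x\big) = |\varphi(t,x)|^{2}\,dt.$$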